#include <sys/ddi.h>
#include <sys/sunddi.h>

struct buf *ddi_umem_iosetup(ddi_umem_cookie_t cookie, off_t off,
    size_t len, int direction, dev_t dev, daddr_t blkno,
    int (*iodone)(struct buf *), int sleepflag);
Solaris DDI specific (Solaris DDI)
The kernel memory cookie allocated by ddi_umem_lock(9F).
Offset from the start of the cookie.
Length of the I/O request in bytes.
Must be set to B_READ for reads from the device or B_WRITE for writes to the device.
Device number
Block number on device.
Specific biodone(9F) routine.
Determines whether caller can sleep for memory. Possible flags are DDI_UMEM_SLEEP to allow sleeping until memory is available, or DDI_UMEM_NOSLEEP to return NULL immediately if memory is not available.
The ddi_umem_iosetup(9F) function is used by drivers to set up I/O requests to application memory that has been locked down using ddi_umem_lock(9F).
The ddi_umem_iosetup(9F) function returns a pointer to a buf(9S) structure corresponding to the memory cookie cookie. Drivers can have multiple buffer structures simultaneously active using the same memory cookie. The buf(9S) structures can span all or part of the region represented by the cookie and can overlap each other. The buf(9S) structure can be passed to ddi_dma_buf_bind_handle(9F) to initiate DMA transfers to or from the locked down memory.
The off parameter specifies the offset from the start of the cookie. The len parameter represents the length of region to be mapped by the buffer. The direction parameter must be set to either B_READ or B_WRITE, to indicate the action that will be performed by the device. (Note that this direction is in the opposite sense of the VM system's direction of DDI_UMEMLOCK_READ and DDI_UMEMLOCK_WRITE.) The direction must be compatible with the flags used to create the memory cookie in ddi_umem_lock(9F). For example, if ddi_umem_lock() is called with the flags parameter set to DDI_UMEMLOCK_READ, the direction parameter in ddi_umem_iosetup() should be set to B_WRITE.
The dev parameter specifies the device on which the buffer is to perform I/O. The blkno parameter represents the block number on the device. It will be assigned to the b_blkno field of the returned buffer structure. The iodone parameter enables the driver to identify a specific biodone(9F) routine to be called by the driver when the I/O is complete. The sleepflag parameter determines if the caller can sleep for memory. DDI_UMEM_SLEEP allocations may sleep but are guaranteed to succeed. DDI_UMEM_NOSLEEP allocations do not sleep but may fail (return NULL) if memory is currently not available.
After the I/O has completed and the buffer structure is no longer needed, the driver calls freerbuf(9F) to free the buffer structure.
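As a hedged illustration of the sequence described above (not an example from the manual page), a driver routine might use the function roughly as follows. The cookie is assumed to come from an earlier ddi_umem_lock(9F) call made with DDI_UMEMLOCK_READ, and xx_dev, xx_blkno, len and xx_iodone are hypothetical driver-local names:

```c
/*
 * Sketch only: assumes "cookie" was obtained earlier via
 * ddi_umem_lock(9F) with DDI_UMEMLOCK_READ, and that xx_dev,
 * xx_blkno, len and xx_iodone are hypothetical driver-local names.
 * Error handling is abbreviated.
 */
struct buf *bp;

bp = ddi_umem_iosetup(cookie,   /* cookie from ddi_umem_lock(9F) */
    0,                          /* offset 0: start of the locked region */
    len,                        /* length of the transfer in bytes */
    B_WRITE,                    /* memory-to-device; pairs with DDI_UMEMLOCK_READ */
    xx_dev, xx_blkno,
    xx_iodone,                  /* biodone(9F) completion routine */
    DDI_UMEM_NOSLEEP);          /* do not sleep; may return NULL */

if (bp == NULL)
    return (ENOMEM);            /* no memory available right now */

/* bp can now be passed to ddi_dma_buf_bind_handle(9F) to start the DMA. */

/* ... later, after the I/O completes and bp is no longer needed: */
freerbuf(bp);
```

Because DDI_UMEM_NOSLEEP is used, the sketch must handle a NULL return; with DDI_UMEM_SLEEP the allocation would be guaranteed to succeed but could only be made from user or kernel context.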
The ddi_umem_iosetup(9F) function returns a pointer to the initialized buffer header, or NULL if no space is available.
The ddi_umem_iosetup(9F) function can be called from any context only if sleepflag is set to DDI_UMEM_NOSLEEP. If DDI_UMEM_SLEEP is set, ddi_umem_iosetup(9F) can be called from user and kernel context only.
ddi_umem_lock(9F), ddi_dma_buf_bind_handle(9F), freerbuf(9F), physio(9F), buf(9S)
C# lambdas: How much context should you need?
I had an interesting discussion with a colleague last week about the names that we give to variables inside lambda expressions, which got me thinking about how much context we should need to hold when reading code like this.
The particular discussion was around an example like this:
public class Foo
{
    private String bar;
    private String baz;

    public Foo(String bar, String baz)
    {
        this.bar = bar;
        this.baz = baz;
    }

    public override string ToString()
    {
        return string.Format("{0} - {1}", bar, baz);
    }
}
var oneFoo = new Foo("bar", "baz");
var anotherFoo = new Foo("otherBar", "otherBaz");

new List<Foo> { oneFoo, anotherFoo }
    .Select(foo => foo.ToString().ToUpper())
    .ToList()
    .ForEach(Console.WriteLine);
I suggested that we could just replace the ‘foo’ with ‘x’ since it was obvious that the context we were talking about was applying a function on every item in the collection.
My colleague correctly pointed out that by naming the variable 'x', anyone reading the code would need to read more code to understand that x was actually referring to every 'Foo' in the collection. In addition, naming the variable 'x' is quite lazy and maybe as bad as naming normal variables x, y and z (unless they're loop indexes), since it is completely non-descriptive.
The only real argument I can think of for having it as 'x' is that it makes the code a bit more concise. For this particular example I also had to change the name of my first Foo to 'oneFoo' so that I could use the variable name 'foo' inside the block, since other variables in the same method are accessible from the closure.
I’m not sure what the good practice is in this area. I’ve done a little bit of work with Ruby closures/blocks and the convention there seemed to be that using single letter variables for blocks was fine.
In this case the extra context wouldn’t be that great anyway but I think trying to keep the necessary context that someone needs to remember as small as possible seems to be a reasonable rule to follow.
About the author
Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database. | http://markhneedham.com/blog/2008/12/27/c-lambdas-how-much-context-should-you-need/ | CC-MAIN-2018-17 | refinedweb | 373 | 54.46 |
Recently I have been investigating build time and responsiveness in Visual Studio solutions.
Here were my findings…
Solution with 100 projects each with a single class
I set up a solution with 100 class projects with a single class in each project and did a clean build on the solution. The average build time per project was 00:00.2 units, leaving me with a total build time of 00:16.0 units. Not terrible, but remember these projects are about as small as they can get – this is the best case with this scenario.
Solution with one project and 100 classes in a single namespace
The next step was to establish where the load was on the build: would I get similar results with a solution containing one project with 100 classes? Again I set up a solution with one project with 100 classes (exact same content in the classes) – total build time 0:00.17 units, significantly faster.
Solution with one project and 100 classes in separate namespaces
Still not sure whether I would see a difference with multiple namespaces in one solution, I made another solution with one single project and 100 different classes – each in its own unique namespace... nada, still getting performance around 0:00.17 units, significantly faster than one solution with 100 projects with one class per project.
Graphically shown, it tells its own story:
Build Times
Conclusion
Keep as few projects as possible in a solution. It is nice to have projects as a physical separation of units of code, but if you plan on keeping them in the same solution you are opening yourself up to build time pain.
I would suggest using as few projects as possible in a solution. Approach A would be to separate the solution into multiple solutions (keeping your project count per solution as low as possible). If you don't like the multiple-solution idea, you can always take approach B and have a one-solution strategy with multiple namespaces within the single solution to isolate things and show the boundary line of what goes where.
With approach B it will require some discipline and team consensus from your developers, but I am sure they would prefer this to losing hours of their day building a solution. And that's my take on things – by all means do the hardware tweaks of setting up RAM drives etc. – but fundamentally, rather avoid the problem totally. If I have missed something obvious or you know of an awesome way to get past this problem, please leave a comment!
If you would like to do your own investigation you can download the test solutions from here and run your own scenarios. | http://blog.markpearl.co.za/VS2010-Large-Solution-Bottle-Neck | CC-MAIN-2019-04 | refinedweb | 452 | 64.75 |
Hi,

Thanks for the review. Some more background:

On Sat 24 Apr 2010 18:55, address@hidden (Ludovic Courtès) writes:

> Regarding the single module/variable name space, I was convinced that it
> was annoyingly constraining module names; however, as illustrated at
> <> that constraint
> has vanished somehow, making it less of a problem IMO.

It still does constrain module names, but only if you have a local
variable of the same name. If I even do (define bar ...) in my '(foo)
module, I can't define a '(foo bar) module. This is undocumented, leads
to poorly-understood errors, and is just wrong ;-)

> Besides, you said at
> <> that this is “not
> something we can change right now.”. :-)

I realized that we could change it in a backwards-compatible way. My
experience points have allowed me to level up in Refactoring ;)

> So, apologize if that should be obvious to me, but can you please
> re-explain why this is important?

It's important to me to get things right, and if we can do that without
downsides, then we should. Beyond that, I don't want to merge in r6rs
support that has hacks around the module system (cf recent thread about
(rnrs) and (rnrs syntax-case)). The version comparison code is already
complicated enough as it is; I would rather not maintain hacks in the
future.

> To app or not to %app?

So, this question is a slightly different one ;)

It used to be that there was an (app modules ...) hierarchy, traversed
through the normal value namespace. The root was the module known as
(guile), and it had a binding for `app'. Module resolution was
implemented in resolve-module to resolve with respect to (guile), via
"full names" -- so one would resolve, for example, (app modules guile),
and that found the root of the hierarchy via the `app' binding in the
(guile) module. This provided the illusion of a separate root, at (app),
or even above it, but that was never the case, because
(resolve-module '(guile)) = (resolve-module '(guile app modules guile)).
At some point it was realized that if you had a script, if you defined a
variable named "app", you were screwed -- because the (guile-user) module
shared an obarray with (guile), and that would overwrite the binding for
"app", overwriting the root of the module resolution tree. So, app was
changed to %app. Only there was some code out there that did
(nested-ref the-root-module '(app module ...)), so "app" had to stay
around -- but there was no deprecation mechanism to encourage users to
switch to %app.

> commit 30ce621c5ac9b67420a9f159b2195f6cd682e237
> Author: Andy Wingo <address@hidden>
> Date: Thu Apr 22 15:08:13 2010 +0200
>
> (app modules) -> (%app modules)

This kind of commit is simply to switch away from the deprecated "app"
to "%app".

Now, "%app" is still something of a hack; instead of doing that weird
thing in resolve-module where resolving '(foo) actually resolved
'(app modules foo), why not make resolve-module close over the module
formerly addressed as (app modules), and just resolve modules relative
to that new root, the module that is returned when doing a
(resolve-modules '() #f) ?

Then we can deprecate the "%app" binding, and never need to document it.

> commit cb67c838f5658a74793f593c342873d32e4a145c
> Author: Andy Wingo <address@hidden>
> Date: Thu Apr 22 15:25:09 2010 +0200
>
> deprecate %app

That's what we're up to here. So, we've gotten rid of %app, but old code
will still work if --enable-deprecated is used.

Now that we've simplified the resolve-module mechanism, we can look at
separating the value namespace from the module namespace. Basically we
add module-ref-submodule and module-define-submodule!, which will in the
future access that second namespace but whose initial definition is in
terms of module-ref. Then we make nested-ref etc traverse submodules via
module-ref-submodule, and when they reach the last element in the nested
name they use nested-ref. We also define nested-ref-module, which uses
module-ref-module for the last element.
Then eventually we add a submodules array to modules themselves, switch
module-ref-submodule to use that, and voila separated namespaces, with
back-compat in deprecated.scm.

> diff --git a/libguile/modules.c b/libguile/modules.c
> index ccb68b7..c8a4c2a 100644
> --- a/libguile/modules.c
> +++ b/libguile/modules.c
> @@ -44,7 +44,16 @@ scm_t_bits scm_module_tag;
>
> static SCM the_module;
>
> +static SCM module_make_local_var_x_var;
>
> Please keep comments above variables, or add one for all of them (info
> "(standards) Comments").

OK, will do.

> There also needs to be the C counterpart:
>
> #define scm_module_index_import_public_interface 9
>
> #define SCM_MODULE_PUBLIC_INTERFACE(module) \
>   SCM_PACK (SCM_STRUCT_DATA (module)
>             [scm_module_index_public_interface])
>
> and change ‘scm_module_public_interface ()’ to use that instead of
> ‘scm_call_1 (...)’.

This won't work in general, because in the case in which you have
deprecated code enabled, you still need to go through the Scheme
function (overridden by ice-9 deprecated).

Regarding the indices, only a subset of module's fields currently have
record indices and accessors defined. Given that these would be new C
interfaces, I would prefer to avoid defining them unless there were a
need.

> Finally there needs to be ‘scm_set_module_public_interface_x ()’ as
> discussed at <>.

I'm not sure; Lilypond seems to be using this interface as:

  mod = scm_call_0 (maker);
  scm_module_define (mod, ly_symbol2scm ("%module-public-interface"), mod);

Which is a verbose (and somewhat incorrect) way to say, "export
everything in this module". Perhaps, given that this would be a new
interface, we should just provide a more appropriate interface --
module-export-all! or something.

Andy
--
Heaps of people have been tweeting me about the status of Grails In Action 2.0 – so it’s time to blog up a bit about how things are progressing.
We have about five chapters in bed so far (including a reworked Groovy introduction, a new chapter on Scaffolding, and several of the original "intro" chapters from Parts I and II of the book). Our current plan is to square off a few more chapters in Part II, then set it free to the Manning MEAP programme. We're keen to get some good value in place before we unleash it on you all. Not being perfectionists, just making sure that when you get it, it's worth the wait!
The good news is that Peter and I are presently sinking a good amount of time into writing and the pace of things is definitely picking up now! I would expect you’d see a MEAP sometime in August.
One of the things that has pleased me about the update process is that I've definitely improved as a Grails programmer in the last few years! There's stuff in First Ed that was probably a bit hacky, and I'll do my best to tidy that stuff up as I rework the chapters. I remember listening to a great keynote by David Heinemeier Hansson on "Legacy Code and Code Rot" – (can't find the MP3 link, but it might be this one) and him saying that it's not that your code rots, it's that you're a different person than when you wrote it! Great concept! I'm noticing it a lot in the update.
We’ve made the move to Spock for this version of the book – starting from Chapter 3 – and you’re gonna love it! Peter and I work in it exclusively now, and asking around twitter it seems a lot of working Grails devs have made the switch too. IMHO it makes our examples much clearer to the beginner too. Take our chapter 3 Listing 3.3 example explaining how to test property updates. Here’s the before and after (JUnit vs Spock):
void testSaveAndUpdate() {
    def user = new User(userId: 'joe', password: 'secret', homepage: '')
    assertNotNull user.save()

    def foundUser = User.get(user.id)
    foundUser.password = 'sesame'
    foundUser.save()

    def editedUser = User.get(user.id)
    assertEquals 'sesame', editedUser.password
}
Which is replaced by:
def "Updating a saved user changes its properties"() {
    given: "An existing user"
    def existingUser = new User(userId: 'joe', password: 'secret',
                                homepage: '').save(failOnError: true)

    when: "A property is changed"
    def foundUser = User.get(existingUser.id)
    foundUser.password = 'sesame'
    foundUser.save()

    then: "The change is reflected in the database"
    User.get(existingUser.id).password == 'sesame'
}
Personally, I think Spock really does make the intention of the test much clearer to the new guy on the project.
We’ll keep you posted. I can’t imagine it would be much more than a month before we release a serious chunk of chapters to MEAP. We’re working hard to keep the same fun tone of the original book, but making sure we spend a lot more time focusing on “Best Practice” type issues to incorporate the stuff we’ve learned over the last few years in Grails consulting and commercial development.
Thanks everyone for your encouragement and support!
Looking forward to the 2.0 book, Glen! I hope you will provide more guidance on two related areas that have been challenging for me: maven integration (especially resolving dependency conflicts) and plugin-oriented architecture. They’re related because the more plugins I try to integrate, the more dependency conflicts pop up. The online documentation, while useful as a starting point, doesn’t even begin to cover the problems that emerge when using Grails to build the presentation tier of a complex enterprise app. thanks! | http://blogs.bytecode.com.au/glen/2012/07/18/grails-in-action-2-0-the-great-spock-update.html | CC-MAIN-2013-20 | refinedweb | 641 | 71.65 |
User talk:Rcmurphy/archive3
From Uncyclopedia, the content-free encyclopedia
Apologia
RC, I wanted to apologize for voting twice on Rocky Mountain Oysters. I honestly didn't remember
DAMMIT! You KNEW he'd catch you! And you did it anyway! DUMB, DUMB, DUMB.
having voted on it before, and of course my ID is so generic, it's not surprising that I'd just
Now what's he gonna think? Cheater. Ooooh yeah, you've really put your foot in it now. That's just great.
completely miss it! Anyway, would you mind if I just deleted the spurious entry completely?
--Some user 04:22, 20 January 2006 (UTC) Way to go, sooper genius.
- I removed it. --—rc (t) 04:56, 20 January 2006 (UTC)
- And I'll remove your face too if it happens again...
Meta
I was kicking around ideas for Uncyclopedia: namespace logo with Algo, and we had a thought: Image:Puzzlemeta.png (since it is a namespace about uncyc). Then I had a thought that he hated, Uncyclopedia_talk: namespace -> Image:Puzzlemetameta.png. These are just 'demo' images at 5am and they suck. If you liked the idea you could make them better and not so suck. Also aglo likes waffles. --Splaka 12:43, 21 January 2006 (UTC)
- Looks good to me, except the talk: one might be a bit too subtle. With the new namespaces (I don't know if we'd want to make logos for them, but we might) I decided to start a potential logo page so that people can add their own for consideration. --—rc (t) 19:38, 21 January 2006 (UTC)
Tx:
The only thing left to make Tx: ready to go is to substitute in the most recent "Recent Articles". I have done most of them and I left two spots for new ones, which can be either filled or just deleted if I am not around whe you want to put it up. ---
Rev. Isra (talk) 00:16, 24 January 2006 (UTC)
As the Template:Recent itself is meta data, couldn't you just incorporate it? Reformatting the code in Template:Recent_meta to plain text? --Splaka 00:33, 24 January 2006 (UTC)
Congrats
For your psychobabble skillz, and
For excellent Uncyclopedia trivia knowledge.
Woof
I hope I'm not being a pest, but if it's not too much trouble, could you please restore the edit history of Woof as BobBobBob did with Meow? Thanks again for restoring the article.--Naughtyned 20:12, 29 January 2006 (UTC)
- Absolutely not.
- Er, I mean sure. I should have done that in the first place, I just wasn't thinking. --—rc (t) 20:16, 29 January 2006 (UTC)
- Thanks again and I get what you mean about Todd "not [being] around right now." 1 Who'd have thought that one of those silly templates at the top of a page would actually have valuable information? :-) --Naughtyned 03:42, 31 January 2006 (UTC)
heh
<rcmurphy> Later fellows. * rcmurphy (i=rcmurphy@x1x7-15x-112-xx5.auto.) has left #uncyclopedia <miss> adios <miss> I can't stand that admin anyways <miss> I can say that since no one's listening anyway <miss> fucking asshole
--Splaka 06:31, 30 January 2006 (UTC)
- Oh, good. I was beginning to get a reputation as a friendly admin. --—rc (t) 15:08, 30

MediaWiki:Rcmurphy/ebay.css -> MediaWiki:Skin/EBay.css (You can delete the redirect when you wish.)
-_- why me?
People are fucking with my articles again, I checked the VFD and QVFD, and the Category "sandwhich wars", keeps getting deleted, it happened twice today.
- If you've noticed, there's been an effort to scale down the number of useless categories lately. Your articles are already plentifully categorized without you making a special vanity category for them. Truth be told, I was close to deleting Mayonkaizer as NNP, and wasn't too impressed with your linkbacks to your deleted articles under your user page (e.g. User:Jack_Cain/Hattori_Hanzo), but i left that alone. Your persecution complex makes me wonder why I bothered. --T. (talk) 01:13, 5 February 2006 (UTC)
- Yes, the site has been cluttered with limited-use categories. Only a few article series (serieses?) warrant their own category (see category:Bloodbath or category:Wheeling Jesuit University for examples). Also, please note that I'm not going to try to circumvent other admins without a very good reason, and there isn't one here. --—rc (t) 01:35, 5 February 2006 (UTC)
Dude!
Totally! - Nonymous 01:17, 5 February 2006 (UTC)
- Many combolations on your recent alection to Unicycle of the year elizagerth...but sewiously...I have writed stuff... see...Lincoln-Douglas Debates and No Talent Texas Hold 'Em--Sir Slackerboya CUN VFH (talk) 15:26, 8 February 2006 (UTC)
- I take it back you dick, thanks for removing Crapmobile from VFH after I just saw it and voted for it yesterday...give a brother a chance--Sir Slackerboya CUN VFH (talk) 15:41, 8 February 2006 (UTC)
Mohammed Picture!
Okay, fine. I swear on a stack of Qu'rans that the picture is original and if it ain't may Allah struck me dead within 24 for hours.--Mrasdfghjkl 03:54, 10 February 2006 (UTC)
- You're going to have to prove it to me. You realize that, right? I mean, I'm rather skeptical by now. --—rc (t) 03:59, 10 February 2006 (UTC)
- By still being here in 24 hours is the most I can do. Sorry.--Mrasdfghjkl 04:16, 10 February 2006 (UTC)
- Unfortunately for you, I am not Islamic! Though if you are dead a day from now I may well convert out of fear. --—rc (t) 04:19, 10 February 2006 (UTC)
- But if I don't die, and you go on to later prove the picture was unoriginal, you'll get to go down in history as the guy who debunked Islam as monolithic rubbbish.--Mrasdfghjkl 04:25, 10 February 2006 (UTC)
- But then I'd probably get fatwa'd and die. --—rc (t) 05:19, 10 February 2006 (UTC)
- Everybody dies.--Mrasdfghjkl 12:14, 10 February 2006 (UTC)
- Update: I am still alive.--Mrasdfghjkl 12:14, 10 February 2006 (UTC)
- Update: I'm not dead.--Mrasdfghjkl 01:43, 11 February 2006 (UTC)
- You still have two hours. --—rc (t) 01:44, 11 February 2006 (UTC)
- Update: Ooh! Ooh! Breath... short. Left arm... numb! Can't go on... describing symptoms much longer!--Mrasdfghjkl 03:04, 11 February 2006 (UTC)
- Update: I made it!--Mrasdfghjkl 04:38, 11 February 2006 (UTC)
- Congratulations. However, in that case, I must revert to the previous statement that I'm not Islamic. I'm afraid your 24 hours of fear were unproductive. --—rc (t) 04:62, 11 February 2006 (UTC)
- You're afraid?! --Mrasdfghjkl 06:39, 11 February 2006 (UTC)
- An ill-considered choice of words on my part. --—rc (t) 06:41, 11 February 2006 (UTC)
- Yeah, actually Allah has has very little to do with Islam, he is more of an excuse. And other religions have Allah too, they just call him... God, Ychch, Homer, etc.--Mrasdfghjkl 06:51, 11 February 2006 (UTC)
- And Grover. --—rc (t) 07:01, 11 February 2006 (UTC)
- Grover is more of an incon for an Anti-Christ.--Mrasdfghjkl 07:12, 11 February 2006 (UTC)
A friendly nooblet who called me a "potato face"
HOWDY :)
I find the site amusing and i am a stranger dont add me to msn stranger danger also why can we add things and alter them on the site ?
- Because it's a wiki. Since you're new here I recommend the Beginner's Guide and How to Be Funny. --—rc (t) 01:37, 11 February 2006 (UTC)
To be sure to be sure to be sure would y alike some potato,
So potato face what country are you from i am from australia the land of well umm vewgimite and that isnt even made here anymore
- I am from a mystical land far, far away from your prison state. We don't have vewgimite here. --—rc (t) 01:43, 11 February 2006 (UTC)
Well you see if there was a flea that landed on a pea the flea would fall in love with the pea and all would be right in the world a friend wants to know why his explanation of bogans didnt come up it said no body likes no body likes nobody likes a bogan ?
- Flea-pea relationships don't work out. Just think of the biological complications. --—rc (t) 04:55, 11 February 2006 (UTC)
It dosent matter about the complications if they love each other thats all that matters and i have registered
- I see you've chosen a username to reflect your belief in trans-kingdom romance. It's sweet but I don't think it's realistic. I am sorry. --—rc (t) 05:05, 11 February 2006 (UTC)
They said that a black man and a white woman would never work and visa versa but they came threw the flea and pea are being wed as we speak
- All I'm saying is that I think they need to think it through. Passion runs quick and hot, but a steady flame lasts longer. --—rc (t) 05:49, 11 February 2006 (UTC)
I care about You
{{Icare}} --Da, Y?YY?YYY?:-:CUN3 NotM BLK |_LG8+::: 06:23, 11 February 2006 (UTC)
testy westy
This is a requested tested. --Splaka 08:04, 11 February 2006 (UTC)
- Test successful. Will report new findings at 0900 hours. --—rc (t) 07:06, 12 February 2006 (UTC)
- Nobody cares --Splaka 07:12, 12 February 2006 (UTC)
OMG
<Emmanuel_Chanel> Hello!
<keitei> Hello!
<keitei> how cruel of you to speak when rc isn't here...
I weep for you. --KATIE!! 12:44, 11 February 2006 (UTC)
- I'm sure he knew what he was doing...to my heart. Actually, he did say something once when I was in the room - I said something to him and he was like "yeah right." And that was it. Emmanuel Chanel is an enigma. --—rc (t) 19:43, 11 February 2006 (UTC)
A Bother
- (cc: from User_talk:Angela#A_Bother at user's request))
- See also: User_talk:Rangeley#Iraq_War, User_talk:TheGrza
- I guess the thing is that I more or less, actually entirely, wrote the Iraq War article, and intended it to be a specific sort of article. There are a lot of articles from a left wing point of view, critiquing the right (George W. Bush for instance) and a relative lack of ones critiquing the left. So I pondered the possibilities, and surprise-surprise, there wasnt an article on the Iraq War. So I jumped on the opportunity and wrote it out one day in a form close to its current form. There is no need for an article to be fair and balanced, and it is 'legal' for one to be entirely from one point of view. Your different perspective is appreciated at Uncyclopedia, ofcourse. It just doesnt jive with the article at hand. Articles are good when they are coherent, and not choppilly slapped together. I try to keep my articles coherent, in one way or another, and give them a flow so that from start to finish it feels like one article, and not a cut and paste job of several trains of thought. As such, your edits adding a different perspective dont really add to the continuity or humor of this specific article. :35, 12 February 2006 (UTC)
- Though this is a wiki site and collaboration is encouraged, there are some cases in which an article has a very specific tone/POV, and breaking that tone can weaken the overall article (even if what is added is generally funny). Obviously this is different from Wikipedia, where the important thing is to not have a POV at all. So it's not necessarily that your edits were bad, it's just that, as Rangeley said, they sort of subvert the purpose of the page. Also, you're completely right that Uncyc has a ton of garbage. Unfortunately that pretty much comes with the territory (but I actually think things have been getting better the past couple months). --—rc (t) 17:43, 12 February 2006 (UTC)
Mohammed revisited
Mohammad's younger lego figure.
Anyway, other than the face (which I can't find), here's a reproduction.
It's a combo of Maharaja Lallu (brown body, could be anybodies), Scorpion Palace Guard (head), and Count Dooku (cape and chest). If you care you can go here and find the face and I will reproduce that same figure. There's a lot of figures to sort through. I have probably spent a few hours of research already over this stupid event. You busted him in a lie but you couldn't prove it. OH well. -- – Mahroww a.k.a. Emir Henry A. Tootsie 23:02, 13 February 2006 (UTC)
- NM. Tompkins already took care of that. -- – Mahroww a.k.a. Emir Henry A. Tootsie 23:53, 13 February 2006 (UTC)
- Heh. I figured, hey, if it was original, he'd have said that from the beginning. Very suspicious. --—rc (t) 04:35, 14 February 2006 (UTC)
- You will no doubt be rewarded for your work to keep the VFP honest! In the mean time...
- Hooray--Mrasdfghjkl 01:10, 15 February 2006 (UTC) | http://uncyclopedia.wikia.com/wiki/User_talk:Rcmurphy/archive3 | CC-MAIN-2016-30 | refinedweb | 2,228 | 73.27 |
The ungetwc() function is defined in the <cwchar> header file.
wint_t ungetwc( wint_t ch, FILE* stream );
The ungetwc() function pushes the wide character ch back to the buffer associated with the file stream unless ch is equal to WEOF. If ch is equal to WEOF, the operation fails and there is no change in the stream.
A call to ungetwc() may fail if it is called more than once without an intervening read or repositioning operation.
If a call to ungetwc() is successful, the end of file status flag feof is cleared.
For both text and binary streams, a successful call to ungetwc() modifies the stream position indicator in an unspecified manner. However, it is guaranteed that after all pushed-back characters are retrieved with a read operation, the stream position indicator is equal to its value before calling ungetwc().
#include <cwchar>
#include <clocale>
#include <cwctype>
#include <iostream>
#include <cstdio>
using namespace std;

int main() {
    setlocale(LC_ALL, "en_US.UTF-8");
    wint_t c;
    long value = 0;
    wchar_t str[] = L"\u0037\u0031\u0039\u00b6\u03ee";

    // "w+" creates the file if it does not already exist ("r+" would fail then)
    FILE *fp = fopen("file.txt", "w+");
    if (fp == NULL)
        return 1;

    fputws(str, fp);
    rewind(fp);

    while (1) {
        c = fgetwc(fp);
        if (iswdigit(c))
            value = value * 10 + c - L'0';
        else
            break;
    }
    ungetwc(c, fp);  // push the first non-digit character back

    cout << "Value = " << value << endl;
    fclose(fp);
    return 0;
}
When you run the program, a possible output will be:
Value = 719 | https://cdn.programiz.com/cpp-programming/library-function/cwchar/ungetwc | CC-MAIN-2019-47 | refinedweb | 223 | 59.43 |
rc_find_pids man page
rc_find_pids — finds the pids of processes that match the given criteria
Library
Run Command library (librc, -lrc)
Synopsis
#include <rc.h>
RC_PIDLIST *
rc_find_pids(const char *const *argv, const char *cmd, uid_t uid, pid_t pid);
Description
rc_find_pids() returns RC_PIDLIST, a structure based on the LIST macro from queue(3) which contains all the pids found matching the given criteria. If pid is given then only that pid is returned if it is running. Otherise we check all instances of argv with a process name of cmd owned by uid, all of which are optional.
The returned list should be freed when done.
Implementation Notes
On BSD systems we use the Kernel Data Access Library (libkvm, -lkvm) and on Linux systems we use the /proc filesystem to find our processes.
Each RC_PID should be freed in the list as well as the list itself when done.
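Example

A sketch (not from the manual) of how a caller walks and frees the returned list. The real RC_PID/RC_PIDLIST definitions live in <rc.h>; the structs below are only stand-ins with the same queue(3) LIST shape described above:

```c
#include <sys/queue.h>
#include <sys/types.h>
#include <stdlib.h>

/* Stand-in for librc's RC_PID/RC_PIDLIST: an entry holding a pid,
 * linked through a queue(3) LIST.  The real definitions are in <rc.h>. */
struct demo_pid {
	pid_t pid;
	LIST_ENTRY(demo_pid) entries;
};
LIST_HEAD(demo_pidlist, demo_pid);

/* Build a small three-entry list the way rc_find_pids() hands one back. */
struct demo_pidlist *make_demo_list(void)
{
	struct demo_pidlist *list = malloc(sizeof(*list));
	LIST_INIT(list);
	for (pid_t pid = 100; pid < 103; pid++) {
		struct demo_pid *p = malloc(sizeof(*p));
		p->pid = pid;
		LIST_INSERT_HEAD(list, p, entries);
	}
	return list;
}

/* Consume the list: free each entry as well as the list itself,
 * returning how many entries were seen. */
int count_and_free(struct demo_pidlist *list)
{
	int n = 0;
	struct demo_pid *p;

	while ((p = LIST_FIRST(list)) != NULL) {
		LIST_REMOVE(p, entries);
		free(p);
		n++;
	}
	free(list);
	return n;
}
```

Error checking on malloc() is omitted for brevity.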
See Also
free(3), queue(3)
Authors
Roy Marples <roy@marples.name>
Referenced By
start-stop-daemon.openrc(8). | https://www.mankier.com/3/rc_find_pids | CC-MAIN-2019-13 | refinedweb | 165 | 71.04 |
Closures in Ruby

A closure is a function that:

- Can be passed around like a value — assigned to a variable, passed as a method argument, etc.
- Remembers the values of all the variables that were in scope when the function was defined and is able to access these variables even if it is executed in a different scope.
Put differently, a closure is a first-class function that has lexical scope.
Code Blocks
A block is, in effect, an anonymous function that also offers closure-like functionality. Consider the following:
outer = 1

def m
  inner = 99
  puts "inner var = #{inner}"
end
Unlike some other languages, Ruby doesn’t nest scopes, so the inner and outer variables are totally shielded from each other. What if we want to access the inner variable from the outer (a.k.a main) scope? One way of doing it is by using a block:
outer = 1

def m
  inner = 99
  yield inner
  puts "inner var = #{inner}"
end

m {|inner| outer += inner}
puts "outer var = #{outer}"

Output:

#=> inner var = 99
#=> outer var = 100
From within the method, we’re yielding the inner variable to a block that’s appended to our method call. We then use the inner variable value to do some arithmetic in our main scope. Although the code block is, effectively, an anonymous function, it can still access the variables in its surrounding scope, such as ‘outer’, while accessing variables within the method that yields it. In other words, using a block like that allows us to cross a scope gate.
“Why Can’t I Use the Method’s Return Value Instead?”
In this simple example, you could. However, if you want to return more than one variable, you fail, while you can easily yield multiple variables to a block. More importantly, the method won’t have access to the surrounding variables at the point of its definition, so you won’t be able to do the cool things we’ll see in the rest of this article :)
Procs
You may have noticed that our code block example didn’t really fulfill the criteria we defined for a closure. While the block remembers variables in its surrounding scope and can access variables yielded to it from a different scope, we can’t pass it as a method argument or assign it to another object. In other words, blocks used with the yield statement are not true closures. Blocks can be true closures but they have to be treated as Procs. Let’s look at how we can do just that:
outer = 1

def m &a_block
  inner = 99
  a_block.call(inner)
  puts "inner var = #{inner}"
  puts "argument is a #{a_block.class}"
end

m {|inner| outer += inner}
puts "outer var = #{outer}"

Output:

#=> inner var = 99
#=> argument is a Proc
#=> outer var = 100
The first difference between this and our previous example is that we’re now defining a parameter for our method: &a_block. This tells the method to expect a block as an argument and also treat it like a Proc object (the & operator implicitly converts the block to a Proc). A Proc is simply a named code block, meaning, an actual object. Since it is an object, the block can be passed around and have methods invoked on it. The method of interest here is #call, which invokes the Proc. Knowing all this, we can rewrite our code as follows:
outer = 1

def m a_proc
  inner = 99
  a_proc.call(inner)
  puts "inner var = #{inner}"
  puts "argument is a #{a_proc.class}"
end

m proc {|inner| outer += inner}
# we can also use Proc.new instead of proc, with the same effect:
# m Proc.new {|inner| outer += inner}
puts "outer var = #{outer}"

Output:

#=> inner var = 99
#=> argument is a Proc
#=> outer var = 100
Here, we’re creating a Proc object on the fly and passing it as a method argument. To fully leverage a closure, we need to be able to assign it to another variable and call it when we need it (deferred execution). Let’s modify our method to achieve just that:
outer = 1

def m a_var
  inner = 99
  puts "inner var = #{inner}"
  proc {inner + a_var}
end

p = m(outer)
puts "p is a #{p.class}"
outer = 0
puts "changed outer to #{outer}"
puts "result of proc call: #{p.call}"

Output:

#=> inner var = 99
#=> p is a Proc
#=> changed outer to 0
#=> result of proc call: 100
Our method now receives the outer variable as an argument and returns the Proc that adds inner and a_var. We then assign the Proc to a variable (p), to be called at our leisure further down in the code. Note that, even when we change the value of outer before the proc call and set it to 0, our result isn’t affected. That’s because the Proc closed over the method parameter a_var, which received a copy of outer’s value at the time m was called, so reassigning outer afterwards doesn’t touch it. We now have a real, bona fide closure!
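A quick aside (my own sketch, not part of the original example): when a proc closes over a variable directly, it captures the variable itself rather than a snapshot of its value, so later reassignments are visible:

```ruby
x = 1
probe = proc { x }   # closes over the variable x itself
x = 2
puts probe.call      #=> 2, not 1
```

It’s only because our example routed outer through a method parameter that the later reassignment went unnoticed.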
Lambdas
Time for a spot-the-difference game. Take a look at the following code and see how it differs from the code in the previous section:
outer = 1

def m a_var
  inner = 99
  puts "inner var = #{inner}"
  lambda {inner + a_var}
end

p = m(outer)
puts "p is a #{p.class}"
outer = 0
puts "changed outer to #{outer}"
puts "result of proc call: #{p.call}"

Output:

#=> inner var = 99
#=> p is a Proc
#=> changed outer to 0
#=> result of proc call: 100
Yes, the only difference is that our method now returns a lambda instead of a proc. The functionality and output of our code remain exactly the same. But…hang on, we assigned a lambda to p, but p tells us it’s a Proc! How’s that possible? The answer is: that lambda is but a Proc in disguise. #lambda is a Kernel method that creates a Proc object which behaves slightly differently to other Proc objects.
“How Can I Tell If an Object is a proc or a lambda?”
Just ask it by calling its #lambda? method.

obj = lambda {"hello"}
puts obj.lambda? #=> true

Also, calling #inspect on a lambda will give you its class as a Proc (lambda).
Procs vs Lambdas
We already saw that a lambda is just a Proc with some different behavior. The differences between a Proc (proc) and a lambda encompass:
- Returning
- Argument checking
Returning from a Proc
def method_a
  lambda { return "return from lambda" }.call
  return "method a returns"
end

def method_b
  proc { return "return from proc" }.call
  return "method b returns"
end

puts method_a
puts method_b

Output:

#=> method a returns
#=> return from proc
While our lambda-using method (method_a) behaves as expected, our proc-using method (method_b) never gets to return, returning the Proc’s return value instead. There’s a simple explanation for this:
- A block created with lambda returns back to its parent scope just like methods do, so no surprises there.
- A block created with proc (or Proc.new) thinks it’s part of its calling method, so it returns back to its calling method’s parent scope. Which can be a bit shocking when you realize half of your method’s code wasn’t executed because the Proc you put half-way through it had a return statement. So, can we return from a Proc the “normal” way? Yes, if we use the next keyword instead of return. We can rewrite method_b so that it’s functionally the same as method_a:
def method_b
  proc { next "return from proc" }.call
  return "method b returns"
end

puts method_b

Output:

#=> method b returns
Argument Checking
Let’s create and call a lambda with two parameters:
l = lambda {|x, y| "#{x}#{y}"}
puts l.call("foo","bar")

Output:

#=> foobar
What happens if we omit an argument?
puts l.call("foo")
Output:
#=> wrong number of arguments (1 for 2) (ArgumentError)
That’s right, a lambda is strict about its arguments (arity), just like a method. What about procs?
p = proc {|x, y| "#{x}#{y}"}
puts p.call("foo","bar")
puts p.call("foo")

Output:

#=> foobar
#=> foo
We can see that the proc is much more chill about its arguments. If an argument’s missing, the proc will simply assume it’s nil and get on with life.
Syntactic Sugar
One of the many things I love about Ruby is that it allows us to do the same thing in many different ways. Think that the #call method is too Fortran-like for your tastes? No problem, call your proc with a dot or double-colon. Don’t like that either? There’s always the square bracket notation.
p = proc {|x, y| "#{x}#{y}"}
puts p.call("foo","bar")
puts p::("foo","bar")
puts p.("foo","bar")
puts p["foo","bar"]

Output:

#=> foobar
#=> foobar
#=> foobar
#=> foobar
Also, remember the unary ampersand operator (&) we used earlier to convert a block to a proc? Well, it works the other way too. [1]
p = proc {|i| i * 2}
l = lambda {|i| i * 3}
puts [1,2,3].map(&p)
puts [1,2,3].map(&l)

Output:

#=> 2
#=> 4
#=> 6
#=> 3
#=> 6
#=> 9
Here, the #map method expects a block, but we can easily pass it a proc or a lambda instead. This means we can utilize the power of closures to transpose variables from other scopes into the goodness of Ruby’s block-expecting methods, which is pretty powerful stuff.
So What’s the Big Deal?
Ruby offers unrivaled versatility in implementing closures. Blocks, procs, and lambdas can be used interchangeably to great effect. Much of the “magic” created by many of our favorite gems is facilitated by closures. Closures allow us to abstract our code in ways that make it smaller, tighter, re-usable and elegant. [2] Ruby empowers developers with these wonderful and flexible constructs so that we can utilize closure power to the max.
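To make that concrete, here’s one last sketch (my own, in the spirit of the examples above): a method that manufactures closures, each carrying its own private state:

```ruby
def make_counter
  count = 0
  lambda { count += 1 }   # each lambda closes over its own count
end

a = make_counter
b = make_counter
puts a.call  #=> 1
puts a.call  #=> 2
puts b.call  #=> 1  (b closed over its own copy of count)
```

No class, no instance variables — just a variable kept alive by the closure that wraps it.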
[1]: Actually, the unary ampersand operator is nuanced in its own way
[2]: I can feel a new article coming up ;) | https://www.sitepoint.com/closures-ruby/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=comments&utm_term=ruby/ | CC-MAIN-2020-10 | refinedweb | 1,626 | 71.14 |
Hi!
I want to debug my library which my program uses in the offload section.
1. I tried to use library with debugging information. I use the following command line to build the library:
:: set environment
call D:\Intel\compilers_and_libraries\windows\bin\compilervars.bat intel64

:: compile
icl /Qmic -c -g -Wall -Werror -fpic mylib/*.cpp -lm
icl /Qmic -g -shared -o libmylib.so *.o
The size of the library tells me that the debugging information is included in it. When I try to debug a program (I use VS 2015 and Intel Parallel Studio 2017), the debugger does not step into the library.
2. I tried to use the library source files for debugging, but I can't compile a program.
Simple example:
main.cpp
#pragma offload_attribute(push, target(mic))
#include "nativelib.h"
#include <iostream>
#pragma offload_attribute(pop)

int main(int argc, char* argv[])
{
    #pragma offload target(mic)
    {
        std::cout << someFunc(3, 5);
    }
    return 0;
}
nativelib.h
#ifndef nativelib_h__
#define nativelib_h__

int someFunc(int, int);

#endif
nativelib.cpp
#include "nativelib.h"

int someFunc(int a, int b)
{
    return a + 2 * b;
}
And it leads to a link error: "undefined reference to `someFunc(int, int)'". Of course I could put the implementation in header files only, but this is not a very nice option.
Please tell me what I'm doing wrong in the above options, or explain another way of debugging a library used in the offload section.
Best regards, Alexander.
So I found the following project on GitHub and did my project by analogy.
In the case of the example above, I had to move the '#pragma offload_attribute' clauses to 'nativelib.h':
main.cpp
#include "nativelib.h"
#include <iostream>

int main(int argc, char* argv[])
{
    #pragma offload target(mic)
    {
        std::cout << someFunc(3, 5);
    }
    return 0;
}
nativelib.h
#ifndef nativelib_h__
#define nativelib_h__

#pragma offload_attribute(push, target(mic))
int someFunc(int, int);
#pragma offload_attribute(pop)

#endif
nativelib.cpp

#include "nativelib.h"

int someFunc(int a, int b)
{
    return a + 2 * b;
}
Ext Certified Developer
hi team and community,
i am not sure if this topic came up already. in germany, some companies are searching for developers who have skills in developing apps with your framework. but since there is no official certification test or education for this, it is not easy to determine who is a good choice and who is not.
i am not only interested to get such a certification, but i am willing to help to build up something like this in germany.
kind regards,
tobiu
Some folks have asked about this in the past and I don't recall the exact response, but I believe it was not favorable.
How you determine who is a good choice is by asking techinical questions during an interview. Questions that Only an Ext JS developer with the proper experience can answer.
For instance (it took me only a minute or two to generate these):
Q: What are the three phases of the component lifecycle?
Q: What is the lowest component class that can participate as a child in a layout?
Q: What is the lowest component class that can manage other child items?
Q: What does the Data Reader class provide for the Data Store class?
Q: Name five layouts in the framework.
Q: What layout allows for multiple children, each taking 100% of the parent's available body space but only allows one child to be shown at a time
Q: What do Fn.createDelegate, Fn.createInterceptor and Fn.createSquence do?
Q: Explain what Ext.Element's role is in the framework
I think you get the point
If the interviewer does not know the framework, they can create questions from Learning Ext JS () or Ext JS in Action ()
No ballgame is complete without its curveballs, dude.
trick question (I still can't believe I didn't see the answer and tried to calculate it).
I never cottoned to "interview questions" like that. Nor, of course, to "certifications."
My "question" has always been: "show me a sample of your work."
Then: "give me the names of three good references who have been your former clients." Pick up the phone, preferably while they're there, and just see if you can talk to them. A good workman's reputation follows him. So does a bad one's.
If the person has built a web-site, then by going to that site I will be able to see the source code that runs it. (It's actually fairly unlikely that the builder bothered to compress it.) Get this person actively involved in conversation about how they did it, and what they did. You can sniff out a fake pretty readily, if you know the stuff yourself.
Then, just make it clear (in writing) from the start that there will be a 30-day probationary period. Anyone who can "get on their feet" will get on their feet within such time.
In my experience, quite frankly,"certifications" are worthless. They're just a product. When all is said and done, whoever is cranking them out has such an extraordinary incentive to do just that, that it renders their little pieces of paper utterly useless as a discriminating factor. Your mileage, of course, may vary...
Asking for examples of their work is secondary. Any monkey can go out there and copy someone's code. Hell many developers here do that with the Ext JS Examples, where they build entire applications with the ExtJS Copyright and Ext.example namespace written all over it.
The original post was about an Ext JS certification, which does nothing to display one's ability to effectively and efficiently use the product.
Jay Garcia @ModusJesus || Modus Create co-founder
Ext JS in Action author
Sencha Touch in Action author
Get in touch for Ext JS & Sencha Touch Touch Training
I was working with Cgo and wanted an equivalence chart for the C primitive types for reference. Since none exists, this is what I came up with for amd64 Linux.
There are a few caveats to be aware of:
- Some of the aliases have sequence numbers associated with them to deduplicate their type names within the package namespace in which they were generated. This is to say, carefully understand what is expressed here before using it verbatim.
- If you are reading this page, you are presumed to understand struct alignment and padding.
- This list was generated mostly through an automated means, with some by-hand reconcilation in places. Please do not hesitate to report any errors or typos (even though I have done a once-over already)!
The source that powers this listing is found at github.com/matttproud/cgotypes. It is worth running go tool cgo to get the intermediate output contained in the _obj directory:
$ go tool cgo main.go
$ ls _obj
Namely, _obj/_cgo_gotypes.go will be of interest!
| http://blog.matttproud.com/2015/04/appendix-of-cgo-and-go-type-mappings.html | CC-MAIN-2017-13 | refinedweb | 173 | 64.71 |
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Wei-cheng Wang wrote: >I just found my mail client the it to the wrong address. >Here are some detailed explanation in my previous mail, >in cases you've not read yet. > > Sorry for the late reply; I didn't find the time to do a thorough review before now. Thanks again for working on this feature. In general, the patch is looking good; I do have a couple of comments below. See also Pedro's comments on the patch here: I'll follow up on the outstanding questions in the other patches shortly. >2. Add testcases for bytecode compilation in ftrace.exp > It is used to testing various emit_OP functions. Adding additional tests is good, but should be done as a separate patch (can be done before the main ppc support patch). >diff --git a/gdb/gdbserver/linux-ppc-low.c b/gdb/gdbserver/linux-ppc-low.c >index 188fac0..0b47543 100644 >--- a/gdb/gdbserver/linux-ppc-low.c >+++ b/gdb/gdbserver/linux-ppc-low.c > > >+/* Put a 32-bit INSN instruction in BUF in target endian. */ >+ >+static int >+put_i32 (unsigned char *buf, uint32_t insn) >+{ >+ if (__BYTE_ORDER == __LITTLE_ENDIAN) >+ { >+ buf[3] = (insn >> 24) & 0xff; >+ buf[2] = (insn >> 16) & 0xff; >+ buf[1] = (insn >> 8) & 0xff; >+ buf[0] = insn & 0xff; >+ } >+ else >+ { >+ buf[0] = (insn >> 24) & 0xff; >+ buf[1] = (insn >> 16) & 0xff; >+ buf[2] = (insn >> 8) & 0xff; >+ buf[3] = insn & 0xff; >+ } >+ >+ return 4; >+} This seems a bit overkill -- this is gdbserver code, which always runs in the same endianness as the inferior. So this could be done via a simple copy. (In order to avoid aliasing violations, the copy should be done via memcpy -- which the compiler will optimize away --, or even better, the type of buf could be changed to uint32_t throughout, since all instructions are 4 bytes.) Returning "number of bytes" from all these routines is likewise a bit odd on PowerPC. (Obviously, it makes sense on Intel, which is where you've probably copied it from.) 
Maybe all the GEN_ routines should just return an uint32_t instruction on PowerPC, which the user could then place into the buffer via e.g. *p++ = GEN_... (if p is a uint32_t *)? >+/* Generate a ds-form instruction in BUF and return the number of bytes written >+ >+ 0 6 11 16 30 32 >+ | OPCD | RST | RA | DS |XO| */ >+ >+__attribute__((unused)) /* Maybe unused due to conditional compilation. */ >+static int >+gen_ds_form (unsigned char *buf, int opcd, int rst, int ra, int ds, int xo) >+{ >+ uint32_t insn = opcd << 26; >+ >+ insn |= (rst << 21) | (ra << 16) | (ds & 0xfffc) | (xo & 0x3); Maybe mask off excess bits of rst and rs here too? Just to make sure you don't get completely random instructions if the macro is used incorrectly? Or just assert the values are in range? (Similarly with the other gen_ routines.) >+#define GEN_LWARX(buf, rt, ra, rb) gen_x_form (buf, 31, rt, ra, rb, 20, 0) Depending on which synchronization primitives are needed, we might want to expose the EH flag. >+/* Generate a md-form instruction in BUF and return the number of bytes written. >+ >+ 0 6 11 16 21 27 30 31 32 >+ | OPCD | RS | RA | sh | mb | XO |sh|Rc| */ >+ >+static int >+gen_md_form (unsigned char *buf, int opcd, int rs, int ra, int sh, int mb, >+ int xo, int rc) >+{ >+ uint32_t insn = opcd << 26; >+ unsigned int n = ((mb & 0x1f) << 1) | ((mb >> 5) & 0x1); >+ unsigned int sh0_4 = sh & 0x1f; >+ unsigned int sh5 = (sh >> 5) & 1; >+ >+ insn |= (rs << 21) | (ra << 16) | (sh0_4 << 11) | (n << 5) | (sh5 << 1) >+ | (xo << 2); "rc" is missing here. (Doesn't matter right now, but should still be fixed.) >+/* Generate a sequence of instructions to load IMM in the register REG. >+ Write the instructions in BUF and return the number of bytes written. 
*/ >+ >+static int >+gen_limm (unsigned char *buf, int reg, uint64_t imm) >+{ >+ unsigned char *p = buf; >+ >+ if ((imm >> 8) == 0) >+ { >+ /* li reg, imm[7:0] */ >+ p += GEN_LI (p, reg, imm); Actually, you can load values up to 32767 with a single LI. >+ } >+ else if ((imm >> 16) == 0) >+ { >+ /* li reg, 0 >+ ori reg, reg, imm[15:0] */ >+ p += GEN_LI (p, reg, 0); >+ p += GEN_ORI (p, reg, reg, imm); >+ } >+ else if ((imm >> 32) == 0) >+ { >+ /* lis reg, imm[31:16] >+ ori reg, reg, imm[15:0] >+ rldicl reg, reg, 0, 32 */ >+ p += GEN_LIS (p, reg, (imm >> 16) & 0xffff); >+ p += GEN_ORI (p, reg, reg, imm & 0xffff); >+ p += GEN_RLDICL (p, reg, reg, 0, 32); You really need the rldicl only if the top bit was set; otherwise, lis already zeros out the high bits. >+ } >+ else >+ { >+ /* lis reg, <imm[63:48]> >+ ori reg, reg, <imm[48:32]> >+ rldicr reg, reg, 32, 31 >+ oris reg, reg, <imm[31:16]> >+ ori reg, reg, <imm[15:0]> */ >+ p += GEN_LIS (p, reg, ((imm >> 48) & 0xffff)); >+ p += GEN_ORI (p, reg, reg, ((imm >> 32) & 0xffff)); >+ p += GEN_RLDICR (p, reg, reg, 32, 31); >+ p += GEN_ORIS (p, reg, reg, ((imm >> 16) & 0xffff)); >+ p += GEN_ORI (p, reg, reg, (imm & 0xffff)); >+ } >+ >+ return p - buf; >+} >+/* Generate a sequence for atomically exchange at location LOCK. >+ This code sequence clobbers r6, r7, r8, r9. */ >+ >+static int >+gen_atomic_xchg (unsigned char *buf, CORE_ADDR lock, int old_value, int new_value) >+{ >+ const int r_lock = 6; >+ const int r_old = 7; >+ const int r_new = 8; >+ const int r_tmp = 9; >+ unsigned char *p = buf; >+ >+ /* >+ 1: lwsync >+ 2: lwarx TMP, 0, LOCK >+ cmpwi TMP, OLD >+ bne 1b >+ stwcx. 
NEW, 0, LOCK >+ bne 2b */ >+ >+ p += gen_limm (p, r_lock, lock); >+ p += gen_limm (p, r_new, new_value); >+ p += gen_limm (p, r_old, old_value); >+ >+ p += put_i32 (p, 0x7c2004ac); /* lwsync */ >+ p += GEN_LWARX (p, r_tmp, 0, r_lock); >+ p += GEN_CMPW (p, r_tmp, r_old); >+ p += GEN_BNE (p, -12); >+ p += GEN_STWCX (p, r_new, 0, r_lock); >+ p += GEN_BNE (p, -16); >+ >+ return p - buf; >+} A generic compare-and-swap will be correct, but probably not the most efficient way to implement a spinlock on PowerPC. We might want to look into implementing release/acquire semantics along the lines of the sample code in B.2.1.1 / B 2.2.1 of the PowerISA. (I guess this doesn't need to be done in the initial version of the patch.) >+/* Implement install_fast_tracepoint_jump_pad of target_ops. >+ See target.h for details. */ >+ >+static int >+ppc_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint, CORE_ADDR tpaddr, >+ CORE_ADDR collector, >+ CORE_ADDR lockaddr, >+ ULONGEST orig_size, >+ CORE_ADDR *jump_entry, >+ CORE_ADDR *trampoline, >+ ULONGEST *trampoline_size, >+ unsigned char *jjump_pad_insn, >+ ULONGEST *jjump_pad_insn_size, >+ CORE_ADDR *adjusted_insn_addr, >+ CORE_ADDR *adjusted_insn_addr_end, >+ char *err) >+{ >+ unsigned char buf[1024]; >+ unsigned char *p = buf; >+ int j, offset; >+ CORE_ADDR buildaddr = *jump_entry; >+ const CORE_ADDR entryaddr = *jump_entry; >+#if __PPC64__ >+ const int rsz = 8; >+#else >+ const int rsz = 4; >+#endif >+ const int frame_size = (((37 * rsz) + 112) + 0xf) & ~0xf; See below for comments of the frame size (112 byte constant) ... >+ >+ /* Stack frame layout for this jump pad, >+ >+ High CTR -8(sp) >+ LR -16(sp) >+ XER >+ CR >+ R31 >+ R29 >+ ... >+ R1 >+ R0 >+ Low PC/<tpaddr> >+ >+ The code flow of this jump pad, >+ >+ 1. Save GPR and SPR >+ 3. Adjust SP >+ 4. Prepare argument >+ 5. Call gdb_collector >+ 6. Restore SP >+ 7. Restore GPR and SPR >+ 8. Build a jump for back to the program >+ 9. Copy/relocate original instruction >+ 10. 
Build a jump for replacing orignal instruction. */ >+ >+ for (j = 0; j < 32; j++) >+ p += GEN_STORE (p, j, 1, (-rsz * 36 + j * rsz)); This writes to below the SP, which is OK or ppc64 since there is a stack "red zone", but may fail on ppc32. You should (save and) update SP before saving the other registers there. >+ /* Save PC<tpaddr> */ >+ p += gen_limm (p, 3, tpaddr); >+ p += GEN_STORE (p, 3, 1, (-rsz * 37)); This is actually a problem even on ELFv1 ppc64 since the red zone size is only 288 bytes. (Only on ELFv2, the red zone size is 512 bytes.) >+ /* Save CR, XER, LR, and CTR. */ >+ p += put_i32 (p, 0x7c600026); /* mfcr r3 */ >+ p += GEN_MFSPR (p, 4, 1); /* mfxer r4 */ >+ p += GEN_MFSPR (p, 5, 8); /* mflr r5 */ >+ p += GEN_MFSPR (p, 6, 9); /* mfctr r6 */ >+ p += GEN_STORE (p, 3, 1, -4 * rsz); /* std r3, -32(r1) */ >+ p += GEN_STORE (p, 4, 1, -3 * rsz); /* std r4, -24(r1) */ >+ p += GEN_STORE (p, 5, 1, -2 * rsz); /* std r5, -16(r1) */ >+ p += GEN_STORE (p, 6, 1, -1 * rsz); /* std r6, -8(r1) */ >+ >+ /* Adjust stack pointer. */ >+ p += GEN_ADDI (p, 1, 1, -frame_size); /* subi r1,r1,FRAME_SIZE */ This violates the ABI because the back chain link is not maintained. At any point, r1 should point to a word that holds the back chain to the next higher frame. >+ /* Set r4 to collected registers. */ >+ p += GEN_ADDI (p, 4, 1, frame_size - rsz * 37); >+ /* Set r3 to TPOINT. */ >+ p += gen_limm (p, 3, tpoint); >+ >+ p += gen_atomic_xchg (p, lockaddr, 0, 1); This seems wrong. Shouldn't *lockaddr be set to the address of a collecting_t object, and not just "1"? >+ /* Restore stack and registers. */ >+ p += GEN_ADDI (p, 1, 1, frame_size); /* addi r1,r1,FRAME_SIZE */ Similar to above, this doesn't work on ppc32. 
>+ p += GEN_LOAD (p, 3, 1, -4 * rsz); /* ld r3, -32(r1) */ >+ p += GEN_LOAD (p, 4, 1, -3 * rsz); /* ld r4, -24(r1) */ >+ p += GEN_LOAD (p, 5, 1, -2 * rsz); /* ld r5, -16(r1) */ >+ p += GEN_LOAD (p, 6, 1, -1 * rsz); /* ld r6, -8(r1) */ >+ p += put_i32 (p, 0x7c6ff120); /* mtcr r3 */ >+ p += GEN_MTSPR (p, 4, 1); /* mtxer r4 */ >+ p += GEN_MTSPR (p, 5, 8); /* mtlr r5 */ >+ p += GEN_MTSPR (p, 6, 9); /* mtctr r6 */ >+ for (j = 0; j < 32; j++) >+ p += GEN_LOAD (p, j, 1, (-rsz * 36 + j * rsz)); >+ /* Now, insert the original instruction to execute in the jump pad. */ >+ *adjusted_insn_addr = buildaddr + (p - buf); >+ *adjusted_insn_addr_end = *adjusted_insn_addr; >+ relocate_instruction (adjusted_insn_addr_end, tpaddr); >+ >+ /* Verify the relocation size. If should be 4 for normal copy, or 8 >+ for some conditional branch. */ >+ if ((*adjusted_insn_addr_end - *adjusted_insn_addr == 0) >+ || (*adjusted_insn_addr_end - *adjusted_insn_addr > 8)) >+ { >+ sprintf (err, "E.Unexpected instruction length = %d" >+ "when relocate instruction.", >+ (int) (*adjusted_insn_addr_end - *adjusted_insn_addr)); >+ return 1; >+ } Hmm. This calls back to GDB to perform the relocation of the original instruction. On PowerPC, there are only a handful of instructions that need to be relocated; I'm not sure it is really necessary to call back to GDB. Can't those just be relocated directly here? This might even make the code simpler overall. >+ buildaddr = *adjusted_insn_addr_end; >+ p = buf; >+ /* Finally, write a jump back to the program. */ >+ offset = (tpaddr + 4) - buildaddr; >+ if (offset >= (1 << 26) || offset < -(1 << 26)) I guess this needs to check for (1 << 25) like below, since we have a signed displacement. 
>+/* >+ >+ Bytecode execution stack frame >+ >+ | Parameter save area (SP + 48) [8 doublewords] >+ | TOC save area (SP + 40) >+ | link editor doubleword (SP + 32) >+ | compiler doubleword (SP + 24) save TOP here during call >+ | LR save area (SP + 16) >+ | CR save area (SP + 8) >+ SP' -> +- Back chain (SP + 0) >+ | Save r31 >+ | Save r30 >+ | Save r4 for *value >+ | Save r3 for CTX >+ r30 -> +- Bytecode execution stack >+ | >+ | 64-byte (8 doublewords) at initial. Expand stack as needed. >+ | >+ r31 -> +- Note that the stack frame layout as above only applies to ELFv1, but you're actually only supporting ELFv2 at the moment. For ELFv2, there is no parameter save area (for this specific call), there is no compiler or linker doubleword, and the TOC save area is at SP + 24. (So this location probably shouldn't be used to save something else ...) >+ initial frame size >+ = (48 + 8 * 8) + (4 * 8) + 64 >+ = 112 + 96 >+ = 208 This is also a bit bigger than required for ELFv2. On the other hand, having a larger buffer doesn't hurt. >+static void >+ppc64_emit_reg (int reg) >+{ >+ unsigned char buf[10 * 4]; >+ unsigned char *p = buf; >+ >+ p += GEN_LD (p, 3, 31, bc_framesz - 32); >+ p += GEN_LD (p, 3, 3, 48); /* offsetof (fast_tracepoint_ctx, regs) */ This seems a bit fragile, it would be better to determine the offset automatically ... (I don't quite understand why the x86 code works either, as it is right now ...) >+static void >+ppc64_emit_stack_flush (void) >+{ >+ /* Make sure bytecode stack is big enough before push. >+ Otherwise, expand 64-byte more. */ >+ >+ EMIT_ASM (" std 3, 0(30) \n" >+ " addi 4, 30, -(112 + 8) \n" >+ " cmpd 7, 4, 1 \n" >+ " bgt 1f \n" >+ " ld 4, 0(1) \n" >+ " addi 1, 1, -64 \n" >+ " std 4, 0(1) \n" For full ABI compliance, the back chain needs to be maintained at every instruction, so you always have to update the stack pointer using stdu. Should be simple enough to do: " ld 4, 0(1) \n" " stdu 4, -64(1) \n" >+/* Discard N elements in the stack. 
*/ >+ >+static void >+ppc64_emit_stack_adjust (int n) >+{ >+ unsigned char buf[4]; >+ unsigned char *p = buf; >+ >+ p += GEN_ADDI (p, 30, 30, n << 3); /* addi r30, r30, (n << 3) */ "n" probably isnt't too big for addi here, but should better be verified, just in case new callers are ever added ... >+static void >+ppc64_emit_int_call_1 (CORE_ADDR fn, int arg1) >+{ >+ unsigned char buf[8 * 4]; >+ unsigned char *p = buf; >+ >+ /* Setup argument. arg1 is a 16-bit value. */ >+ p += GEN_LI (p, 3, arg1); /* li r3, arg1 */ Well ... even so, you still cannot load values > 32767 with LI. Either check, or just call gen_limm, which should always do the right thing. >+static void >+ppc64_emit_void_call_2 (CORE_ADDR fn, int arg1) >+{ >+ unsigned char buf[12 * 4]; >+ unsigned char *p = buf; >+ >+ /* Save TOP */ >+ p += GEN_STD (p, 3, 31, bc_framesz + 24); On ELFv2, that is really the TOC save slot, see above. Why not just save TOP at 0(30)? That should be available ... >+ /* Setup argument. arg1 is a 16-bit value. */ >+ p += GEN_MR (p, 4, 3); /* mr r4, r3 */ >+ p += GEN_LI (p, 3, arg1); /* li r3, arg1 */ See above. 
>+static void >+ppc64_emit_if_goto (int *offset_p, int *size_p) >+{ >+ EMIT_ASM ("mr 4, 3 \n" >+ "ldu 3, 8(30) \n" >+ "cmpdi 7, 4, 0 \n" >+ "1:bne 7, 1b \n"); Why not just: cmpdi 7, 3, 0 ldu 3, 8(30) 1:bne 7, 1b >+static void >+ppc_write_goto_address (CORE_ADDR from, CORE_ADDR to, int size) >+{ >+ int rel = to - from; >+ uint32_t insn; >+ int opcd; >+ unsigned char buf[4]; >+ >+ read_inferior_memory (from, buf, 4); >+ insn = get_i32 (buf); >+ opcd = (insn >> 26) & 0x3f; >+ >+ switch (size) >+ { >+ case 14: >+ if (opcd != 16) >+ emit_error = 1; >+ insn = (insn & ~0xfffc) | (rel & 0xfffc); >+ break; >+ case 24: >+ if (opcd != 18) >+ emit_error = 1; >+ insn = (insn & ~0x3fffffc) | (rel & 0x3fffffc); >+ break; So this really should check for overflow -- I guess usually the code generated here shouldn't be too big, but if it is, we really should detect that and fail cleanly instead of just jumping to random locations ... >diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c >index ef94bba..dc27cfb 100644 >--- a/gdb/rs6000-tdep.c >+++ b/gdb/rs6000-tdep.c >@@ -966,6 +969,21 @@ rs6000_breakpoint_from_pc (struct gdbarch *gdbarch, CORE_ADDR *bp_addr, > return little_breakpoint; > } > >+/* Return true if ADDR is a valid address for tracepoint. Set *ISZIE >+ to the number of bytes the target should copy elsewhere for the >+ tracepoint. */ >+ >+static int >+ppc_fast_tracepoint_valid_at (struct gdbarch *gdbarch, >+ CORE_ADDR addr, int *isize, char **msg) >+{ >+ if (isize) >+ *isize = gdbarch_max_insn_length (gdbarch); >+ if (msg) >+ *msg = NULL; >+ return 1; >+} Should/can we check here where the jump to the jump pad will be in range? Might be better to detect this early ... >+/* Copy the instruction from OLDLOC to *TO, and update *TO to *TO + size >+ of instruction. This function is used to adjust pc-relative instructions >+ when copying. 
*/ >+ >+static void >+ppc_relocate_instruction (struct gdbarch *gdbarch, >+ CORE_ADDR *to, CORE_ADDR oldloc) See above for whether we need this here; maybe all this should be done directly in gdbserver. Nothing in here seems to require support from the full GDB code base. >+ { >+ /* conditional branch && AA = 0 */ >+ >+ rel = PPC_BD (insn); >+ newrel = (oldloc - *to) + rel; >+ >+ if (newrel >= (1 << 25) || newrel < -(1 << 25)) >+ return; >+ >+ newrel -= 4; Why is this correct? If we fit in a conditional branch, the value of newrel computed above should be correct. Only if we do the jump-over, we need to adjust newrel ... >+ if (newrel >= (1 << 15) || newrel < -(1 << 15)) >+ { >+ /* The offset of to big for conditional-branch (16-bit). >+ Try to invert the condition and jump with 26-bit branch. >+ For example, >+ >+ beq .Lgoto >+ INSN1 >+ >+ => >+ >+ bne 1f >+ b .Lgoto >+ 1:INSN1 >+ >+ */ >+ >+ /* Check whether BO is 001at or 011 at. */ >+ if ((PPC_BO (insn) & 0x14) != 0x4) >+ return; Well, we really should handle the other cases too; there's no reason to simply fail if this happens to be a branch on count or such ... >+/* Implement gdbarch_gen_return_address. Generate a bytecode expression >+ to get the value of the saved PC. SCOPE is the address we want to >+ get return address for. SCOPE maybe in the middle of a function. */ >+ >+static void >+ppc_gen_return_address (struct gdbarch *gdbarch, >+ struct agent_expr *ax, struct axs_value *value, >+ CORE_ADDR scope) >+{ >+ struct rs6000_framedata frame; >+ CORE_ADDR func_addr; >+ >+ /* Try to find the start of the function and analyze the prologue. 
*/ >+ if (find_pc_partial_function (scope, NULL, &func_addr, NULL)) >+ { >+ skip_prologue (gdbarch, func_addr, scope, &frame); >+ >+ if (frame.lr_offset == 0) >+ { >+ value->type = register_type (gdbarch, PPC_LR_REGNUM); >+ value->kind = axs_lvalue_register; >+ value->u.reg = PPC_LR_REGNUM; >+ return; >+ } >+ } >+ else >+ { >+ /* If we don't where the function starts, we cannot analyze it. >+ Assuming it's not a leaf function, not frameless, and LR is >+ saved at back-chain + 16. */ >+ >+ frame.frameless = 0; >+ frame.lr_offset = 16; This isn't correct for ppc32 ... >+ } >+ >+ /* if (frameless) >+ load 16(SP) >+ else >+ BC = 0(SP) >+ load 16(BC) */ In any case, this code makes many assumptions that may not always be true. But then again, the same is true for the i386 case, so maybe this is OK for now ... In general, if we have DWARF CFI for the function, it would be much preferable to refer to that in order to determine the exact stack layout. Bye, Ulrich -- Dr. Ulrich Weigand GNU/Linux compilers and toolchain Ulrich.Weigand@de.ibm.com | http://sourceware.org/ml/gdb-patches/2015-03/msg00484.html | CC-MAIN-2019-26 | refinedweb | 2,871 | 69.62 |
Sometimes it’s tempting to re-invent the wheel to make a device function exactly the way you want. I am re-visiting the field of homemade electrophysiology equipment, and although I’ve already published a home made electocardiograph (ECG), I wish to revisit that project and make it much more elegant, while also planning for a pulse oximeter, an electroencephalograph (EEG), and an electrogastrogram (EGG). This project is divided into 3 major components: the low-noise microvoltage amplifier, a digital analog to digital converter with PC connectivity, and software to display and analyze the traces. My first challenge is to create that middle step, a device to read voltage (from 0-5V) and send this data to a computer.
This project demonstrates a simple solution for the frustrating problem of sending data from a microcontroller to a PC with a USB connection. My solution utilizes a USB FTDI serial-to-usb cable, allowing me to simply put header pins on my device which I can plug into providing the microcontroller-computer link. This avoids the need for soldering surface-mount FTDI chips (which gets expensive if you put one in every project). FTDI cables are inexpensive (about $11 shipped on eBay) and I’ve gotten a lot of mileage out of mine and know I will continue to use it for future projects. If you are interested in MCU/PC communication, consider one of these cables as a rapid development prototyping tool. I’m certainly enjoying mine!
It is important to me that my design is minimalistic, inexpensive, and functions natively on Linux and Windows without installing special driver-related software, and can be visualized in real-time using native Python libraries, such that the same code can be executed identically on all operating systems with minimal computer-side configuration. I’d say I succeeded in this effort, and while the project could use some small touches to polish it up, it’s already solid and proven in its usefulness and functionality.
This is my final device. It’s reading voltage on a single pin, sending this data to a computer through a USB connection, and custom software (written entirely in Python, designed to be a cross-platform solution) displays the signal in real time. Although it’s capable of recording and displaying 5 channels at the same time, it’s demonstrated displaying only one. Let’s check-out a video of it in action:
This 5-channel realtime USB analog sensor, coupled with custom cross-platform open-source software, will serve as the foundation for a slew of electrophysiological experiments, but can also be easily expanded to serve as an inexpensive multichannel digital oscilloscope. While more advanced solutions exist, this has the advantage of being minimally complex (consisting of a single microchip), inexpensive, and easy to build.
To the right is my working environment during the development of this project. You can see electronics, my computer, microchips, and coffee, but an intriguingly odd array of immunological posters in the background. I spent a couple weeks camping-out in a molecular biology laboratory here at UF and got a lot of work done, part of which involved diving into electronics again. At the time this photo was taken, I hadn’t worked much at my home workstation. It’s a cool picture, so I’m holding onto it.
Below is a simplified description of the circuit schematic that is employed in this project. Note that there are 6 ADC (analog to digital converter) inputs on the ATMega48 IC, but for whatever reason I ended-up only hard-coding 5 into the software. Eventually I’ll go back and re-declare this project a 6-channel sensor, but since I don’t have six things to measure at the moment I’m fine keeping it the way it is. RST, SCK, MISO, and MOSI are used to program the microcontroller and do not need to be connected to anything for operation. The max232 was initially used as a level converter to allow the micro-controller to communicate with a PC via the serial port. However, shortly after this project was devised an upgrade was used to allow it to connect via USB. Continue reading for details…
Below you can see the circuit breadboarded. The potentiometer (small blue box) simulated an analog input signal.
The lower board is my AVR programmer, and is connected to RST, SCK, MISO, MOSI, and GND to allow me to write code on my laptop and program the board. It’s a Fun4DIY.com AVR programmer which can be yours for $11 shipped! I’m not affiliated with their company, but I love that little board. It’s a clone of the AVR ISP MK-II.
As you can see, the USB AVR programmer I’m using is supported in Linux. I did all of my development in Ubuntu Linux, writing AVR-GCC (C) code in my favorite Linux code editor Geany, then loaded the code onto the chip with AVRDude.
I found a simple way to add USB functionality in a standard, reproducible way that works without requiring the soldering of a SMT FTDI chip, and avoids custom libraries like V-USB which don’t easily have drivers that are supported by major operating systems (Windows) without special software. I understand that the simplest long-term and commercially-logical solution would be to use that SMT chip, but I didn’t feel like dealing with it. Instead, I added header pins which allow me to snap-on a pre-made FTDI USB cable. They’re a bit expensive ($12 on ebay) but all I need is 1 and I can use it in all my projects since it’s a sinch to connect and disconnect. Beside, it supplies power to the target board! It’s supported in Linux and in Windows with established drivers that are shipped with the operating system. It’s a bit of a shortcut, but I like this solution. It also eliminates the need for the max232 chip, since it can sense the voltages outputted by the microcontroller directly.
The system works by individually reading the 10-bit ADC pins on the microcontroller (providing values from 0-1024 to represent voltage from 0-5V or 0-1.1V depending on how the code is written), converting these values to text, and sending them as a string via the serial protocol. The FTDI cable reads these values and transmits them to the PC through a USB connection, which looks like “COM5” on my Windows computer. Values can be seen in any serial terminal program (i.e., hyperterminal), or accessed through Python with the PySerial module.
As you can see, I’m getting quite good at home-brewn PCBs. While it would be fantastic to design a board and have it made professionally, this is expensive and takes some time. In my case, I only have a few hours here or there to work on projects. If I have time to design a board, I want it made immediately! I can make this start to finish in about an hour. I use a classic toner transfer method with ferric chloride, and a dremel drill press to create the holes. I haven’t attacked single-layer SMT designs yet, but I can see its convenience, and look forward to giving it a shot before too long.
Here’s the final board ready for digitally reporting analog voltages. You can see 3 small headers on the far left and 2 at the top of the chip. These are for RST, SCK, MISO, MOSI, and GND for programming the chip. Once it’s programmed, it doesn’t need to be programmed again. Although I wrote the code for an ATMega48, it works fine on a pin-compatible ATMega8 which is pictured here. The connector at the top is that FTDI USB cable, and it supplies power and USB serial connectivity to the board.
If you look closely, you can see that modified code has been loaded on this board with a Linux laptop. This thing is an exciting little board, because it has so many possibilities. It could read voltages of a single channel in extremely high speed and send that data continuously, or it could read from many channels and send it at any rate, or even cooler would be to add some bidirectional serial communication capabilities to allow the computer to tell the microcontroller which channels to read and how often to report the values back. There is a lot of potential for this little design, and I’m glad I have it working.
Unfortunately I lost the schematics to this device because I formatted the computer that had the Eagle files on it. It should be simple and intuitive enough to be able to design again. The code for the microcontroller and code for the real-time visualization software will be posted below shortly. Below are some videos of this board in use in one form or another:
Here is the code that is loaded onto the microcontroller:
#define F_CPU 8000000UL #include <avr/io.h> #include <util/delay.h> void readADC(char adcn){ //ADMUX = 0b0100000+adcn; // AVCC ref on ADCn ADMUX = 0b1100000+adcn; // AVCC ref on ADCn ADCSRA |= (1<<ADSC); // reset value while (ADCSRA & (1<<ADSC)) {}; // wait for measurement } int main (void){ DDRD=255; init_usart(); ADCSRA = 0b10000111; //ADC Enable, Manual Trigger, Prescaler ADCSRB = 0; int adcs[8]={0,0,0,0,0,0,0,0}; char i=0; for(;;){ for (i=0;i<8;i++){readADC(i);adcs[i]=ADC>>6;} for (i=0;i<5;i++){sendNum(adcs[i]);send(44);} readADC(0); send(10);// LINE BREAK send(13); //return _delay_ms(3);_delay_ms(5); } } void sendNum(unsigned int num){ char theIntAsString[7]; int i; sprintf(theIntAsString, "%u", num); for (i=0; i < strlen(theIntAsString); i++){ send(theIntAsString[i]); } } void send (unsigned char c){ while((UCSR0A & (1<<UDRE0)) == 0) {} UDR0 = c; } void init_usart () { // ATMEGA48 SETTINGS int BAUD_PRESCALE = 12; UBRR0L = BAUD_PRESCALE; // Load lower 8-bits UBRR0H = (BAUD_PRESCALE >> 8); // Load upper 8-bits UCSR0A = 0; UCSR0B = (1<<RXEN0)|(1<<TXEN0); //rx and tx UCSR0C = (1<<UCSZ01) | (1<<UCSZ00); //We want 8 data bits }
Here is the code that runs on the computer, allowing reading and real-time graphing of the serial data. It’s written in Python and has been tested in both Linux and Windows. It requires *NO* non-standard python libraries, making it very easy to distribute. Graphs are drawn (somewhat inefficiently) using lines in TK. Subsequent development went into improving the visualization, and drastic improvements have been made since this code was written, and updated code will be shared shortly. This is functional, so it’s worth sharing.
import Tkinter, random, time import socket, sys, serial class App: def white(self): self.lines=[] self.lastpos=0 self.c.create_rectangle(0, 0, 800, 512, fill="black") for y in range(0,512,50): self.c.create_line(0, y, 800, y, fill="#333333",dash=(4, 4)) self.c.create_text(5, y-10, fill="#999999", text=str(y*2), anchor="w") for x in range(100,800,100): self.c.create_line(x, 0, x, 512, fill="#333333",dash=(4, 4)) self.c.create_text(x+3, 500-10, fill="#999999", text=str(x/100)+"s", anchor="w") self.lineRedraw=self.c.create_line(0, 800, 0, 0, fill="red") self.lines1text=self.c.create_text(800-3, 10, fill="#00FF00", text=str("TEST"), anchor="e") for x in range(800): self.lines.append(self.c.create_line(x, 0, x, 0, fill="#00FF00")) def addPoint(self,val): self.data[self.xpos]=val self.line1avg+=val if self.xpos%10==0: self.c.itemconfig(self.lines1text,text=str(self.line1avg/10.0)) self.line1avg=0 if self.xpos>0:self.c.coords(self.lines[self.xpos],(self.xpos-1,self.lastpos,self.xpos,val)) if self.xpos<800:self.c.coords(self.lineRedraw,(self.xpos+1,0,self.xpos+1,800)) self.lastpos=val self.xpos+=1 if self.xpos==800: self.xpos=0 self.totalPoints+=800 print "FPS:",self.totalPoints/(time.time()-self.timeStart) t.update() def __init__(self, t): self.xpos=0 self.line1avg=0 self.data=[0]*800 self.c = Tkinter.Canvas(t, width=800, height=512) self.c.pack() self.totalPoints=0 self.white() self.timeStart=time.time() t = Tkinter.Tk() a = App(t) #ser = serial.Serial('COM1', 19200, timeout=1) ser = serial.Serial('/dev/ttyUSB0', 38400, timeout=1) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) while True: while True: #try to get a reading #print "LISTENING" raw=str(ser.readline()) #print raw raw=raw.replace("n","").replace("r","") raw=raw.split(",") #print raw try: point=(int(raw[0])-200)*2 break except: print "FAIL" pass point=point/2 a.addPoint(point)
If you re-create this device of a portion of it, let me know! I’d love to share it on my website. Good luck!
21 thoughts on “Multichannel USB Analog Sensor with ATMega48”
This looks like a great platform for sensor experimentation. And the Python realtime display program is beautifully done. Nice work and thanks for posting.
I was wondering if you were planning on publishing the schematics for the EKG device you demonstrated in the DIY ecg 2 – prototype 1 video that was based on the AD620? I am working on a very similar prototype that would use the AD620 as well and am just looking for some guidance. Thanks in advance.
-Ed
Hi, Scott!
Great project, I’d love to recreate this. Do you think you could post the full schematics sometime? Also, I noticed that the microcontroller code is incomplete, could you post the full code? 🙂
Thanks!
–Rob
This looks awesome! Cant wait for you to share the updated code 🙂
Ill try playing with this for now 🙂
Hi!
Seems so cool! but… there’s a problem in getting the whole code… it shows up uncomplete…
Thanks for noticing that! I added the rest of the code. I’m not sure how it got clipped off 🙂
Fascinating stuff!
What sample rate is this setup able to achieve?
Nice. But why not just use a PIC18F2550 (or 2455 or 4550)? It has way more code space, more SRAM, and it does USB so you don’t need the FTDI cable or the MAX232. Doesn’t get any more minimalist than that.
Keep on with the projects.
Cannot get the AVR code to compile. Here is the error I’m getting from the make command in OSX 10.7. Any help?
Thanks in advance
avr-gcc -Wall -Os -DF_CPU=8000000 -mmcu=atmega8 -c main.c -o main.o
main.c: In function ‘main’:
main.c:13:9: warning: implicit declaration of function ‘init_usart’
main.c:15:5: error: ‘ADCSRB’ undeclared (first use in this function)
main.c:15:5: note: each undeclared identifier is reported only once for each function it appears in
main.c:21:17: warning: array subscript has type ‘char’
main.c:22:17: warning: implicit declaration of function ‘sendNum’
main.c:22:17: warning: array subscript has type ‘char’
main.c:22:17: warning: implicit declaration of function ‘send’
main.c: At top level:
main.c:30:6: warning: conflicting types for ‘sendNum’
main.c:22:35: note: previous implicit declaration of ‘sendNum’ was here
main.c: In function ‘sendNum’:
main.c:33:9: warning: implicit declaration of function ‘sprintf’
main.c:33:9: warning: incompatible implicit declaration of built-in function ‘sprintf’
main.c:34:9: warning: implicit declaration of function ‘strlen’
main.c:34:23: warning: incompatible implicit declaration of built-in function ‘strlen’
main.c: At top level:
main.c:40:6: warning: conflicting types for ‘send’
main.c:22:52: note: previous implicit declaration of ‘send’ was here
main.c: In function ‘send’:
main.c:41:16: error: ‘UCSR0A’ undeclared (first use in this function)
main.c:41:29: error: ‘UDRE0’ undeclared (first use in this function)
main.c:42:9: error: ‘UDR0’ undeclared (first use in this function)
main.c: At top level:
main.c:45:6: warning: conflicting types for ‘init_usart’
main.c:13:9: note: previous implicit declaration of ‘init_usart’ was here
main.c: In function ‘init_usart’:
main.c:48:9: error: ‘UBRR0L’ undeclared (first use in this function)
main.c:49:9: error: ‘UBRR0H’ undeclared (first use in this function)
main.c:50:9: error: ‘UCSR0A’ undeclared (first use in this function)
main.c:51:9: error: ‘UCSR0B’ undeclared (first use in this function)
main.c:51:22: error: ‘RXEN0’ undeclared (first use in this function)
main.c:51:33: error: ‘TXEN0’ undeclared (first use in this function)
main.c:52:9: error: ‘UCSR0C’ undeclared (first use in this function)
main.c:52:22: error: ‘UCSZ01’ undeclared (first use in this function)
main.c:52:36: error: ‘UCSZ00’ undeclared (first use in this function)
make: *** [main.o] Error 1
Sounds like a library problem. Can you blink a led with a 3 line LED_blink.c program? I think its a compiler configuration problem rather than a code problem.
Yes I have gotten the blink program from this tutorial running
I am still attempting to base the ECG circuit off of the Arduino board but am having some troubles so I thought Id give the true AVR a try. Also I am using the AD627 which I am assuming is similar enough to the 620 to yield the same results.
Hi Scott,
I found this post very helpful. I recreated the device, but with an ATmega32 on a small AVR development board that I have. I used the serial port connected via a MAX-232 level converter. This I connected to the USB port using a serial-USB adapter. I modified the code on the micro-controller according to the ATmega32. In the Python code I just uncommented the “print ser” and “print raw”, which runs on the PC.
Now the problem — When I run the device, the python code shows the “FAIL” message. The ‘print’ lines revealed that the line being read from the serial port is blank.
So I guess this means that data are not reaching the serial port.
I have checked that the serial-USB adapter is working and also checked that my TXD pin is transmitting data. Therefore, I think the problem is because of a baud-rate mismatch between the device and the PC.
I noticed that in your code you have declared F_CPU as 8 MHz, but on your device there is no external crystal, nor is there any code to set the internal RC oscillator (which must be 1 MHz) to 8 MHz. So my question is, what clock source is your device using and how did you get an accurate baud rate ?
Thanks in advance.
Nishchay
The clock source actually _is_ the internal oscillator. The internal oscillator runs at 8MHz. Often, by default, the DIV/8 fuse is enabled, causing the clock to be divided by 8 (1MHz). I imagine if you disable the DIV/8 fuse, your device will work. You can then use an AVR baud rate calculator reference page like to help you with the rest. Good luck!
I am trying to send out (x,y) coordinates of object in image to microcontroller via USB to Serial cable in Kubuntu version.I am using OpenCV C library for programming purpose.How to interface cable to laptop and make changes in my existing C program.
Great project..
I am working on designing ecg system using arm processor. i need to write assembly c language to read and display ecg in the pc. can u suggest me how to do this?
Thank You very much , that helped me .
I followed your code and successfully implemented on ATMEGA16/32 . I didn’t known about sprintf function and by this post i came to know about that useful function and also a similar function “itoa” .
I dont know python programming hence i am not able to compile it successfully ,although i tried to compile by following some tutorials but always stuck at “import serial” , hence i used an alternate named “live-graph” written in java.
The IA-2142-U is a Dual 8bit Analog Output module, with up to 8 Digital Inputs and 3 Digital Outputs. The IA-2142-U structure and software control commands are Series-3000 compatible, including the watchdog protection circuit, and user interface extra Led and Jumper.
USB Analog
The post is very interesting. I think the hardware possibilities have widened now, too. I haven’t had the opportunity to do much beyond demo programs yet with one device I’ve gotten, but I think it might be something you would find useful. This is the Teensy++ 2.0 board from PJRC electronics. It is programmed via USB and offers the capability to use USB in user programs in any of several different modes, including HID. The HID operation is especially interesting since that is supported cross-platform without needing device driver installation.
Teensy++ 2.0 website
It can be programmed with C or in an Arduino mode.
I located this device as something to pair with a Raspberry Pi single-board Linux computer. The Teensy board supplies the few IO and AD capabilities that the Raspberry Pi lacks, and communication can be via USB, simplifying hardware interactions. In terms of your project, I would think this pair of devices would handle the complete set of acquisition, processing, and display tasks, and could be made into a small, portable, battery-driven package. I’ve gotten a UBEC to pair with a 2500MAH lead-acid 12V battery to handle power needs for my work.
Scott, looks like the source code is truncated, the Python programs ends at a.addPoint(point), with the two while clause unfinished…
No, that’s accurate. One of the while clauses has already exited by the time that addPoint() is called. The other one loops forever.
Dear Scott;
Thank you for posting your multichannel usb device. You certainly did your homework, and what a beautiful job.
I am working on somethig quasi-similiar, and need a little advice. My ThunderVolt device will soon have a voltage
output, and I want to display that voltage on a PC. I am looking for a programmer for hire to build a program to
display my data. I wonder if you would like to make some cash doing this for me.
Basically I want to input +5 volts and a ground from a PC or Mac into my device. There the +5 will be dropped across two
resistances, one of which will be a precision 47 K ohm resistor in series with the user’s body resistance, which is usually 50 K to several million ohms. A third wire from the USB will tap off the voltage across the 47 K ohm and send it back to the user’s computer. There, the computer will read this voltage, compute the amperage by using the 47 K ohm resistance, and calculate the power level in watts. The computer will display this first reading of the resistor on the screen, with print capability.
One of my questions is: Does a PC have sufficient onboard capability so that no additional hardware is needed to determine
the voltage from the third usb line? I do not want to have to make an additional hardware attachment onto my device, since
there isn’t room in my case box to do it, and the cost would be prohibitive. I just want to plug my device’s usb female into a usb-male-to-male cable and connect that to a usb port on a user’s computer. Then, the user will install my software, grab the
two handholds and his/her computer will display a number on the screen. The computer will measure the 47 K ohm resistor’s voltage, calculate the amperage across the resistor, and compute the power across the user’s resistance in micro watts.
The computer will use the first initial reading from the 47 K resistor, because that is the most accurate.
I am a poor Maine resident trying to make a go of a cottage industry, but I believe I have found a very interesting method of
quantifying the vitality of a human being in watts. My pre-zap and post-zap tests show a gain in power of about 6%, depending on how healthy or sick the tester is. I previously used a DVM to do the testing, which is really the same thing but it measures in resistance. Since the body is essentially an electrical charge, I wanted a reading in watts, not ohms.
Dr. Albert Szent Gygorgyi, MD Ph.D, and Nobel prize winner wrote a thesis on this subject. It beacame the backstop for my
ThunderVolt device, along with Tesla’s radiant energy, Royal Raymond Rife work, William Reich, and Hulda Clark, who make the first practical zapper. My device is built on these five outstanding people.
I invite your kind reply.
Sincerely yours,
Steve Coffman
antioxc@gmail.com
I know that there are software programs out there that read cpu voltages on a computer, and they don’t need hardware
accessories or other installs to work.
Hi Scott,
This is a great Python code to monitor my arduino uno attach with a geophone~
And it works awesome !!!!
Thanks a lot~~
I am trying to make a wireless geophone, it just getting started
Jimmy
You must log in to post a comment. | https://www.swharden.com/wp/2012-06-14-multichannel-usb-analog-sensor-with-atmega48/ | CC-MAIN-2017-43 | refinedweb | 4,252 | 64.41 |
Vue.js is a popular JavaScript library used for developing prototypes in a short time. This includes user interfaces, front-end applications, static webpages, and native mobile apps. It is known for its easy to use syntax and simple data binding features.
Recently, a new package was released in the Vue.js ecosystem. It is an integration of the popular Bootstrap framework and Vue.js. This package is known as BootstrapVue. It allows us to use custom components that integrate with Bootstrap (v4) on the fly.
What’s more? It supports the custom Bootstrap components, the grid system, and also supports Vue.js directives.
In this post, we’ll walk through the basics of BootstrapVue, explain the general concepts, demonstrate the setup process and build out a mini Vue.js project in the process to give you more practical experience.
Why BootstrapVue?
Given that Bootstrap is (in my opinion) the most popular CSS framework available, developers who have moved, or intend to move, from Vanilla JavaScript or jQuery to Vue.js often find the migration a bit difficult because of Bootstrap’s heavy dependency on jQuery.
With BootstrapVue, any developer can make that switch from Vanilla.js or jQuery to Vue.js without bothering about Bootstrap’s heavy dependency on jQuery or even finding a way around it. That’s how BootstrapVue comes to the rescue. It helps to bridge that gap and allows incoming Vue developers to use Bootstrap in their projects with ease.
Getting started
When using build tools like webpack and Babel, it is preferable to include the package in your project directly. For demonstration purposes, and to give you a hands-on approach to understanding and using BootstrapVue, we’ll set up a Vue.js project with BootstrapVue and build it up into a functional Vue.js application that fetches meals from an external API.
Prerequisites
- Basic knowledge of Vue.js will be helpful to understand this demonstration
- Install the Vue CLI tool globally to follow along with this post. If you don’t currently have it installed, follow this installation guide
Create a Vue project
First, we have to create a Vue.js project that we can use to demonstrate the implementation of the BootstrapVue component. First, open a terminal window on your preferred directory and run the command below:
vue create bootstrapvue-demo
If you don’t have Vue CLI installed globally, please follow this guide to do so and come back to continue with this tutorial afterwards.
The above command will display a preset selection prompt like this:
Select the default preset and press Enter to proceed:
Now, you’re done bootstrapping your Vue application, change into the new Vue project directory and start the development server with the commands below:
cd bootstrapvue-demo
npm run serve
This will serve your Vue application on localhost:8080. Navigate to it on your browser and you will see your Vue application live:
How to add Bootstrap and BootstrapVue to the project
There are two major ways to do this: using package managers like npm and yarn, or using CDN links.
Using npm or yarn
We’ll install all the necessary packages we mentioned earlier for the project using npm or yarn. To do that, navigate to the project’s root directory and run either of the commands below, depending on your preferred package manager:
# With npm
npm install bootstrap-vue bootstrap axios
OR
# With yarn
yarn add bootstrap-vue bootstrap axios
The above command will install the BootstrapVue and Bootstrap packages. The BootstrapVue package contains all of the BootstrapVue components, and the regular Bootstrap package contains the CSS file. We’ve also installed Axios to help with fetching meals for our app from the themealdb API.
Using CDN
We’ve seen the package manager way of installing BootstrapVue into our Vue project. Now let’s take a look at a different approach that may require less effort. To add Bootstrap and BootstrapVue to your Vue project via CDN, open the index.html file in the project’s public folder and add this code in the appropriate places:
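Here is a sketch of what those tags could look like, using unpkg URLs as an example (the stylesheets go in the head, the scripts just before the closing body tag; pin exact versions in a real project):

```html
<!-- In the <head> of public/index.html: the stylesheets -->
<link rel="stylesheet" href="https://unpkg.com/bootstrap/dist/css/bootstrap.min.css" />
<link rel="stylesheet" href="https://unpkg.com/bootstrap-vue/dist/bootstrap-vue.min.css" />

<!-- Just before </body>: Vue must be loaded before BootstrapVue -->
<script src="https://unpkg.com/vue/dist/vue.min.js"></script>
<script src="https://unpkg.com/bootstrap-vue/dist/bootstrap-vue.min.js"></script>
```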
This will pull in the minified, latest version of each library into our project, nice and easy! However, for the purpose of this project, we’ll stick to the first option of using a package manager. So, let’s proceed with setting up the BootstrapVue package.
Setting up BootstrapVue
Next, let’s set up the BootstrapVue package we just installed. Head on over to your main.js file and add these lines of code to the top:
//src/main.js
import Vue from 'vue'
import BootstrapVue from 'bootstrap-vue'

Vue.use(BootstrapVue)
What we did here is pretty straightforward, we imported the BoostrapVue package and then registered it in the application using the
Vue.use() function so that our Vue application can recognize it.
For everything to work, we also need to import the Bootstrap and BootstrapVue CSS files into our project. Add this snippet into the main.js file:
//src/main.js
import 'bootstrap/dist/css/bootstrap.css'
import 'bootstrap-vue/dist/bootstrap-vue.css'
Your main.js file should look similar to the snippet below after importing the necessary modules into your Vue app:
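Assuming the default Vue 2 CLI scaffold, it would look roughly like this:

```js
import Vue from 'vue'
import BootstrapVue from 'bootstrap-vue'
import App from './App.vue'

// Bootstrap CSS first, then the BootstrapVue styles that build on it
import 'bootstrap/dist/css/bootstrap.css'
import 'bootstrap-vue/dist/bootstrap-vue.css'

Vue.use(BootstrapVue)
Vue.config.productionTip = false

new Vue({
  render: h => h(App),
}).$mount('#app')
```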
Creating Bootstrap components
We are now at the stage where we can start exploring the BootstrapVue components. Let’s get started by creating our first component. The first component we’ll create is the Navbar component. Go to your components directory, create a file with the name Navbar.vue, and update it with the code below:
The Navbar components contain several BootstrapVue components, one of which is the
b-navbarcomponent. This component is the mother component of every other component in the Navbar. Without this component, all other components in the Navbar won’t render properly.
The text color on the Navbar can be changed with the
typeprops. The
background-colorof the Navbar can also be changed with the
variantprops. These colors could be any of the normal Bootstrap default colors —
info,
primary,
successetc.
Another component is the
b-navbar-brand component. This is where the logo of the website can be rendered. It also takes in the
variantand
typeprops which can be used to change the
background-color and
text-color respectively.
Other BootstrapVue components are:
- b-nav-form
- b-nav-item-dropdown
- b-dropdown-item
- b-navbar-toggle
- b-collapse
- b-nav-item (which could be disabled with the “disabled” attribute)
- b-navbar-nav
- b-nav-item.
- And so much more
One beautiful thing about BootstrapVue components is that they are responsive by default. As a result, you will not need to write any additional code or use external libraries to make them responsive.
Another component I’d like us to look at is the
Cardcomponent. The card component allows us to display images, text, and so on, in a card. It is written as
b-card. To demonstrate it, let’s create a
Cards.vuefile in our components directory. Then update it with the code below:
To render the Cards component we just created, let’s modify the
HelloWorld.vuefile. Open it up and update it with the code below:
What we’ve done here is create a Cards component and nest it into the
HelloWorld.vuefile. Notice that in the Cards component, we have a lifecycle hook that modifies our data. As a result, the data gets populated into the
b-cardcomponent before being rendered to the browser.
Next, let’s update our
App.vuefile to capture our recent changes and render the right component to the browser. Open it up and update it with the snippet below:
At this point, if you check back on the browser, you should see our meal store app running like this:
As you can see, our card isn’t properly laid out and we’ll have to correct that. Luckily, BootstrapVue has some in-built components we could use to put our cards in a grid.
They are:
- b-row, and
- b-col
Let’s modify the code in our
Cards.vuefile to render the content in a grid with the BootstrapVue components we mentioned above. Open up the
Cards.vue file and update it with the snippet below:
Now if we check back on the browser, we should see a properly laid out card with rendered contents in a grid.
Now we have a neatly rendered responsive meals application. We built all of this with just a handful of BootstrapVue’s components. To learn more about BootstrapVue and all of the things you can do with it, consider checking out the official documentation.
Handling migrations
What if you would like to migrate an existing project from regular Bootstrap4 to BootstrapVue? How simple would it be? It’ll be a breeze. Here’s all you need to do:
- Remove the
bootstrap.jsfile from build scripts
- Remove
jQueryfrom your application, BootstrapVue works independently
- Convert your markup from native Bootstrap to BootstrapVue custom component markup
And that’s it! In three clear steps, you can migrate your existing projects from regular jQuery dependent Bootstrap to the simpler independent BootstrapVue package without breaking any existing code.
Conclusion
In this post, we have demonstrated by way of examples how to get started with BootstrapVue. We went from the installation steps to setting it up in a Vue project, and finally using its custom components to build out parts of our Mealzers application. As we can see, the BootstrapVue module is simple and easy to use. If you have a working knowledge of the regular Bootstrap package, getting started with BootstrapVue will be a breeze.. | https://blog.logrocket.com/getting-started-with-bootstrapvue-2d8bf907ef11/ | CC-MAIN-2019-39 | refinedweb | 1,600 | 64 |
Transform3d’ in namespace ‘Eigen’ does not name a type
asked 2012-09-02 14:12:53 -0600
This post is a wiki. Anyone with karma >75 is welcome to improve it.
Hello everyone!
I'm a beginner in ROS and I was working around this tutorial: Coding a real time Cartesian controller with Eigen
However, the following declaration:
" typedef Eigen::Transform3d CartPose;" leads to the error: "Transform3d’ in namespace ‘Eigen’ does not name a type".
Any suggestion is very much welcome!
Yuquan
Could you post some more context? Which version of ROS? Surrounding lines of the error? These types of errors mostly occur if you are missing '#include' statements, or don't have the (proper versions of) headers installed.
Yes, I'm using ubuntu 11.10 and ROS-electric. I guess probably it's a problem about eigen2 or eigen3. I'll try. Thanks!
It's probably easier to edit your original question, instead of trying to use comments for it. | https://answers.ros.org/question/42836/transform3d-in-namespace-eigen-does-not-name-a-type/ | CC-MAIN-2018-05 | refinedweb | 161 | 67.45 |
void ccifrm_c ( SpiceInt frclss,
SpiceInt clssid,
SpiceInt lenout,
SpiceInt * frcode,
SpiceChar * frname,
SpiceInt * center,
SpiceBoolean * found )
Return the frame name, frame ID, and center associated with
a given frame class and class ID.
FRAMES
FRAMES
VARIABLE I/O DESCRIPTION
-------- --- --------------------------------------------------
frclss I Class of frame.
clssid I Class ID of frame.
lenout I Maximum length of output string.
frcode O ID code of the frame.
frname O Name of the frame.
center O ID code of the center of the frame.
found O SPICETRUE if the requested information is available.
frclss is the class or type of the frame. This identifies which
subsystem will be used to perform frame transformations.
clssid is the ID code used for the frame within its class. This
may be different from the frame ID code.
lenout The allowed length of the output frame name. This length
must large enough to hold the output string plus the
null terminator. If the output string is expected to have
n characters, `lenout' should be n + 1.
frcode is the frame ID code for the reference frame
identified by `frclss' and `clssid'.
frname is the name of the frame identified by
`frclss' and `clssid'.
`frname' should be declared
SpiceChar frname [33]
to ensure that it can contain the full name of the
frame..
None.
1) This routine assumes that the first frame found with matching
class and class ID is the correct one. SPICE's frame system
does not diagnose the situation where there are multiple,
distinct frames with matching classes and class ID codes, but
this situation could occur if such conflicting frame
specifications are loaded via one or more frame kernels. The
user is responsible for avoiding such frame specification
conflicts.
2) If `frname' does not have room to contain the frame name, the
name will be truncated on the right. (Declaring `frname' to have
a length of 33 characters will ensure that the name will not be
truncated.
3) If a frame class assignment is found that associates a
string (as opposed to numeric) value with a frame class
keyword, the error SPICE(INVALIDFRAMEDEF) will be signaled.
4) If a frame class assignment is found that matches the input
class, but a corresponding class ID assignment is not
found in the kernel pool, the error SPICE(INVALIDFRAMEDEF)
will be signaled.
5) If a frame specification is found in the kernel pool with
matching frame class and class ID, but either the frame name
or frame ID code are not found, the error
SPICE(INVALIDFRAMEDEF) will be signaled.
6) If a frame specification is found in the kernel pool with
matching frame class and class ID, but the frame center
is not found, the error will be diagnosed by routines
in the call tree of this routine.
7) The error SPICE(NULLPOINTER) is signaled if the output string
pointer is null.
8) The caller must pass a value indicating the length of the output
string. If this value is not at least 2, the error
SPICE(STRINGTOOSHORT) is signaled.
The frame specifications sought by this routine may be provided
by loaded frames kernels. Such kernels will always be required if
the frame class is CK, TK, or dynamic, and will be required if
the frame class is PCK but the frame of interest is not built-in.
This routine allows the user to determine the frame associated
with a given frame class and class ID code. The kernel pool is
searched first for a matching frame; if no match is found, then
the set of built-in frames is searched.
Since the neither the frame class nor the class ID are primary
keys, searching for matching frames is a linear (and therefore
typically slow) process.
Suppose that you want to find the frame information of a named frame,
"ITRF93" for this example. One could use the following code fragment:
#include <stdlib.h>
#include <stdio.h>
#include "SpiceUsr.h"
#include "SpiceZfc.h"
#define FRNAMLEN 33
SpiceChar frname[FRNAMLEN];
SpiceInt clss;
SpiceInt clss_ID;
SpiceInt frcode1;
SpiceInt frcode2;
SpiceInt center1;
SpiceInt center2;
SpiceBoolean found;
int main()
{
namfrm_ ( "ITRF93", &frcode1, 6 );
frinfo_c ( frcode1,
¢er1, &clss, &clss_ID, &found );
ccifrm_c ( clss, clss_ID, FRNAMLEN,
&frcode2, frname, ¢er2, &found );
if ( !found )
{
puts( "No joy");
exit(1);
}
printf( "Class : %d \n"
"Class ID : %d \n"
"Fame name : %s \n"
"Frame Code: %d \n"
"Center ID : %d \n",
(int)clss,
(int)clss_ID,
frname,
(int)frcode2,
(int)center2 );
exit(0);
}
The program outputs:
Class : 2
Class ID : 3000
Frame name: ITRF93
Frame Code: 13000
Center ID : 399
See item (1) in the Exceptions section above.
N.J. Bachman (JPL)
-CSPICE Version 1.0.1, 04-FEB-2017 (EDW)(BVS)
Edit to example program to use "%d" with explicit casts
to int for printing SpiceInts with printf.
Shortened one of permuted index entries.
-CSPICE Version 1.0.0, 14-JUL-2014 (NJB)
Added to the Brief_I/O header section a description
of input argument `lenout'.
Last update was 10-JAN-2011 (NJB)(EDW)
Find info associated with a frame class and class id
Map frame class and class id to frame info
Map frame class and class id to frame name, id, center
Wed Apr 5 17:54:29 2017 | https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/ccifrm_c.html | CC-MAIN-2019-30 | refinedweb | 861 | 70.94 |
New to Java and OOP in general here.
I'm trying to understand when an object's members are able to be used. Specifically, why can't I pass an instance variable to a constructor when calling it within the class? Example code below:
public class Point {
public String name;
{
name = "Henry";
}
public Point()
{
this(this.name); // cannot reference this before supertype constructor has been called
}
public Point(String name)
{
this.name = name;
}
}
this.name
this(this.name)
new Point()
this.name
this(variable)
Point myPoint = new Point();
myPoint.name
"Henry"
Point()
this(this.name) // which should be "Henry"
The only time that constructor is going to be executed is when an object is created using new Point(), by which time an object reference will be available, and the compiler wouldn't have anything to complain about.
I'd say that the key point here is that the object reference is only available after the constructor has been called. Hence referencing this during the constructor's operation doesn't really make sense.
Also remember that with inheritance, you can have multiple constructors being called before a particular object is instantiated/created. For example:
public class ParentPoint { public String name = "Parent Point"; public ParentPoint() { // do something } } public class Point extends ParentPoint { public Point() { super(); // do something else } }
In this case you'll have two constructors that are called before Point is fully instantiated (so the flow you mention in your OP isn't always that straightforward).
Xavier Silva's answer code is a good one, and also having getters and setters would be ideal too:
public class Point { public String name = "Henry"; public Point() { } public Point(String name) { this.name = name; //changes current value (Henry) to the one sent as an argument. } public void setName(String newName) { this.name = newName; } public String getName() { return this.name; } public static void main(String[] args) { Point pointA = new Point(); Point pointB = new Point("Other name"); System.out.println(pointA.name); // Henry System.out.println(pointB.name); // Other name pointA.setName("New name"); System.out.println(pointA.getName()); // New name } }
The final bit demonstrates getters and setters, the rest of the code is unmodified. | https://codedump.io/share/LrJu8DeG9cSD/1/how-does-the-flow-of-object-creation-work-in-java | CC-MAIN-2017-09 | refinedweb | 359 | 57.57 |
How to update an existing plot in Pythonista?
- Jeff_Burch
Matplotlib animation. I understand that Pythonista doesn't support the matplotlib animation APIs. Are there any work-arounds?
I have an IOT application where measurement data is multicast from the IOT device and received by my iPAD for graphing. I wish to build a simple "strip chart" plot that updates each time new data is received.
I can create a new plot with plt.show( ) each time but my console eventually is overloaded with a series of plot figures. How can I update an existing plot and have it rendered right away?
Unfortunately the fig.canvas.flush_events( ) API is not implemented in Pythonista. That's what I use on my desktop.
Here's a simple example that works great on the desktop. What modifications are needed to get this to work in Pythonista? The fig.canvas.flush_events() is a problem on Pythonista.
import numpy as np import time import matplotlib.pyplot as plt plt.ion() fig = plt.figure() tstart = time.time() # for profiling x = np.arange(0,2*np.pi,0.01) # x-array line, = plt.plot(x,np.sin(x)) for i in np.arange(1,200): line.set_ydata(np.sin(x+i/10.0)) # update the data plt.draw() fig.canvas.flush_events() plt.pause(0.001) #redraw figure, allow GUI to handle events print ('FPS:' , 200/(time.time()-tstart)) plt.ioff() plt.show() ``` | https://forum.omz-software.com/topic/3611/how-to-update-an-existing-plot-in-pythonista | CC-MAIN-2017-26 | refinedweb | 234 | 63.56 |
.
We’re going to do exactly that in this article, and it will work with plain JavaScript. Since I often use Gatsby for static sites though, we will also show how to do it in React. From there, the ideas should be usable regardless of what you want to do with your favicon and how.
Here’s a function that take an emoji as a parameter and returns a valid data URL that can be used as an image (or favicon!) source:
// Thanks to const faviconHref = emoji => `data:image/svg+xml,<svg xmlns=%22 width=%22256%22 height=%22256%22 viewBox=%220 0 100 100%22><text x=%2250%%22 y=%2250%%22 dominant-baseline=%22central%22 text-anchor=%22middle%22 font-size=%2280%22>${emoji}</text></svg>`
And here’s a function that targets the favicon
<link> in the
<head> and changes it to that emoji:
const changeFavicon = emoji => { // Ensure we have access to the document, i.e. we are in the browser. if (typeof window === 'undefined') return const link = window.document.querySelector("link[rel*='icon']") || window.document.createElement("link") link.type = "image/svg+xml" link.rel = "shortcut icon" link.href = faviconHref(emoji) window.document.getElementsByTagName("head")[0].appendChild(link) }
(Shout out to this StackOverflow answer for that nice little trick on creating the link if it doesn’t exist.)
Feel free to give those a try! Open up the DevTools JavaScript console, copy and paste the two functions above, and call
changeFavicon("💃"). You can do that right on this website and you’ll see the favicon change to that awesome dancing emoji.
Back to our clock/time project… If we want to show the emoji with the right emoji clock showing the correct time, we need to determine it from the current time. For example, if it’s 10:00 we want to show 🕙. If it’s 4.30 we want to show 🕟. There aren’t emojis for every single time, so we’ll show the best one we have. For example, between 9:45 to 10:14 we want to show the clock that shows 10:00; from 10:15 to 10:44 we want to show the clock that marks 10.30, etc.
We can do it with this function:
const currentEmoji = () => { // Add 15 minutes and round down to closest half hour const time = new Date(Date.now() + 15 * 60 * 1000) const hours = time.getHours() % 12 const minutes = time.getMinutes() < 30 ? 0 : 30 return { "0.0": "🕛", "0.30": "🕧", "1.0": "🕐", "1.30": "🕜", "2.0": "🕑", "2.30": "🕝", "3.0": "🕒", "3.30": "🕞", "4.0": "🕓", "4.30": "🕟", "5.0": "🕔", "5.30": "🕠", "6.0": "🕕", "6.30": "🕡", "7.0": "🕖", "7.30": "🕢", "8.0": "🕗", "8.30": "🕣", "9.0": "🕘", "9.30": "🕤", "10.0": "🕙", "10.30": "🕥", "11.0": "🕚", "11.30": "🕦", }[`${hours}.${minutes}`] }
Now we just have to call
changeFavicon(currentEmoji()) every minute or so. If we had to do it with plain JavaScript, a simple
setInterval would make the trick:
// One minute const delay = 60 * 1000 // Change the favicon when the page gets loaded... const emoji = currentEmoji() changeFavicon(emoji) // ... and update it every minute setInterval(() => { const emoji = currentEmoji() changeFavicon(emoji) }, delay)
The React partThe React part
Since my blog is powered by Gatsby, I want to be able to use this code inside a React component, changing as little as possible. It is inherently imperative, as opposed to the declarative nature of React, and moreover has to be called every minute. How can we do that?
Enter Dan Abramov and his amazing blog post. Dan is a great writer who can explain complex things in a clear way, and I strongly suggest checking out this article, especially if you want a better understanding of React Hooks. You don’t necessarily need to understand everything in it — one of the strengths of hooks is that they can be used even without fully grasping the internal implementation. The important thing is to know how to use it. Here’s how that looks:
import { useEffect } from "react" import useInterval from "./useInterval" const delay = 60 * 1000 const useTimeFavicon = () => { // Change the favicon when the component gets mounted... useEffect(() => { const emoji = currentEmoji() changeFavicon(emoji) }, []) // ... and update it every minute useInterval(() => { const emoji = currentEmoji() changeFavicon(emoji) }, delay) }
Finally, just call
useTimeFavicon() in your root component, and you are good to go! Wanna see it work? Here’s a deployed CodePen Project where you can see it and here’s the project code itself.
Wrapping upWrapping up
What we did here was cobble together three pieces of code together from three different sources to get the result we want. Ancient Romans would say Divide et Impera. (I’m Italian, so please indulge me a little bit of Latin!). That means “divide and conquer.” If you look at the task as a single entity, you may be a little anxious about it: “How can I show a favicon with the current time, always up to date, on my React website?” Getting all the details right is not trivial.
The good news is, you don’t always have to personally tackle all the details at the same moment: it is much more effective to divide the problem into sub-problems, and if any of these have already been solved by others, so much the better!
Sounds like web development, eh? There’s nothing wrong with using code written by others, as long as it’s done wisely. As they say, there is no need to reinvent the wheel and what we got here is a nice enhancement for any website — whether it’s displaying notifications, time updates, or whatever you can think of.
This should use
~=instead of
*=.
*=would match anything with the characters “icon” in it, such as
apple-touch-icon; but
~=is for matching space-separated lists of tokens, like
classand like
rel.
Also on that topic,
shortcut iconis an archaism from ancient versions of IE that didn’t support SVG or dynamic favicons anyway; so you might as well just write
icon.
(I’ve just overhauled the Stack Overflow answer this came from.)
Hey Carlo, fantastic use of object literals instead of a switch statement or endless if-elses. Your code is super clean and easy-to-understand. I love seeing the implementation with the useEffect and useInterval React Hooks!
I can definitely imagine using something like this myself, replacing at the least the static favicon.ico with a useEffect call!
This is the worst worst worst worst worst feature for usability I’ve ever seen.
Sounds like you’ve only taken a short walk around the internet.
Better avoid unnecessarily verbose code like
which can be rewritten as
Won’t
changeFaviconadd a new link each time instead of replacing the current link?
I wrote a React hook for to update the favicon a while back. It contains function to convert a canvas into an href for the favicon. Ans it has a function to set an emoji SVG favicon, like in this article. I published it on GitHub if anyone’s interested Check the demo page. | https://css-tricks.com/how-to-create-a-favicon-that-changes-automatically/ | CC-MAIN-2021-17 | refinedweb | 1,166 | 63.9 |
Report Design Tips and Tricks
SQL Server Technical Article
Fang Wang
Thanks to Robert Bruckner and Chris Hays for their help with the content.
March 2007
Revised May 2007
Applies to:
Microsoft SQL Server 2005 Reporting Services
Summary: This white paper covers best practices on report design and helps you avoid common mistakes when choosing a report layout and output format. Take advantage of existing product features to achieve the results you want. The paper includes report and code examples that implement functionality that is frequently requested. (32 printed pages)
Click here to download the Word document version of this white paper, ReportDesignTips.doc.
Contents
Introduction
Best Practices and Tips
Report Samples
For More Information
Introduction
Microsoft SQL Server Reporting Services is a comprehensive solution for creating, managing, and delivering both traditional, paper-oriented reports and interactive, Web-based reports from different types of data sources. Reporting Services provides a wide variety of options for report controls that you can use to display data and graphical elements in your reports. You can create reports for your application by using one of the design tools shipped with the product, including Report Designer—integrated with Microsoft Visual Studio 2005—and Report Builder, an ad hoc reporting tool that enables business users to create their own reports and explore corporate data, based on a report model. Reports can be rendered into a wide range of formats such as HTML, XML, CSV, Microsoft Office Excel, PDF, and image files.
Although Reporting Services incorporates commonly used functionality and makes it easy and flexible for users to create and manage reports that meet their specific requirements, report design can be a very challenging task given the complexity of business needs. Report authors face many decisions and tradeoffs when choosing the best way to design and deploy a report. This white paper answers the following questions and provides a set of report and code samples:
- What are the best practices for designing a report?
- How do I avoid common mistakes when choosing a report layout and picking an output format?
- How do I take advantage of existing features to achieve the results I want, when no direct support is present?
Note This document assumes that the reader is familiar with the Reporting Services product, report design concepts, and possesses a basic knowledge on how a report is processed and managed by the report server.
Best Practices and Tips
This section describes some common user scenarios and provides report authors and administrators with useful information to help the decision-making and troubleshooting processes.
Design Your Reports for Performance and Scalability
For their reporting platform, many organizations, from small-scale personal businesses to large enterprises, use Reporting Services as the main information delivery channel for daily operations. When handling large or complex reports, report authors and administrators often encounter questions such as:
- Can my report handle a large amount of data?
- How will the server be affected when many users view a report during peak hours?
- How do I troubleshoot a performance problem?
To achieve the best performance, there are a number of factors you should take into account when designing your reports.
Query Design
A report is processed in the following order: query execution, report processing, and report rendering. To reduce overall processing time, some of the first things you need to decide are which data to retrieve from the data source, and which calculations to include in the report. Reporting Services supports a wide range of data sources, including plain file systems, advanced database servers, and powerful data warehousing systems. The functionality of data sources and the structure of the report determine which operations should be done in the query, as opposed to which operations should be done inside of the report. Although the Reporting Services processing engine is capable of doing complex calculations such as grouping, sorting, filtering, data aggregation, and expression evaluations, it is usually the database system that is best optimized to process some or all of these data operations. With that said, always keep the following in mind:
- Optimize report queries.
Query execution is the first step of the reporting process. Having a good understanding of the performance characteristics of your database system is the first step to good query design. For example, you can use SQL Profiler to track the performance of your queries if you use SQL Server as the database server.
- Retrieve the minimum amount of data needed in your report.
Add conditions to your query to eliminate unnecessary data in the dataset results.
If the initial view of the report shows only aggregated data, consider using drill-through to display details. Drill-through reports are not executed until the user clicks the drill-through link, whereas show/hide reports or subreports are processed during the initial report processing, even if they are hidden.
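As a hedged sketch of these guidelines, the following query pushes filtering and aggregation down to the database server so that only summary rows are returned to the report. The table and column names (an AdventureWorks-style sales schema) and the @StartDate/@EndDate report parameters are assumptions for illustration:

```sql
-- Filter and aggregate on the database server, not in the report.
-- Schema objects and parameters below are illustrative only.
SELECT
    soh.OrderDate,
    st.Name AS TerritoryName,
    SUM(soh.TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesTerritory AS st
    ON soh.TerritoryID = st.TerritoryID
WHERE soh.OrderDate >= @StartDate      -- report parameters limit the rows
  AND soh.OrderDate <  @EndDate        -- retrieved from the data source
GROUP BY soh.OrderDate, st.Name
ORDER BY soh.OrderDate;
```

Mapping the report parameters to the query parameters in the dataset definition keeps the filtering in the query itself, rather than in a report filter expression that runs after all rows have already been retrieved.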
Rendering Formats
Reports can be exported into various rendering formats through rendering extensions. Different rendering extensions provide different functionality. Their performance and scalability characteristics are also very different. You must understand the difference in their behavior before you can make the right choice to satisfy your business requirements. For more information, see Choose Appropriate Rendering Formats for Your Business Needs in this paper.
Pagination
Reports are rendered into pages based on where page breaks occur. Although the entire report is processed before the rendering phase, the performance of the rendering extension is significantly impacted by the page size. Also, in some cases (for example, HTML), how well the report is displayed in the client tool depends on the page size, as well as other factors. For information on the different types of page breaks and how they are used in various rendering formats, see Control Page Size.
Check the Execution Log to Assess Performance
Once a report is deployed on the report server, the administrator will want to monitor the activities associated with the report, including how frequently it is requested, what output formats are used, how many resources are consumed, and other performance information. The Check Reporting Services Log Files section describes how to use the execution log to gather this data. It also tells you how to use the information to optimize report execution.
Choose Appropriate Rendering Formats
Reporting Services supports seven rendering formats out of the box—HTML, XML, CSV, GDI, PDF, Image, and Office Excel. A rendering extension interface is also provided for third-party vendors to implement additional rendering extensions to support other formats.
The following table contains a brief description of the built-in rendering extensions. They are listed in the order of memory consumption, from least to the most.
Table 1. Reporting Services rendering extensions
Control Page Size
Controlling page size in the report not only changes the report layout, it also impacts how the report is handled by the Report Processing and Rendering engine. There are three types of page breaks: logical page breaks, soft page breaks, and physical page breaks.
Logical Page Breaks
Logical page breaks are defined in the report definition by using the PageBreakAtStart and PageBreakAtEnd properties in various report elements, including group, rectangle, list, table, matrix, and chart. Logical page breaks are considered in all rendering extensions except for data rendering extensions (XML and CSV).
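A minimal sketch of a logical page break in a report definition follows (element names follow the RDL 2005 schema; the group itself is hypothetical):

```xml
<!-- Forces a page break after each instance of the territory group. -->
<TableGroup>
  <Grouping Name="GroupByTerritory">
    <GroupExpressions>
      <GroupExpression>=Fields!TerritoryName.Value</GroupExpression>
    </GroupExpressions>
    <PageBreakAtEnd>true</PageBreakAtEnd>
  </Grouping>
</TableGroup>
```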
Soft Page Breaks
Interactive rendering extensions (HTML and WinForm control) use the report's InteractiveHeight and InteractiveWidth properties to determine page size. The rendered pages can be slightly larger than the specified size; therefore, they are called soft pages. Soft pages were introduced to solve the issue of getting long pages in HTML when report authors did not include logical page breaks in the report definition to achieve reasonable page sizes. Very long pages are not only unreadable, but also cause performance problems in Internet Explorer. Soft page breaks can be disabled by setting the InteractiveHeight and InteractiveWidth to zero.
Physical Page Breaks
Physical page size is defined by setting the PageHeight and PageWidth properties of the report. Physical page breaks are used in physical page rendering extensions, such as PDF and Image. These rendering extensions respect the page size precisely during pagination. The PageHeight and PageWidth values defined in the report definition can be overridden by device information settings in the rendering extension. For details, see Use Device Info to Control the Behavior of a Rendering Extension.
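For example, when a report is rendered through URL access, device information settings can override the physical page size for the PDF renderer. The server name and report path below are placeholders:

```
http://myserver/reportserver?/SampleReports/Sales&rs:Command=Render&rs:Format=PDF&rc:PageHeight=11in&rc:PageWidth=8.5in
```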
Check Reporting Services Log Files
When you experience problems with report execution or need to interpret the performance characteristics of your reports, check the log files described in this section. Reporting Services uses these log files to record information about server operations and status.
Windows Application Log
The report server logs events to the Windows Application log of the local computer. Only information that can be useful for diagnosing a hardware or software problem is logged. The event log is not intended to be used as a tracing tool. The Reporting Services trace logs are used for that purpose. You can use the Windows Event Viewer to view and filter the events based on the event sources, types, and categories. For details on the event sources, types and categories, see Windows Application Log and Reporting Services Errors and Events in SQL Server Books Online.
Reporting Services Trace Logs
The level of detail that the server records in the trace files can be configured by setting DefaultTraceSwitch in the configuration files. For the Report Server Web service, this option is in the Web.config file. The switch for the Report Server Windows service is in ReportingServicesService.config. Four trace levels are available:
1—error (any exception and restarts)
2—warning (for example, warnings related to low resources)
3—info (including informational messages)
4—verbose
By default, the trace level is set to 3.
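As a sketch, the switch lives in the system.diagnostics section of the configuration file (the surrounding structure follows the standard .NET trace-switch schema; verify against the file shipped with your installation):

```xml
<system.diagnostics>
  <switches>
    <!-- 1=error, 2=warning, 3=info (default), 4=verbose -->
    <add name="DefaultTraceSwitch" value="3" />
  </switches>
</system.diagnostics>
```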
- Error information
When a failure happens in the report server, check the trace log for error details if there is not enough information given in the UI. The trace files should contain the complete stack trace dump, which is helpful to diagnose the problem. One example of this would be subreport errors. When a subreport fails, it does not prevent the main report and other subreports from executing. A message is displayed in place of the subreport contents telling the user that an error occurred while executing the subreport. You may want to view the trace log to find out what the problem was.
The trace logs are located in \Microsoft SQL Server\<SQL Server Instance>\Reporting Services\LogFiles. New log files are created every time a Reporting Services component starts. They are created daily starting with the first entry that occurs after midnight. The logs can become very large, depending on the traffic on the Reporting Services component. Therefore, when looking up file contents, search for the entries with the corresponding timestamp.
Report Server Execution Log
The execution log is a database table that contains report execution information on a report server. It is an internal table residing in the Report Server catalog. The table name is ExecutionLog. Server administrators can use the information in the execution log to analyze report usage and execution performance.
The execution log records data about which reports are executed, when each report starts and ends, how much time is spent on each processing phase, who requested the report, and the export format. Following are fields you may find particularly interesting for performance tuning:
TimeDataRetrieval—Time (in milliseconds) spent executing the query and retrieving data.
TimeProcessing—Time spent processing the report.
TimeRendering—Time spent rendering the report.
Format—Requested rendering format.
ByteCount—Size of rendered report in bytes.
RowCount—Number of rows returned from queries.
The combination of the time spent in the three processing phases (query execution, report processing, and report rendering) determines how long report execution takes. Finding a good balance between what to do in the query and what is processed in the report is very important. These numbers can provide a great starting point for administrators and report authors to monitor and optimize report execution on the server.
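As a sketch of how an administrator might use these fields (assuming the default catalog database name ReportServer; adjust for named instances), a query like the following surfaces the reports whose combined execution time is highest:

```sql
-- Sketch: average phase timings per report, slowest first.
-- Assumes the default catalog database name "ReportServer".
SELECT TOP 10
    c.Name                   AS ReportName,
    COUNT(*)                 AS Executions,
    AVG(e.TimeDataRetrieval) AS AvgDataRetrievalMs,
    AVG(e.TimeProcessing)    AS AvgProcessingMs,
    AVG(e.TimeRendering)     AS AvgRenderingMs
FROM ReportServer.dbo.ExecutionLog AS e
JOIN ReportServer.dbo.Catalog AS c
  ON c.ItemID = e.ReportID
GROUP BY c.Name
ORDER BY AVG(e.TimeDataRetrieval + e.TimeProcessing + e.TimeRendering) DESC;
```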
To make it easier for users to view the execution log data, a sample SQL Server Integration Services (SSIS) package is shipped as part of the product. This package extracts the information from the execution log table and puts it into a separate database. For details, see Querying and Reporting on Report Execution Log Data in SQL Server Books Online.
Use Device Info to Control the Behavior of a Rendering Extension
Device information settings are a set of rendering parameters that are passed to a rendering extension. Through the use of device information, users can override the default settings specified in the report definition and change the behavior of a rendering extension. There are different sets of device information settings for different rendering extensions. For a complete list of settings, see Reporting Services Device Information Settings in SQL Server Books Online. Following are some of the most commonly used device information (DeviceInfo) settings.
HTML
The LinkTarget device information setting specifies the value of the Target attribute used to render hyperlinks in the report. The default behavior is to navigate within the same frame as the original report. You can target a specific frame by setting it to the name of the frame. It can also be set to _blank (opens a new window), _self (the current frame), _parent (the immediate frameset parent), or _top (the top window, without frames).
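When rendering through URL access, device information settings are passed with the rc: prefix; for example (server name and report path hypothetical):

```
http://myserver/reportserver?/SampleReports/Company Sales&rs:Format=HTML4.0&rc:LinkTarget=_blank
```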
PDF/Image
If an explicit PageHeight or PageWidth is specified in the device information settings, it overrides the value defined in the report definition. Thus, different users can supply different values when they export the same report to PDF or image files. StartPage and EndPage tell the rendering extension to render a range of pages, from StartPage to EndPage.
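For example, to export only pages 2 through 5 of a report to PDF via URL access (server name and report path hypothetical):

```
http://myserver/reportserver?/SampleReports/Company Sales&rs:Format=PDF&rc:StartPage=2&rc:EndPage=5
```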
Office Excel
If you want the report page header rather than the first row of the worksheet to be rendered as an Office Excel page header, set the SimplePageHeader device information setting to true. Since the functionality supported by an Office Excel page header is quite limited, doing this means that you might lose information in your page header.
By default, the Office Excel rendering extension tries to preserve Office Excel formulas whenever possible. Visual Basic .NET formulas are translated to Office Excel formulas. The references to report items on the current page are converted to the appropriate cell references. If you don't want this behavior, set OmitFormulas to true.
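When calling the Render method through the Web service, these settings are passed as a DeviceInfo XML fragment; a sketch combining the two Excel settings discussed above:

```xml
<DeviceInfo>
  <SimplePageHeader>true</SimplePageHeader>
  <OmitFormulas>true</OmitFormulas>
</DeviceInfo>
```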
XML
The value of the XSLT parameter specifies a relative URL of an XSLT to apply to the report XML. Note that only XSL files that are published on the same report server can be used.
By default, the XML rendering extension uses the original value of the expression evaluation result when rendering a text box. The type of the text box is set to the type of the original value in the XML. You can use the UseFormattedValue device information setting to override this behavior. If it is set to true, the formatted value (which is always a string) is used instead. In this case, the type of the text box will be string.
Multivalue Parameters
Multivalue parameters are very useful to many users. This also seems to be one of the areas that customers need the most help with.
A multivalue parameter is a report parameter that can take a set of values. To define a multivalue parameter, simply add a parameter and mark it as multivalue when creating a report. There are two main scenarios where multivalue parameters are used.
Use a Multivalue Parameter Inside a Report
The properties exposed on a report parameter object include IsMultiValue, Count, Value, and Label. You can access the value and label of a multivalue parameter as zero-based arrays.
=Parameters!<paramName>.IsMultiValue—Returns True if the parameter is defined as multivalue.
=Parameters!<paramName>.Count—Returns the number of values in the array.
=Parameters!<paramName>.Value(0)—Returns the first selected value.
=Parameters!<paramName>.Label(0)—Returns the label for the first selected value.
=Join(Parameters!<paramName>.Value)—Creates a space-separated list of values.
=Join(Parameters!<paramName>.Value, ", ")—Creates a comma-separated list of values.
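Multivalue parameters can also be fed straight into a dataset query; for SQL Server data sources, Reporting Services expands the selected values into the IN list automatically (table and parameter names invented):

```sql
-- @EmployeeIDs is a multivalue report parameter; the server expands it
-- into a comma-separated list before the query runs.
SELECT OrderID, EmployeeID, Freight
FROM   Orders
WHERE  EmployeeID IN (@EmployeeIDs)
```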
When you open a Report Definition Language (RDL) file that was created in SQL Server 2000 Reporting Services in a SQL Server 2005 version of Report Designer, or publish it to a SQL Server 2005 report server, the file is automatically upgraded to the SQL Server 2005 format. But is it possible to downgrade an RDL file so that it can be used with the 2000 version of the product again? There is no automated tool to do this, but you can modify the XML file manually to accomplish it.
To modify the RDL file
- Change the RDL namespace:
- Remove the following elements entirely if present in the RDL file:
- To remove all occurrences of interactive sort in the RDL file, remove all <UserSort> elements, including the inner contents.
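The namespace change in the first step amounts to editing the xmlns attribute on the root Report element; the URIs below are the standard 2005 and 2000 RDL namespaces (verify them against your own files):

```xml
<!-- SQL Server 2005 RDL: -->
<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition">
<!-- change to the SQL Server 2000 RDL namespace: -->
<Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2003/10/reportdefinition">
```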
Report Samples
Reporting Services supports a variety of report items and built-in functions for report authors to use when designing reports. While creating simple reports has been made quite easy, it can be problematic to find solutions for report display and/or calculations that are not very straightforward. This section covers some of these more complex scenarios to help report developers achieve the functionality they are looking for by using combinations of or alternatives to existing features.
If a dataset contains values for duplicate records, you don't want to count those values multiple times. Reporting Services supports a CountDistinct function, but there is no corresponding function for Sum; you can achieve a distinct sum by using custom code.
To calculate distinct sum by using custom code
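The usual technique (a sketch, not the article's original steps; function and field names invented) is report custom code that remembers which keys it has already counted, so each distinct record contributes its value only once:

```vb
' Report > Report Properties > Code. Tracks keys already seen so each
' distinct record contributes its value only once.
Private seen As New System.Collections.Hashtable()
Private total As Decimal = 0

Function SumDistinct(ByVal key As Object, ByVal value As Decimal) As Decimal
    If Not seen.ContainsKey(key) Then
        seen.Add(key, Nothing)
        total = total + value
    End If
    Return total
End Function
```

A text box in the group footer can then use =Code.SumDistinct(Fields!OrderID.Value, Fields!Freight.Value). As with any shared state in custom code, this is sensitive to evaluation order, so test it against your report layout.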
Direct references to field values in page headers and footers are not currently possible. A common workaround is to add a hidden text-box in the report body, and refer to the text-box value from the page header or footer. Does this work for an image? Yes, but since the image data is a binary array, while the text-box value is a string, you need to use a Microsoft .NET base64 encoding mechanism to convert the data between binary and string types.
To add an image that uses the value
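A sketch of the two expressions involved (text box and field names invented): the hidden text box in the report body encodes the binary image field as a string, and the image in the page header decodes it back.

```
' Hidden text box "HiddenLogo" in the report body:
=Convert.ToBase64String(Fields!Logo.Value)

' Value expression of a database image in the page header:
=Convert.FromBase64String(ReportItems!HiddenLogo.Value)
```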
Reporting Services doesn't currently support using multiple datasets in one data region. To display an aggregate value from the dataset, specify the dataset scope in the aggregate function; for example, =Sum(Fields!Freight.Value, "DataSet2"). This syntax works even in a data region that is bound to a different dataset. If, however, you need to correlate the data from the two datasets in one data region, use the following two workarounds:
A frequently asked question is whether it is possible to change the report structure at run time. You can, of course, dynamically generate RDL files in your custom application. There are also other ways to modify the structural elements in the report definition to achieve this. This section describes various scenarios and solutions.
Dynamic Field Based on Parameter, Dynamic Grouping
How do I decide which field value to use dynamically—for example, based on a parameter in my report? How can I give my users the ability to dynamically select the fields on which to group within a report?
Use this syntax to refer to a field value in your expression:
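The collection-lookup form of the Fields collection does this; for example, with a parameter (name invented) holding the field name:

```
=Fields(Parameters!GroupByField.Value).Value
```

The same expression can be used as a group expression, which lets users pick the grouping field at run time.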
It is not possible to change the filter operator at run time, but you can include all the conditions in the filter expression and leave the filter operator and the value as static (filter operator is =, and filter value is =true). This way, by dynamically switching the conditions in the filter expression by using IIF, you can achieve the effect of changing the filters based on user input.
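A sketch of such a filter expression (parameter and field names invented), with the filter operator fixed at = and the filter value fixed at =true:

```
=IIF(Parameters!FilterOp.Value = "GreaterThanOrEqual",
     Fields!Sales.Value >= Parameters!FilterValue.Value,
     Fields!Sales.Value < Parameters!FilterValue.Value)
```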
To dynamically change the filter conditions based on parameter value
Figure 15
Figure 16 shows how the report looks when the value of FilterOp is set to GreaterThanOrEqual.
Figure 16
And Figure 17 shows when the user changes the parameter value to LessThan:
Figure 17
Dynamic Columns
I'd like to programmatically determine the columns to display based on what is returned by the dataset. Is that possible?
Yes, you can define a table with all the possible columns, then decide whether to show each column based on the value of the IsMissing property on Field. Here is an example that shows information from two different database tables based on user selection.
To add dynamic columns to your report
My matrix shows the percentage of the sales to a customer out of the total sales for each employee. In the subtotal cells, this is always going to be 100 percent, which is not useful. How can I get different contents in the subtotal cells?
Use the InScope function in your expression to do this. For example, suppose you have a row grouping (called "matrix1_CustomerID"), and a column grouping (called "matrix1_EmployeeID"). To show different contents in the column subtotal cells, set the cell text-box expression to the following:
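A sketch of such an expression (field names invented): InScope is false in the subtotal cells for the named grouping, so those cells fall through to the alternate calculation instead of showing 100 percent.

```
=IIF(InScope("matrix1_CustomerID"),
     Sum(Fields!Sales.Value) / Sum(Fields!Sales.Value, "matrix1_EmployeeID"),
     CountDistinct(Fields!CustomerID.Value))
```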
Dynamic Page Breaks
For the logical page breaks defined in the RDL, the only options seem to be turning them on or off. There is no expression for them. Is it possible to dynamically change which groups have page breaks either before or after?
It is possible, although not very straightforward. You create a dummy group with page break at end equal to true. The expression for the group will be set according to a parameter value.
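A sketch of the dummy group's expression (parameter and field names invented): when the parameter is true the group breaks on the real field, and when it is false every row falls into a single group, so no page breaks occur.

```
=IIF(Parameters!EnablePageBreaks.Value, Fields!Category.Value, Nothing)
```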
To add dynamic page breaks based on the parameter value
Figure 22
Figure 23 shows the report when the page break is turned on.
Figure 23
And Figure 24 when it is turned off:
Figure 24
Resetting the Page Number on a Group
I have many pages and many groups in my report. I can use Globals!PageNumber to display the current page number in the page header, but I want to reset the page number back to one every time I enter a new group. How do I do this?
Resetting the page number on group breaks is not natively supported, but it can be achieved by tracking group breaks in a shared variable in custom code and subtracting off the page offset of the first page of the group from the current page number.
To reset the page number at the start of each group:
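A sketch of the custom code involved (names invented): a page-header expression passes in the current group value and Globals!PageNumber, and the code subtracts the offset recorded when the group last changed.

```vb
' Report custom code: shared state tracking the current group and the
' absolute page number on which it started.
Private currentGroup As Object = Nothing
Private offset As Integer = 0

Function GroupPageNumber(ByVal group As Object, ByVal page As Integer) As Integer
    If currentGroup Is Nothing OrElse Not group.Equals(currentGroup) Then
        currentGroup = group
        offset = page - 1
    End If
    Return page - offset
End Function
```

The page header then displays =Code.GroupPageNumber(ReportItems!GroupName.Value, Globals!PageNumber).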
Figure 27 shows one of the pages in the rendered report in which the group page number is reset when starting a new group.
Figure 27
Note: Because this solution uses static variables, if two people run the report at the exact same moment, there is a slim chance that one will break the other's variable state.
Horizontal Tables
Is it possible to rotate a table in Reporting Services? Usually a table grows vertically, based on the number of detail rows. What if I want it to grow horizontally?
Try using a matrix instead of a table. The table header becomes a static matrix row header. The report detail becomes the column groupings (you'll need to group on a field that's unique across rows if you want each data row to show up as a column). Put the footer in the subtotal column (you may need to use the InScope() function to display contents different from the detail information).
Figure 28 shows a vertical table and a horizontal table that display the same set of data.
Figure 28
It is more difficult to create a matrix with every other row shaded because every row in the matrix must be a group. There is currently no GroupNumber() function on which to base a green-bar calculation. However, GroupNumber can be (mostly) simulated by using the RunningValue function to return a running distinct count of group expression values.
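A sketch of the simulated group number (the group expression field is invented), used to alternate row shading via the BackgroundColor property:

```
=IIF(RunningValue(Fields!Category.Value, CountDistinct, Nothing) Mod 2 = 0,
     "WhiteSmoke", "White")
```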
I'd be curious to know what people's opinions are on the matter of symbol visibility, and whether there are any new and fresh ideas in this space.
Now, you might think that symbol visibility would be a rather dull and pedestrian topic for a language theorist - few research papers on language design even mention the topic. However, I think that this is actually a very interesting area of language design.
Limiting the visibility of symbols has immense practical value, especially for large code bases: the main benefit is that it simplifies reasoning about the code. If you know that a symbol can only be referenced by a limited subset of the code, it makes thinking about that symbol easier.
There are a number of languages which have, in pursuit of the goal of overall simplicity, removed visibility options - for example Go does not support "protected" visibility. I think this is exactly the wrong approach - reducing overall cognitive burden is better achieved by having a rich language for specifying access, which allows the programmer to narrowly tailor the subset of code where the symbol is visible.
Here's a real-world example of what I mean: One of the code bases I work on is a large Java library with many subpackages that are supposed to be internal to the library, but are in fact public because Java has no way to restrict visibility above the immediate package level. In fact, many of the classes have a comment at the top saying "treat as superpackage private", but there's no enforcement of this in the language.
This could easily be solved if Java had something equivalent to C++'s "friend" declaration, the subpackages could then all be made private, and declare the library as a whole to be a friend.
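The convention-based workaround generalizes beyond Java. As a rough Python sketch (all names invented; modules are built in memory only so the example is self-contained), a library can keep internals in an underscore-prefixed submodule and curate its supported surface in one facade:

```python
import sys, types

# Hypothetical library "shapelib": internals live in an underscore-prefixed
# submodule; the package facade re-exports only the supported surface.
_impl = types.ModuleType("shapelib._impl")

def _area(w, h):
    return w * h

def _unstable_helper(x):      # subject to change without notice
    return x + 1

_impl.area = _area
_impl._unstable_helper = _unstable_helper

facade = types.ModuleType("shapelib")
facade.area = _impl.area          # the published, supported API
facade.__all__ = ["area"]         # advertised surface for 'import *'
sys.modules["shapelib"] = facade
sys.modules["shapelib._impl"] = _impl

import shapelib
print(shapelib.area(3, 4))                    # 12: the supported call
print(hasattr(shapelib, "_unstable_helper"))  # False: not re-exported
# A determined caller can still reach shapelib._impl._unstable_helper;
# the underscore says "you own the breakage", it does not forbid access.
```

Like the "treat as superpackage private" comments, this is advisory rather than enforced, which is exactly the complaint.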
However, I wonder if there's something that's even better...
I mostly heartily concur, especially since I just did some stuff in Go. I say mostly because usability is a double-edged banana: things can (a) be poorly done in the language spec, or (b) even if done well, end users can go crazy and make horribly complicated relationships that just make the code harder to grok for the next person down the line.
This is only an idea; I don't know whether it is implemented anywhere. I would prefer to write only minimal visibility annotations within the code, at most something like Node.js's export magic variable. Instead, visibility would be controlled by separate interface files (akin to ML's module interfaces), and you could have several interfaces for different clients: your library would have the full interface to all modules, "subclassing" libraries would have a partial interface, and client programs would have only a very limited API. This would also make it possible to abstract types.
The only thing that is problematic with this approach is that it would be quite cumbersome for the programmer, or at least I have not yet thought of a way to make it obvious which interface applies to which part of the code.
Use modules & module signatures for information hiding.
Not all visibility relationships are expressible via hierarchical containment. Protected is especially useful in languages that support implementation specialization via inheritance.
This is expressible with module signatures, provided you have something that can model inheritance with modules (e.g. mixin composition of modules). Suppose you have class A, with some private and some protected and some public methods. Then you define B to inherit from A. You can model this with a module A, to which you apply a signature which hides the private methods but not the public and protected methods in A. Then you do mixin composition to obtain B, then you apply a signature which hides the protected methods from B to obtain the public version of B. This is of course a bit more manual than with a protected keyword, but I'm not convinced that protected is such a common pattern that it deserves its own keyword if it can be expressed using just signatures.
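The staged hiding described above can be sketched outside ML as well. The following Python sketch (all names invented) models private/protected/public with explicit "signatures" — whitelists of names — loosely mirroring signature ascription:

```python
# Sketch of modeling visibility with explicit "signatures" (name
# whitelists) instead of keywords, loosely mirroring module signatures.
class Restricted:
    """View of an object exposing only the names in a given signature."""
    def __init__(self, target, signature):
        self._target = target
        self._signature = frozenset(signature)

    def __getattr__(self, name):
        if name not in self._signature:
            raise AttributeError(f"{name!r} is outside this view's signature")
        return getattr(self._target, name)

class A:
    def public_api(self):
        return "public"
    def protected_hook(self):
        return "protected"
    def private_detail(self):
        return "private"

SUBCLASS_SIG = ["public_api", "protected_hook"]  # what extenders of A see
PUBLIC_SIG   = ["public_api"]                    # what ordinary clients see

a = A()
for_subclasses = Restricted(a, SUBCLASS_SIG)
for_clients    = Restricted(a, PUBLIC_SIG)

print(for_subclasses.protected_hook())   # visible to implementation heirs
try:
    for_clients.protected_hook()
except AttributeError as e:
    print("clients:", e)
```

As the post says, this is more manual than a protected keyword, but the policy is now ordinary data that a program can construct and vary.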
So we can have a bunch of things that can be wired together to achieve a goal. Or we can implement another tool that is a succinct way of getting that goal. When do we know we should add that to the system? Vs. trying to not do that, to keep things "simple"? What are people's experiences here? I always like the sound of e.g. go-lang's parsimony, but then whenever I go to use something like that it drives me freaking nuts. "Just give me protected!" I rant and rave at the screen...
I don't think there is an easy rule of thumb. You would have to look at a number of case studies and see whether protected makes them clearer and easier to express. Then you decide whether that is worth the added complexity.
Controlling access to information is useful for reasoning about programs. However, there are many means to achieve this, and I'm not at all fond of the 'symbol' based visibility model.
Symbol-based visibility is problematic for several reasons. It greatly increases coupling within the 'class' or whatever unit of visibility is specified. This can hinder reuse, refactoring, and decomposition of said unit into something finer grained. Using symbols for binding can also hinder composition and inductive reasoning: gluing subprograms together (even methods) based on hidden symbols is ad hoc and non-uniform. In some cases, it even hinders testing of subprograms. Further, symbol-based visibility is generally second-class: inflexible, and difficult to extend or to precisely graduate or attenuate.
My alternative of preference is the object capability model (cf. [1][2]). Attenuation can be modeled by transparently wrapping one object within another. But I also see value in typeful approaches. Linear types make it easy to reason about fan-in, fan-out, exclusivity. Modal types can make it feasible to reason about when and where access occurs, locality. Quantified (∀,∃) types with polymorphism can make it easy to reason about what kind of access a given subprogram is given.
Symbol visibility is simplistic, easy to compute compared to some of these other mechanisms. So performance is one area where it has an edge. More sophisticated designs, such as object capability model, would rely on deep partial evaluation and specialization for equivalent performance.
Symbol based visibility is problematic for several reasons.
To the extent that I understand the problems you're complaining about, I think they're solvable without getting rid of symbols or even the use of symbol visibility as the mechanism for information hiding. In particular, it sounds like many of your complaints would be solved with a way to bind to sub-theories rather than pulling in the entire theory of a set of symbols.
If binding is orthogonal to use of symbols, that does eschew one complaint (re: "gluing subprograms together (even methods) based on hidden symbols is very ad-hoc"). But I don't see how it helps with the others.
Could you give some examples of the problems you mention?
The problems I'm noting are mostly shallow in nature. Refactoring is hindered for obvious reasons: code that calls a method private to a specific code unit cannot (directly) be refactored into a separate code unit. Testing is hindered for obvious reasons: ensuring a 'private' method is visible to the testing framework. Flexibility is reduced for obvious reasons: our visibility policies aren't themselves programmable.
The remaining point, regarding composition, is simply an issue of symbol-based binding in general (be it public or private) - especially if the binding is recursive.
Anyhow, while the source problems are shallow, they do add up. And that aggregation becomes the real problem. Isolated examples don't much help with exposing this class of problem. Best I can recommend is that you read about some of the alternatives I mentioned (such as object capability model, linear types) and the motivations behind them.
These problems are of public/private/protected in particular. Modules like in OCaml don't have these problems because you have an explicit construct to selectively hide components: signature ascription (which is similar to casting an object to an interface).
For example, if you have a module Foo to which you give a public signature Bar with PublicFoo = Foo : Bar, then you can still test the hidden functions in Foo by simply testing on Foo instead of PublicFoo.
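The same idea can be sketched in Python terms (names invented): tests exercise the full implementation object, while shipped clients get only the restricted view.

```python
# Tests target the implementation; clients get only the restricted view.
class Foo:
    def exported(self):
        return self._hidden() * 2
    def _hidden(self):          # not part of the public signature
        return 21

def restrict(obj, names):
    """Build an object carrying only the listed attributes of obj."""
    view = type("View", (), {})()
    for n in names:
        setattr(view, n, getattr(obj, n))
    return view

public_foo = restrict(Foo(), ["exported"])  # the analogue of Foo : Bar
print(public_foo.exported())                # 42: black-box, via the view
print(Foo()._hidden())                      # 21: white-box, via Foo itself
```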
The same problems apply to, for example, Haskell's approach to hiding information within modules (based on export control - a form of symbol-based visibility management). It isn't just OO class public/private/etc.
I think refining and/or adapting interfaces is useful for the flexibility and testing concerns. It does not seem sufficient to address the other two concerns.
I tend to use a combination of the object capability model and multiple specifically tailored 'interfaces' per object. As Javascript doesn't have classes (only objects), an interface takes the form of a wrapper object.
The entity that creates an object can then distribute these interfaces on a need to know/do basis. In Javascript this scheme is particularly useful, as it protects (to an extent) against cross-site scripting attacks.
Being javascript, this approach is of course not statically typed. But it could be.
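Capability attenuation of this kind translates to any language with first-class functions. A Python sketch (all names invented): rather than marking methods private, the creator hands each collaborator a wrapper carrying only the authority that collaborator needs.

```python
# Sketch of capability attenuation: the creator keeps the full object and
# hands out narrow wrappers on a need-to-know/do basis.
class LogFile:
    def __init__(self):
        self.lines = []
    def append(self, line):
        self.lines.append(line)
    def read(self):
        return list(self.lines)
    def clear(self):
        self.lines.clear()

class AppendOnlyLog:
    """Attenuated capability: can write, cannot read or erase history."""
    def __init__(self, log):
        self._append = log.append   # capture just one method, not the object
    def append(self, line):
        self._append(line)

log = LogFile()                 # full capability, kept by the creator
worker = AppendOnlyLog(log)     # narrow capability, given to untrusted code
worker.append("job started")
print(log.read())               # ['job started']
print(hasattr(worker, "read"))  # False: that authority was never granted
```

Note that visibility here is a property of which references were handed out, not of keywords on declarations.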
Matthew Flatt has a great paper on explicit phase separation in Racket submodules.
I don't think that a variety of levels, such as private, protected, package, public, etc., justifies the complexity cost. I've worked on quite a few real-world systems where goofy things were done to work around accessibility restrictions. Sometimes, to get the job done, you need to violate encapsulation or otherwise take dependencies on implementation details. I value making this explicit, but do not value making it difficult. For example, the use of a leading underscore in Python is preferable to explicit reflection in Java.
There are also different schools of thought on what the default visibility setting should be. Should you be required to export the public interface? Or hide auxiliary types & functions? I prefer the former, so long as the private bits are still accessible. With the Common.js module system for example, you can't get at internals even if you're willing to consciously violate public contracts.
Although Clojure offers {:private true} metadata, it's pretty common to see all functions defined as public, with the promised/versioned/reliable functionality manually copied from an implementation namespace to a separate, publicly documented namespace. See clojure.core.async for an example of that. As a consumer of such a library, I like this pattern, but wish it was a little less verbose to accomplish myself. There are also some unaddressed issues with aliasing mutable values, such as thread-local vars.
So after the code was mangled to get the product out the door, and the 3rd-party vendor then changed the library and broke the mangling, what happened / what happens? I think the root cause of these problems is not public/protected/private; I think it is, e.g., not using open source code :-) I say that only half in jest.
the root cause of these problems is [...] not using open source code
Just because you can look at or change the source, doesn't mean you can deploy the change.
For example, you may know for a fact that you're deploying your code on to Linux machine running a particular version of the JVM. You don't expect to change operating systems and you're unable to change JVM versions without affecting any other services running on the machines you're deploying to. The Java base classes may not expose a public mechanism for facilities it can not reliably provide for Windows. However, you know that you can safely rely on the fact that a particular private method exists for an underlying native Linux API.
Another example I've encountered: The source code I was programming against existed at runtime as a dynamically loaded library, stored in read-only memory, on widely deployed consumer electronics.
the 3rd party vendor changed the library and thus broke the mangling, what happened / what happens?
Depends. When you knowingly violate a library contract, you need to have a contingency plan. You can 1) not upgrade 2) do feature detection, employing a fallback 3) plan with the 3rd party on a transition plan. Or any of an infinite number of other things.
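Option 2 can be sketched concretely (all names invented): probe for the private hook before relying on it, and degrade gracefully when an upgrade removes or renames it.

```python
# Feature detection with a fallback: probe for the private hook before
# relying on it, and degrade gracefully when an upgrade removes it.
def fast_sum(values, lib):
    hook = getattr(lib, "_simd_sum", None)   # undocumented fast path
    if callable(hook):
        return hook(values)
    return sum(values)                        # supported, slower path

class OldLib:                 # release that still ships the private hook
    @staticmethod
    def _simd_sum(values):
        return sum(values)

class NewLib:                 # upgrade silently dropped the hook
    pass

print(fast_sum([1, 2, 3], OldLib))  # 6, via the private fast path
print(fast_sum([1, 2, 3], NewLib))  # 6, via the public fallback
```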
Hell, I've had to work around accessor protections on C# code written by me!
The world of deployed software is complex.
These are great examples, thanks. So
(1) if the whole software and hardware stack were done sanely such that we didn't have to do all this hackery, what would it look like?
(2) if we assume we can't have a sane stack all the way through (ahem) then at our top level where we consume and interface with those other things, what would that best solution look like? so that (2a) it doesn't make the same mistakes and (2b) it somehow wrangles the mistakes of others? E.g. I mean what if we had a principled approach to wrangling the hacks?
p.s. you're hired!
I mean, it's certainly not *ideal*, but it is very much sane... from the operational perspective of any one rational actor. The net result is that the organism of engineering teams and the software community may seem chaotic, but it's built from individually sane decisions, at least mostly.
You use the word "hackery", but I want to be clear: It's only hackery if you perceive it as such. Going back to my original post, I don't think it's hackery to call private methods denoted by leading underscores in a Python codebase. It is, however, hackery to have to use reflection to do similarly in Java. The distinction as I see it: In Python, I think "Oh, this is a private method. Is it safe to use it? Yes." then I go ahead and use it. In Java, I think the same thing, but have to take perfectly sensible code and mangle it in to the exact same logic, but lifted in to the domain of reflection. It's one thing to encourage (or even require/enforce) some sort of assertion of my intention to violate encapsulation. It's an entirely separate thing to force me to dramatically switch interfaces to my language runtime to accomplish a task that is mechanically identical between the two. And it's *totally unacceptable* to disallow such behavior completely.
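The contrast drawn above can be sketched (class name invented): violating Python's underscore convention is ordinary code, while the same intent in Java must be lifted into the reflection API.

```python
# Violating the underscore convention is plain code, not a mode switch.
class Widget:
    def _recalc(self):        # "private" by convention only
        return "recalculated"

w = Widget()
print(w._recalc())            # explicit contract violation, but plain code

# Java equivalent of the same intent, for contrast (not runnable here):
#   Method m = Widget.class.getDeclaredMethod("recalc");
#   m.setAccessible(true);
#   m.invoke(w);
```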
(1) if the whole software and hardware stack were done sanely such that we didn't have to do all this hackery, what would it look like?
Pretty much exactly the same, just significantly less in total (non-hackery included)!
what if we had a principled approach to wrangling the hacks?
I may be pretty far from what you were asking now, but hopefully this gives you some insight in to my perspective. I'll take your questions to be:
"How should we handle publishing, enforcing, and amending agreements between parties within software systems?"
I think that this is a very human question. In practice, this is most often solved with social (rather than technological) means. Honestly, I don't have any good technical answers. I'd just rather we discuss the problem holistically, rather than assume that there is a universal logical system that will solve the problem magically.
I've said before: How does a type system prove that your game is fun? It doesn't. Similarly, how does a function being marked as "public" guarantee that somebody upstream won't just delete the function and break the API? It doesn't!
it is very much sane... from the operational perspective of any one rational actor [..] it's built from individually sane decisions
This strikes me as analogous to: "every voice in my head is sane, therefore I'm sane" ;)
You found the part of programming that is more like making sausage than math. A lot of attention used to be given to this sort of question at places like Apple in the late '80s and early '90s, where not breaking third-party developers was a priority, and interface design was influenced by the expected future need to make changes that don't instantly break apps depending on established API contracts.
It's hard to change what has been exposed, so good design often amounted to "don't expose it if you want to reserve option to change", and paid lip service to name and version schemes. But it's only easy to do linear versions, which isn't nearly granular enough for multiple complex interactions among devs who sinter together libraries with a graph of version inter-dependencies.
How should we handle publishing, enforcing, and amending agreements between parties within software systems?
Symbol names have something to do with publishing. Folks who love types dearly hope enforcement is done via types, even though this might require strong AI to represent complex interface contracts correctly in a verifiable way.
Amending agreements is a lot like mutation in concurrent code: don't do it when possible to avoid, because the chaos is costly to resolve. You should not change contracts without also changing names and/or types too, in a way that unambiguously tells consumers in responsible notification.
At a personal level, when updating old code, it's very dangerous to change the meaning of any old method, or a field in data structures. In the worst case, it might compile and build anyway, despite static type checking, and pass tests just well enough to let you ship before you find out what you did. Much safer is a scheme to add new methods and stop using old ones (if contracts permit this). Just treat code like immutable data in functional programming, and you'll usually be fine; let old deprecated code become unused and then garbage collected. But you may never know when devs consuming a library stop using old symbols.
In my daily work, if I ever change the meaning of a field, I also change its name so a build breaks if I miss a single place that needs fixing. Every single place a symbol is used must be examined to see if new behavior matches the expected contract at each point. The old name becomes a "todo: fix me" marker that must be studied everywhere it appears.
There isn't a nice answer. Making sausage is not for the squeamish.
It's hard to change what has been exposed, so good design often amounted to "don't expose it if you want to reserve option to change"
This is why I've drawn a distinction between visible (or accessible) and published. "Exposed" is too vague.
Let's go with the Apple example. It is trivial to see or call all the private bits of an Objective-C API. However, you need to go *looking* for the private stuff. Either with tools (object browsers, decompilers, etc) or with source (not so lucky in this example). Apple enforces the ban on the use of private APIs with automated validation tools on their end. They can only do this now because they control the app store, the primary distribution channel. This wasn't always true, so they had to try to discourage utilizing private functionality by not publishing their existence (primarily in documentation, including auto-complete) and by introducing some barrier to accidental use (not published in default headers).
Although distribution control enables enforcement, it isn't necessary for the approach to work. A warning could be raised during build, test, lint, or load time. That warning could be treated as an error at any of those times too, even if recognized earlier. For example, a network administrator may choose not to allow private API usage in deployed applications on company workstations for fear of future compatibility issues. However, you can bet your bottom dollar that administrator would want the ability to overrule such a restriction for a critical line of business application! Can always pay somebody to fix it; might even be worth it.
(Read my use of "exposed" as meaning exposed in the public contract, not merely discoverable when you poke around behind the facade. Otherwise merely existing means being exposed and there's no difference between existing and exposed.)
I stipulate your points; we seem to agree. Finding entry points, then making up your own interfaces so you can call them will very often work — up until someone patches private code with supposedly no third-party consumers. For example, nothing stops you from re-declaring C++ classes with private fields made public, but responsibility for the contract violation is clear when this happens.
To the extent Apple polices use of private interfaces, they are doing a service to third-party developers who might otherwise simply get burned when rules are broken. A bad quality experience for users reflects well on no one, so Apple has incentive to stop devs from burning themselves. I find it slightly amusing Apple gets cast as the bad guy here. You can't let people insert themselves wherever they want.
(For example, you can't stop a burglar from picking locks and taking up residence in a living room easy chair. But that doesn't mean they get to stay when you come home. Finding and picking the lock doesn't grandfather a new contract they write themselves without your consent. It would make a funny comic strip panel though.)
I think it's good to have tools letting you express what you wish was (publicly) visible for various sorts of use, using both general rules and explicit case-by-case white-listing as seems convenient. Additionally, I think devs should think about each entry point and decide (then document) who is expected to call what and when.
When you knowingly violate a library contract, you need to have a contingency plan. You can 1) not upgrade 2) do feature detection, employing a fallback 3) plan with the 3rd party on a transition plan. Or any of an infinite number of other things.
Right. Except that in practice, that doesn't work. What happens in practice is this: you provide a component A, some other party provides a component B using your A, and stupidly, relying on implementation details it is not supposed to rely on. And then there are plenty of clients Ci who use A and B together.
Now you release a new version of A -- and it breaks all Ci. Now, those clients couldn't care less that it was actually B who should be blamed for all their problems. You'll get all the fire, and you'll have to deal with it. And more often than not, you'll be forced to back out of some changes, or provide stupid workarounds to keep irresponsible legacy code working. Or not change the system to begin with.
That is not a scalable model of software development. Modularity only exists where it can be properly enforced.
and stupidly, relying on implementation details it is not supposed to rely on
An engineer takes a look at a library, estimates the implementation and assumes a number of invariants, needs a performant algorithm and thus designs it against those invariants, tests it, and it works. There is a good case that the provider of component A shares the blame, since he could have known that the concreteness of software development forces his users to break the abstraction. (A good module, where it may be assumed that you cannot get away with a pristine abstraction, exposes its innards in a good manner; I am not claiming spaghetti code is good.)
And then, somewhere, a too high-brow attitude anyway. "Look, a compiler is a functor! Now everything is neat and explained." Software development must, and will, be messy since reality will always kick in. If A breaks B then we'll fix it again.
Ideally, the prohibition against reliance on encapsulated details would be tied to publication. i.e. the system will not let component B be published because it relies on internal details of component A and the publisher of A has selected a policy of not allowing such dependence.
But it would be hard to enforce this technically unless everything was going through a marketplace for publication. I suppose it could be enforced legally (the license specifies no dependence on internal details). Even if you choose firm language-level rejection of encapsulation violations, the effectiveness of such measures depends on the distribution method. Are you distributing a library as a header file and binary blob, or running a web service?
I know a man who told me he fired a programmer because "his code looked like a painting." True story.
Generalizing from that sample size of one, I doubt a marketplace for publication will be accepted.
Not only do I think it's hard to enforce module encapsulation, I think the rewards from cracking open an encapsulated module are sometimes worth the cost of brittleness and voided warranties.
Ideally, the module system would make it easy to publish code that respects encapsulation boundaries, but would still make it possible to publish code that doesn't. Whoever installs a disrespectful module should have to manually resolve and maintain this situation quite a bit more than if they installed a respectful one, but only because that's essential complexity. If the module system hadn't made it possible, they'd still have spent that effort plus whatever effort they needed to work around the module system.
Three days ago, I probably wouldn't have said this. This thread's been food for thought.
I've also been thinking about legal annotations on code, some of which could talk about what's allowed at publication time, so our positions are very similar.
Releasing a new version of A doesn't necessarily break anyone. Deploying a new version of A does that. If B is locked to A version 1 and you change the relied upon internals in A version 2, then the Ci clients will only all fail if you force the new A upon them.
Different language ecosystems perform at different levels of poorly in this regard. I'd like to see more progress in the versioning, release management, deployment, and other software ecosystem aspects. However, there's also some fundamental differences between releasing software libraries and deploying services.
The blame will fall on the last system to change anything. If Team A deploys v2 of Service A to a shared cluster, they will break all the Ci clients. However, if Team A' deploys v2 of some Code Artifact A to a build repository, then Team B will get the public blame, and rightfully so, if they upgrade to A v2 and then redeploy without any validation.
My point is basically this: depending on internals is going to happen from time to time. How smoothly you can deal with the repercussions is what's most important to me.
Stepping back a bit (which is unhelpful in a specific situation, but helpful when planning for, say, language design), the problem seems to be a shear between A's declared interface (what it's claimed to do) and practical interface (what it actually does, for the purposes of B). We judge B by testing it, which is practical, and therefore if there's a shear between the declared and practical interfaces, the actual form of B will favor the practical over the declared. Testing is always the preferred criterion (what's that line about 'I've only proven it correct, I haven't tested it'?), so it's... impractical... to demand that B depend only on A's declared interface unless it's possible to test B using the declared interface. Language design affects the shape of B's practical dependencies on A, the shape of A's declared interface, and how well or badly matched those are to each other. Hmm.
I don't follow. Are you suggesting that testing needs more privileged access (to dependent components) than ordinary execution? If so, why? If not, how is testing relevant to the problem?
And how is Knuth's quote related? The problem of today's SE practice certainly isn't too much trust in proofs and too little testing -- it's too much trust in tests and too little reasoning. And FWIW, tractable reasoning is only enabled by proper (i.e., reliable) abstraction.
When an abstraction leaks, no privilege is needed.
The client ultimately ("end user") cares that the software does what they want it to, full stop. In a showdown, actual behavior trumps abstruse mathematical contracts. It follows, logically, that the winning move for contracts is to not be in conflict with behavior.
I think there needs to be a mechanism for separating implementation from interface. I don't think hiding beyond that is necessary, although I don't agree with the OO approach of combining data-hiding with objects. I find the Ada way of having modules for data hiding and tagged types for object polymorphism as a much better system.
Having said that the object capability model is the way to go for an operating system. For real runtime data-hiding you need to manipulate the segment registers or page tables anyway.
I think the reasons for each are different and need to be kept separate, interface/implementation data hiding is about enabling structure in large projects and allowing teams of people to work together effectively and is a static source code thing. Capabilities are about security and need runtime enforcement to be secure, and are dynamic, as I should be able to remove a permission from a running program.
I like to give the hiding of implementation details a more concrete rubric: code with public contract.
Whatever is public is what is necessary for the consumer of the interface. Sometimes this does mean exposing a mechanism or implementation detail-- because it's necessary for proper use.
That which is enforced-private should be all those elements of the implementation irrelevant to the consumer.
Capability models then have a framework to sit in. In an operating system where there is a menagerie of consumers, a fine-grained and variable approach to interface consumption fits cleanly.
I try to avoid modifying the internals of libraries. I don't want to be tied to maintaining compatibility with future versions, so I stick strictly to the API, never use private-APIs. I would rather re-implement the functionality in the application than use a private-API and I would adjust development times, and prioritise features accordingly. As I prefer to use open-source libraries (even when developing on closed platforms like iOS), in the rare cases where library changes are required I have worked with the library developers to get the changes I need accepted into the library as an upstream patch, meaning I don't have to be responsible for future maintenance of that code.
I understand the temptation to use private APIs and break the abstractions, but in my hard-won experience it is always a bad idea.

(Source: http://lambda-the-ultimate.org/node/4965)
Use this procedure to mirror the global namespace, /global/.devices/node@nodeid/.
Become superuser on a node of the cluster.
Place the global namespace slice in a single-slice (one-way) concatenation.
Use the physical disk name of the disk slice (cNtXdY sZ).
Create a second concatenation.
Create a one-way mirror with one submirror.
If the metadevice namespace is physically connected to more than one node (multihosted), enable the localonly property.
Perform the following steps to enable the localonly property of the raw-disk device group for the disk that is used to mirror the global namespace. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.
If necessary, use the scdidadm(1M) command to determine the name of the raw-disk device group for the disk that is mirrored.
Only the node whose disk is mirrored should remain in the node list for the raw-disk device group.
Specifies the cluster-unique name of the raw-disk device group
Specifies the name of the node or nodes to remove from the node list.
Specifies that the localonly property is enabled.
Hi,
I am working on a ListView with a customized list adapter, in which I have a progress bar in each list item, and those progress bars should increment as per the service result (the service downloads some file from the network and returns the progress of the download). But the problem which I am facing is that I am not able to get the reference of an individual progress bar to set its progress. So the question is: which is the best way to update a particular view item in a list on an event? I could not even find any such example in APIDemos.
Could anyone please suggest a solution for this, or the best or recommended way to update a view item in a list on an event?
Right now what I am doing is keeping references to views in a Map<String listId, Object view>, and on an event I recognize from the key which progress bar to update, then using the stored object from the Map I get the ProgressBar object again and set the progress of that particular progress bar.
But because of this, when I try to update (suppose) the first progress bar with its stored reference, the 8th progress bar, which is currently out of view, gets updated... and we come to know this when focus goes to that list item and the first list item goes out of focus, as the first view object gets reused for the next view item in the list (currently the 8th ListView item) when it goes out of focus.
My code is
public class CustomCursorAdapter extends SimpleCursorAdapter
{
private Map<String, Object> mViewInfo = new HashMap<String, Object>();
........
public void bindView(View view, Context context, Cursor cursor)
{
...........
mViewInfo.put(rowId, view);
..........
}
......
public void setProgress(String rowId, int totalSize, int downloadedSize) {
int progress = mCurrentProgress = (downloadedSize*100)/totalSize;
View mView = (View)mViewInfo.get(rowId);
ProgressBar mProgressBar = (ProgressBar)mView.findViewById(R.id.progressbar);
mProgressBar.setProgress(progress);
}
........
}
Thanks in advance
Tinky | http://www.anddev.org/view-layout-resource-problems-f27/how-to-update-a-view-item-in-a-listview-on-an-event-t4115.html | CC-MAIN-2016-30 | refinedweb | 321 | 54.46 |
uiManager
Type: Class
How to get uiManager?
JavaScript
import uiManager from 'fontoxml-modular-ui/src/uiManager.js'
Allows registration of React components to be used in the Fonto application.
Methods
registerCustomIcon
Type: Function
Registers a custom icon defined as an SVGModule or string of SVG under a given name. Use the given name as the icon name when you want to use it.
Strongly recommended tips for sizing your icons:
Your icon should be easily viewable at a minimum-height of 14px. (this size is used when the icon is inline with other text; icon size 's' in FDS)
Define an SVG viewBox of 0 0 180 140, this has the same 18:14 aspect ratio as the existing Font-Awesome icons. This helps to automatically align different icons visually.
Remove the height and width attributes from the svg element if present.
Tip for using a custom color: to allow setting the icon color when you use your custom icon, use 'currentColor' whenever you want to render the primary color of your SVG.
Eg.
JavaScript
// place this in the install.js of any package:
uiManager.registerCustomIcon(
	'swirl',
	'<svg version="1.1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 180 140"><path fill="transparent" stroke="currentColor" stroke-width="…" d="…"/></svg>'
);
// (Note: stroke is set to currentColor and the path stays within a viewBox of 0 0 180 140.)
// (Note: the <?xml version="1.0" encoding="UTF-8"?> declaration is omitted: it's optional.)
Do not include anything inside the svg that isn't used to draw the svg (shape/path). So do not include an svg
<title> element for example, that is only used to (automatically) display as a tooltip when hovering the svg. This is done by the browser natively and conflicts with the tooltipContent prop on an FDS Icon for example.
Tip for handling long svg strings: import and register them in your install.js file, like so:
JavaScript
import uiManager from 'fontoxml-modular-ui/src/uiManager.js';
import myCustomIconSVG from './my-custom-icon.svg';

export default function install() {
	uiManager.registerCustomIcon('myCustomIcon', myCustomIconSVG);
}
Usage somewhere in React/JSX:
<Icon icon="swirl" />
Usage as a widget when configuring a CVK element:
createIconWidget('swirl')
Anywhere a icon name is expected, a custom icon name that is registered can be used.
Tip: you can even override existing (FontAwesome) icons by registering a custom icon under a pre-existing FontAwesome icon name.
Arguments
registerReactComponent
Type: Function
Registers a React component using the given name.
Component can be either the constructor of a class extending React.Component (or one of its subclasses), or a stateless component function.
Related links
Arguments | https://documentation.fontoxml.com/latest/uimanager-d01b1d92d618 | CC-MAIN-2021-31 | refinedweb | 438 | 56.96 |
Keychain & TouchID
Hi,
not sure if anyone else is interested, but here's an update of the Pythonista
keychain module drop-in replacement. Available as a gist now, will be included in the Black Mamba later.
It's a proof of concept, will probably change API, add more things, ... Don't use this for serious work now.
What's this all about? iOS keychain is powerful and allows you to specify things like:
- when the password is accessible (unlocked device, after first unlock, ...),
- if the password should be synchronised to other devices as well,
- if the user presence is required whenever script wants to access password,
- etc.
Unfortunately, Pythonista
keychain module doesn't allow us to control this. It's just simple text password storage. The module does use the system keychain, but there are no options to control all these things. That's the reason why I started to work on this enhancement.
Why? The reason is quite simple. I want to store sensitive data, like our production keys on the iPad and I do not want to use Pythonista
keychain module. Because then any other script can silently retrieve my keychain items. One can say: which script? As the author of Black Mamba, I'm going to say Black Mamba, for example, just not to offend authors of other scripts. Did you install Black Mamba? Did you read the source code? How can you be sure that I'm not silently calling
get_services,
get_password for every service and then sending all these passwords to my server? Of course I'm not doing this. It's just an example. But these things happen. Google for PyPI, npm, ... issues and you'll see. You can say that I shouldn't store production passwords, ... on my iPad within Pythonista when I'm using 3rd party modules. Yes, I shouldn't via the
keychain module. I wanted to solve this somehow and thus here's this new module allowing me to set a user presence requirement, disable syncing, ...
Here's an example:
gp = GenericPassword('s', 'a2')
gp.set_password('hallo3',
                accessible=kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
                access_control=kSecAccessControlTouchIDAny)
p = get_password('s', 'a2')
print(p)
What does it do? It stores the password hallo3 for the service s and account a2. Whenever you want to retrieve it, an iOS system dialog appears (every single time) requiring you to place your finger on the Touch ID sensor. Also you can control what happens if fingerprints are changed (you can get your password deleted), whether you require the same fingerprint set or any, whether biometrics are required or a passcode is enough, etc.
This is system level stuff. Even when you change code to this one ...
gp = GenericPassword('s', 'a2')
gp.set_password('hallo3',
                accessible=kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
                access_control=kSecAccessControlTouchIDAny)

import keychain
p = keychain.get_password('s', 'a2')
print(p)
... you'll still be asked for your fingerprint. Also you'll be asked for your fingerprint if you'd like to change password for existing service & account which is already protected with biometrics, etc.
I'll finish this one even if no one is interested. But as I would like to include this in the Black Mamba, comments are appreciated.
@zrzka I am very interested in this, I think the ability to lock 'secrets' behind TouchID (and I assume FaceID) will be great, so thanks for creating this.
I haven't had a chance to play with this yet, but I assume that you copy keychain.py into site-packages and it will be used instead of the built-in keychain module?
Keeping the compatibility layer is a great thing, but I think it would be beneficial if it either had a helper method, say set_password_with_touchid(.....), or at least optional parameters on set_password etc. that allowed you to set accessible and access_control; this way you don't have to interact with the GenericPassword class.
@zrzka This looks very interesting, and I'm sure that it'll give me a few ideas about how to improve Pythonista's built-in
keychain module.
But I'd appreciate if you could rename your module to something else (better_keychain, keychain2, or whatever). Conflicting module names are a pretty frequent support issue for Pythonista. All kinds of weird things can happen when people have modules in site-packages that have the same name as a built-in module.
@shaun-h yep, FaceID is biometrics as well. This shared gist is a proof of concept. If you'd like to play with it, rename it to security.py, place it into site-packages-3 and then import security. This will be included in BM as blackmamba.framework.security. iOS keychain support comes from the Security.framework and I'll put all the wrappers into blackmamba.framework & the framework name. Do not use keychain.py, just to avoid clashes. If you do not want to rewrite existing code, you can do import security as keychain :)
I'll introduce more convenience funcs later when I finish it. Would like to add InternetPassword, probably certificates as well, ... Then I will have to think about classes, methods, ... again, refactor a little bit and then do the convenience funcs. I do not want to refactor all these convenience funcs as well when the underlying GenericPassword will surely be modified. Saving some time :)
@omz yup, will rename it in the gist as well. As I wrote, it will be
blackmamba.framework.security. It's just WIP for now.
Edit: Done.
BTW for ctypes users, I found a way to extract symbols. Example for kSecClass:

load_framework('Security')

def _symbol_ptr(name):
    return c_void_p.in_dll(c, name)

def _str_symbol(name):
    return ObjCInstance(_symbol_ptr(name)).UTF8String().decode()

kSecClass = _str_symbol('kSecClass')

And kSecClass now contains class, which is the kSecClass symbol value in the Security framework. I was extracting them on a Mac before and I no longer need that now :)
(@zrzka You probably found this out already, but I'll explain it just in case, and for others reading the thread)
in_dll only works for variables declared as extern in the headers, which is often the case for NSStrings and other object constants. On the other hand, integer constants are almost always preprocessor macros or static variables, which are not accessible at runtime. Those constants' values cannot be looked up using in_dll and have to be copied from the appropriate headers.
Sometimes the values of extern const variables are written in the headers, so they could be copied from there, but it's better to use in_dll where possible. That way, if Apple changes the value of a string constant in a new iOS version, the new value will be used automatically (since in_dll loads it at runtime from the respective library/framework). If the value is copy-pasted from the headers, it may become incorrect in future iOS versions, and the code will break.
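As an aside, the exported-vs-non-exported distinction is easy to try outside iOS, since in_dll works against any loaded library. The sketch below uses CPython's own exported data symbol PyExc_ValueError instead of the Security framework, so it runs anywhere:

```python
import ctypes

# in_dll resolves an *exported data symbol* at runtime. CPython exports the
# PyObject* behind built-in exceptions, so we can demonstrate the mechanism
# against the interpreter itself instead of an iOS framework.
sym = ctypes.c_void_p.in_dll(ctypes.pythonapi, "PyExc_ValueError")
assert sym.value  # non-NULL: the dynamic loader resolved the symbol for us

# A preprocessor macro or a static variable has no entry in the dynamic
# symbol table, so the same lookup raises ValueError:
try:
    ctypes.c_int.in_dll(ctypes.pythonapi, "THIS_IS_NOT_EXPORTED")
except ValueError:
    print("not an exported symbol")
```

This is exactly why the integer kSec* constants had to be copied from the headers while the string constants could be read with in_dll.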
@dgelessus thanks for pointing this out. I'll add one more: ObjCInstance with a symbol pointer works in this case because of toll-free bridging. One can be confused how CFStringRef can be used with ObjCInstance and then treated as NSString. Here's the documentation quote:
You can find all supported types at the end of the linked page.
Okay, some news for Friday, something to play with over the weekend. I've updated the gist.
Exceptions
They're pretty self-explanatory, but ...
KeychainUserInteractionNotAllowedError - this exception is raised whenever you try to get a keychain item which is protected and authentication_ui is set to .FAIL
KeychainItemNotFoundError - this exception is raised whenever you try to get an existing keychain item which is protected and authentication_ui is set to .SKIP
KeychainUserCanceledError - this exception is raised whenever you try to get a keychain item which is protected and you tap on the Cancel button (system dialog)
Enums
AuthenticationPolicy
It controls how the user should authenticate to gain access. It's an IntFlag (Python 3.6) and you can combine the values.
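For readers unfamiliar with IntFlag, here's what "combine" means in plain Python. The member names below mirror AuthenticationPolicy, but the numeric values are made up for illustration; they are not the real Security framework constants:

```python
from enum import IntFlag

# Illustration only: names mirror AuthenticationPolicy, values are invented.
class Policy(IntFlag):
    USER_PRESENCE = 1
    BIOMETRY_ANY = 2
    DEVICE_PASSCODE = 4

# Combining flags is a bitwise OR; membership tests read naturally.
combined = Policy.BIOMETRY_ANY | Policy.DEVICE_PASSCODE
assert Policy.BIOMETRY_ANY in combined
assert Policy.USER_PRESENCE not in combined
assert int(combined) == 6  # flags are just ORed integers underneath
```

The Accessibility enum below is a plain Enum by contrast, which is why its values can't be combined.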
Accessibility
It controls when the keychain item is available. You can't combine these. Basically it's about always, after first unlock, or when unlocked, optionally combined with this device only. The special value WHEN_PASSCODE_SET_THIS_DEVICE_ONLY says that the item will be stored only while you have a passcode, fingerprints, ... Whenever you remove fingerprints and/or the passcode, the item will be automatically deleted.
AuthenticationUI
ALLOW - authentication UI is allowed (default); if the item is protected, UI will appear
SKIP - if the item is protected, it behaves like it doesn't exist
FAIL - if the item is protected, KeychainUserInteractionNotAllowedError is raised
Classes
AccessControl
Here you can combine accessibility & authentication policy.
GenericPasswordAttributes
Almost all available attributes of generic password keychain items. You can get them via the GenericPassword instance method get_attributes, or you can get a list of them via the GenericPassword class method query_items.
GenericPassword
Class for manipulating generic passwords.
Goals
- Provide a way to protect password items
- Prepare for other keychain item classes (like internet password)
- Raise for errors, silent errors are evil
- Provide Python enums, classes, ... to hide CF* stuff
- Provide a compatibility layer with the Pythonista keychain module
If you open the gist, you'll see a lot of stuff there. You should use only the things defined in __all__. Nothing else. At least for now :)
Tried to hide complexity, but still pretty complex :) Will think about it more. Enjoy :)
Examples
The first line of any example below should be from security import *.
Pythonista way (compatibility)
set_password('s', 'a', 'p')
assert get_password('s', 'a') == 'p'
delete_password('s', 'a')
assert get_password('s', 'a') is None
GenericPassword way
p = GenericPassword('s', 'a')
p.set_password('p')
assert p.get_password() == 'p'
p.delete()
try:
    p.get_password()
except KeychainItemNotFoundError:
    pass
else:
    assert False
Password attributes
p = GenericPassword('s', 'a')
p.comment = 'Comment'
p.label = 'Label'
p.description = 'Description'
p.is_invisible = False
p.is_negative = False
p.generic = b'custom data'  # not a password
p.accessibility = Accessibility.WHEN_PASSCODE_SET_THIS_DEVICE_ONLY
p.set_password('p')

a = p.get_attributes()
assert a.comment == p.comment
assert a.label == p.label
assert a.description == p.description
assert a.is_invisible == p.is_invisible
assert a.is_negative == p.is_negative
assert a.generic == p.generic
assert a.accessibility is p.accessibility
Protect with user presence (pass code, touch id, ...)
p = GenericPassword('s', 'a')
p.access_control = AccessControl(
    Accessibility.WHEN_PASSCODE_SET_THIS_DEVICE_ONLY,
    AuthenticationPolicy.USER_PRESENCE
)
p.set_password('p')

# System UI will appear, if you hit Cancel, KeychainUserCanceledError is raised
assert p.get_password() == 'p'
Access protected with prompt
assert p.get_password(prompt='Zrzka wants your password') == 'p'
Disable authentication UI and auto fail for these items
# We have lost our finger, just ask system to fail automatically if it's protected
try:
    p.get_password(
        prompt='Zrzka wants your password',
        authentication_ui=AuthenticationUI.FAIL
    )
except KeychainUserInteractionNotAllowedError:
    print('Ooops, my finger is lost and I cannot retrieve my password')
else:
    assert False
Skip protected items
# We have lost our finger, just ask system to skip all these items
try:
    p.get_password(
        prompt='Zrzka wants your password',
        authentication_ui=AuthenticationUI.SKIP
    )
except KeychainItemNotFoundError:
    pass
else:
    assert False
Query for services & account
attrs = GenericPassword.query_items()
for x in attrs:
    print(f'{x.service} {x.account} {x.creation_date}')
Query for accounts for specific service
attrs = GenericPassword.query_items(service='service')
for x in attrs:
    print(f'{x.service} {x.account} {x.creation_date}')
Skip protected accounts
attrs = GenericPassword.query_items(authentication_ui=AuthenticationUI.SKIP)
for x in attrs:
    print(f'{x.service} {x.account} {x.creation_date}')
Use prompt for protected accounts
attrs = GenericPassword.query_items(prompt='Your finger boy!')
for x in attrs:
    print(f'{x.service} {x.account} {x.creation_date}')
Two more things ...
- all methods / funcs can show system UI except delete; this one is not protected,
- you should never call get_attributes, get_data, get_password, set_data, set_password, query_items on the main thread if authentication_ui is set to .ALLOW (default in iOS),
- you can if you explicitly pass .FAIL or .SKIP.
The enhanced & documented gist is now part of the Black Mamba as the blackmamba.framework.security package. Basically I added InternetPassword support, polished it a little bit, tried to document everything that can be used, etc.
Several notes:
- Not yet released, just in the master branch.
- Because not yet released, you have to use latest documentation to see documentation for this module.
very interesting.
I just tried it on my iPhone X, but get an error like "failed to get keychain item". any suggestion? | https://forum.omz-software.com/topic/4601/keychain-touchid/9 | CC-MAIN-2019-35 | refinedweb | 1,957 | 51.24 |
#include <mw/gulutil.h>
Link against: egul.lib
Provides static functions for manipulating colours.
See also: TDisplayMode
Brightens or darkens a 24-bit colour by a percentage.
If the percentage given is less than 100%, a darker colour will be returned. The algorithm brightens or darkens each of the R, G and B channels equally.
Creates a CFbsBitmap containing a colour gradient.
To create a gradient, the end colour aEndColor must be different to the start colour aStartingColor.
Gets the colours to use for a control's border.
Lighter and darker tones in the border are derived from the specified TRgb background colour using an algorithm operating on the RGB value of this colour, or a lookup table, depending on the display mode aMode. It sets the values of the aBorderColors members iBack, iLight, iMidlight, iMid, and iDark.
Creates a medium dark version of the colour.
This function darkens the colour 50% less than RgbDarkerColor(). | http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-023BEA0C-7C45-3B89-9755-298EBF94D3B2.html | CC-MAIN-2021-25 | refinedweb | 155 | 67.25 |
Webservice::InterMine::Query::Roles::Runnable - Composable behaviour for runnable queries
This module provides composable behaviour for running a query against a webservice and getting the results.
Returns a results iterator for use with a query.
The following options are available:.
The two json formats allow low-level access to the data-structures returned by the webservice.
In preference to using the iterator, it is recommended you use Webservice::InterMine::Query::Roles::Runnable::count instead.
The number of results to return. Leave undefined for "all" (default).
The first result to return (starting at 0). The default is 0.
Whether to return the column headers at the top of TSV/CSV results. The default is false. There are two styles - friendly: "Gene > pathways > name" and path: "Gene.pathways.name". The default style is friendly if a true value is entered and it is not "path".
Possible values: (inflate|instantiate|raw|perl)
What to do with JSON results.
The results can be returned as inflated objects,
full instantiated Moose objects,
a raw json string,
or as a perl data structure.
(default is
perl). as well..
A synonym for results_iterator. See Webservice::InterMine::Query::Roles::Runnable::results_iterator.
returns the results from a query in the result format specified.
This method supports all the options of results_iterator, but returns a list (in list context) or an array-reference (in scalar context) of results instead.
Return all rows of results. This method takes the same options as
results, but any start and size arguments given are ignored. Note that the server code limits result-sets to 10,000,000 rows in size, no matter what.
Return the first result (row or object). This method takes the same options as
results, but any size arguments given are ignored. May return
undef if there are no results.
Return one result (row or result object), throwing an error if more than one is received.
A convenience method that returns the number of result rows a query returns.
Alias for get_count
Get the url for a webservice resource.
get the url to use to upload queries to the webservice.
Save this query in the user's history in the connected webservice. For queries this will be saved into query history, and templates will be saved into your personal collection of private templates.
Alex Kalderimis
<dev@intermine.org>
Please report any bugs or feature requests to
dev@intermine.org.
You can find documentation for this module with the perldoc command.
perldoc Webservice::InterMine::Query::Roles::Runnable
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~intermine/Webservice-InterMine/lib/Webservice/InterMine/Query/Roles/Runnable.pm | CC-MAIN-2017-26 | refinedweb | 442 | 59.7 |
Purpose: This demo shows how to construct and manipulate a complementary pair of neurons.
Comments: These are leaky integrate-and-fire (LIF) neurons. The neuron tuning properties have been selected so there is one ‘on’ and one ‘off’ neuron.
Usage: Grab the slider control and move it up and down to see the effects of increasing or decreasing input. One neuron will increase for positive input, and the other will decrease. This can be thought of as the simplest population to give a reasonable representation of a scalar value.
Output: See the screen capture below
import nef net=nef.Network('Two Neurons') # Create the network net.make_input('input',[-0.45]) # Create a controllable input # with a starting value of -.45 net.make('neuron',neurons=2,dimensions=1, # Make 2 neurons representing max_rate=(100,100),intercept=(-0.5,-0.5), # 1 dimension, with a maximum encoders=[[1],[-1]],noise=3) # firing rate of 100, with a # tuning curve x-intercept of # -0.5, encoders of 1 and -1 # (i.e. the first responds more # to positive values and the # second to negative values), # and a noise of variance 3 net.connect('input','neuron') # Connect the input to the neuron net.add_to_nengo() | http://ctnsrv.uwaterloo.ca/docs/html/demos/twoneurons.html | CC-MAIN-2017-47 | refinedweb | 200 | 50.94 |
Introduction: Nabito - the Open Socket: Smart Meter for EV Charging
What is it?
Nabito - the open socket is essentially a smart electricity meter with electricity metering, user authorization, billing capabilities and user management. Currently it's in a very early stage but some core functionalities are already in place.
The control box consists of easy-to-get-online parts and is designed to be an intelligent and yet inexpensive electric socket solution for public and private parking lots for slow charging of electric vehicles. It runs on OrangePi One single-board computer (SCB).
The total cost of this solution is around €50 ($60).
Nabito - the open socket is currently designed for charging on ordinary sockets, in continental Europe it's 230V and 16A, i.e. cca. 3.3kW continuous, this is roughly equivalent or slightly higher than US level 1 charging.
Specs:
- Single Phase
- Voltage: 230 V AC
- Max. current: 16 A
- Power: 3.3 kW
- Size: 190x140x70mm
- Interface: RJ45 LAN connection
- IP compliance: IP55
Future control boxes will be able to handle higher currents (level 2 charging) as well.
DISCLAIMER: The following build guide is rather crude at the moment (missing wiring diagrams, missing some assembly steps, etc.), I wanted to get it out there as soon as possible, will work on improving it if I see the project is meaningful and has traction. I thought this little control box will be easy to document, but to my surprise there are many small steps that I have taken when building this, so please, if this build guide does not cover everything you need to know or if you have any questions, send me a mail. Thanks for understanding.
Step 1: Why Make It? - the Story
Compare the two pictures above. Not everybody has a private garage to charge their EV securely during night. In fact vast number of people live in apartment buildings and leave their cars in the streets during the night.
I explain all the reasons why I'm making this project in the following blog post:
Step 2: What Does It Do?
1. User authentication
Read the QR code with your mobile phone. It will direct you to the web interface.
The web user interface allows users to sign up, log in and use the control box. An admin interface is under construction. Admin can approve, disapprove users.
2. On/Off switching
With a mains relay and a contactor it can switch the outlet socket on/off based on user interaction.
3. Energy metering
The control box measures AC current and logs power usage. Standard metering function.
The energy metering is done per user.
4. Billing
Bills are created for individual users based on their energy usage. Monthly bills will be created later for admin convenience.
Step 3: Components
HW stack:
- OrangePi One $9.99
- OrangePi heat sink kit $1.40
- Arduino Uno Starter Kit $7.00
- CT sensor $4.78
- Power source for OrangePi $4.90
- Mains relay $0.98
- Micro SD card 16GB, class 10, you need a fast card for the OS to work ok, bought locally for approx $9.60
- IP54 220V Euro socket - bought at a local hardware store for approx. $4
- Junction box: Scame 685.007 190x140x70mm - - bought at a local hardware store for approx. $7
- Mains plug & cable - taken from an old PC plug, free :-) (otherwise $3-$4)
- Small parts: 3.5mm jack female, 10uF capacitor, 2x 10kOhm resistors, total max $2
- Contactor that would carry higher amperages is planned to be a part of the product as well, because the small mains relay would probably pose a safety risk. The contactor is not in the pictures yet, will add it later. $5.50 - not included in the total price at the moment
- LAN cable - I used a spare cable that was lying around in my office
- socket inner receptacle - I used one from an old el. socket, not included in the total HW cost
Total HW cost: $60.16 (€52.28)
SW stack:
- Armbian Linux (Ubuntu based), open source, $0 (all glory to Linus Torvalds + 20k people who worked on Linux kernel + the kind Armbian people who maintain linux images for ARM processors)
- Postgres DB, open source, $0
- Git, open source (more glory to Linus), $0
- Ruby on Rails (RVM, Ruby, Gems), open source, $0
- Nabito-app:, open source, $0
- Arduino script for CT sensor monitoring, $0
Total SW stack cost: $0 (*THUMBS_UP*)
Step 4: The Server: OrangePi One, SD Card, Armbian, Web App
- Download Armbian linux image for OrangePi One
You can download the desktop version, if you want to mess around with desktop environment. For this project you only need the server version, because we'll be running an HTTP server and serve it via LAN, no HDMI output is currently required/supported
- Use this guide if you want to learn more about Armbian
- Burn the img file onto the micro SD card with Etcher
- Insert the SD card into the OrangePi One
- Connect the LAN cable
- Connect the power source and boot the server
- Find out the OrangePi's IP address on your local router admin interface (let's say it's gonna be 192.168.1.10)
- Connect to the Orange from terminal:
ssh root@192.168.1.10
- Do the initial setup the system asks you
- Create an OS user "nabito" or name your user as you like
- Add user "nabito" to sudoers
To enable any sudo command without password:
As root:
visudo -f /etc/sudoers.d/nabito
write line:
%sudo ALL=(ALL) NOPASSWD:ALL
into the file and save the file
- Install the GPIO library
- Set up the relay script:
as root:
cd /usr/local/bin
git clone
Once your Linux system is up and running, it's time to prepare the environment for the Nabito application, which runs on Ruby on Rails.
- Install postgresql database
$ sudo apt-get update
$ sudo apt-get install postgresql postgresql-contrib
- configure database user nabito
- Install RVM
- Install Ruby version 2.4.0 using RVM
- Install Rails, version 5.0.2
- Clone the nabito application into ~/www/
$ mkdir ~/www; cd ~/www;
$ git clone
- $ cd nabito-app;
$ bundle install
- $ rake db:setup
- $rails server --binding=0.0.0.0 # to run the server
- Test if the application loads up:
- Server part done!
Step 5: The Current Sensor: Arduino Sketch
- Connect the Arduino to your PC/notebook via the USB port
- Download Arduino development IDE from arduino.cc
- add library from
the monitoring logic is based on
- load the sketch below into your arduino
- check the values in the serial monitor
#include "EmonLib.h"// Include Emon Library EnergyMonitor emon1; // Create an instance void setup() { Serial.begin(9600); emon1.current(0, 30); // Current: input pin, calibration. } void loop() { double Irms = emon1.calcIrms(1480); // Calculate Irms only Serial.print(Irms); // Irms Serial.println(";0.0;0.1;0.4;1.2;2.5;"); //dummy values for now delay(100); }
Step 6: Wiring: Mains Cables
Wire the mains cables like in the first picture.
The black cable denotes the line, the yellow/green denotes neutral.
Yes, I know, the ground is not present in the box and it's a safety issue, in the next versions of Nabito, the control box will be properly grounded, I promise.
Step 7: Wiring: Arduino and the CT Sensor
Wire the Arduino with the CT sensor according to the following manual:
You need:
- Arduino (you can use any Arduino: Uno, Nano, Mega, whichever you like, as long as it has an ADC)
- 10uF capacitor
- 2x 10kOhm resistors
- 3 jumper cables
- 3.5mm female jack socket
- CT sensor 30A/1V
I soldered the wires directly with the resistors and the capacitor, I know it's nasty and a clean soldering job with on a PCB would be nicer, but for prototype purposes, this is enough for now.
Step 8: Wiring: Arduino and OrangePi
Connect Arduino to the OrangePi via the USB port, this way it serves as a serial port and a power supply for the Arduino. This is not ideal, I know, UART connection is probably better for a number of reasons, but for simplicity's sake, I used the USB for basic wiring config.
Step 9: Wiring Everything Together
- Clamp the CT sensor on the mains line going out of the mains relay
- Connect the LAN cable and power source for OrangePi.
- Screw in the junction box lid
- And you're done wiring/assembling!
Step 10: Running and Testing
- Connect Nabito to mains power supply and your network via the LAN cable
connect a small load (e.g. a table lamp) to the outlet socket
Figure out Nabito's IP from your router
- ssh into Nabito
- start your application:
$ cd ~/www/nabito-app
$ rails server --binding=0.0.0.0
- point your browser to Nabito_IP:3000
- Login with admin user:
login: admin@example.com
pass: changeme
- go to socket 1 and press the button "switch on"
- the lamp should switch on
- leave it on for a couple of minutes
- then switch it off thru the web GUI
- go to -> Menu -> Billing
you should see a record of your last energy use while the lamp was on
Step 11: The Conclusion, Issues and Product Roadmap
My main reason for doing this project was to see, if you can create a relatively cheap smart meter with secure user authorization. The moment I realized you can get OrangePi SCBs for as cheap as 8Euros, I started experimenting with it.
It turns out OrangePi can run an HTTP server (Ruby on Rails + Puma + Nginx) rather smoothly and can log energy use number no problem. I had the control box running for 30 days in a row and everything worked fine.
The fact that you can have a smart meter control box for €50 ($60) is quite cool in my view.
Issues:
- currently tested on relatively small loads (max 2kW): lamps, soldering iron, kettle
- the mains relay is rated at 10A, to take advantage of higher currents, we'll need to add a heavier-duty contactor, I have one from Aliexpress for 5Euros, which is single-phase, rated at 25A, will test it and update this guide afterwards
- the OrangePi processor can get quite hot (50-60 degrees Celsius), since this is a closed box, there is no air ventilation, this needs to be considered if you would install the box in a place where it gets a lot of direct sunlight or in very warm climates. Beware of fire hazard, be safe!
- the web GUI functionality is currently quite basic, so expect bugs and report them to me via Github ;)
- each control box functions as a standalone isolated HTTP server, which is not ideal if you were to install e.g. a couple of them in a private residential parking area, a client-server, or even better client-cloud config would be better, but we are not there yet ;)
- no outside LED indicators if the OrangePi is working ok, if it's successfully connected to the interwebs
- if the internet connection is down, the out socket won't work
Opportunities:
- serve higher currents, up to 60A probably
- serve more phases (2-phase, 3-phase)
- include a proper chargin outlet (CHADEMO/CSS/Tesla plug, etc.), but this would raise the cost of the control box substantiallyt, additional logic for communication with the EV would be required
- client-server, client-cloud configurations
- RFID/NFC/ user authorization with a swipe card or mobile NFC, mobile BLE authorization
- WIFI (use e.g. OrangePi Zero: it has no HDMI, which is good since HDMI output is not used in Nabito, has wifi, can be powered with ordinary mobile phone charger)
- more precise energy metering (ready-made AC monitoring module)
- develop GUI functionalities: better graphs, user management, custom APIs for billing engines, monthly bills via emails
- LDAP/Active directory integration for large companies that want to enable mass adoption on their workplace premises
- build custom Armbian images, which would contain the whole SW stack and be just ready to go
- the CPU won't be doing much most of the time, so you could mine bitcoins at some very small rate or do some other blockchain fancy stuff (anything possible,really, it's just a linux box :-P )
- since the box is quite cheap and open source, it could be mass-adopted, with small business installing and managing it locally
More EV related theory and applications on the Systems distributed website!
Be the First to Share
Recommendations
2 Discussions
2 years ago
Very cool project!
Reply 2 years ago
Thanks! | https://www.instructables.com/id/Nabito-the-Open-Socket-Cheap-Smart-Meter-for-EV-Ch/ | CC-MAIN-2020-29 | refinedweb | 2,086 | 57 |
The new line numbers inserted when using syntax coloring code tags is causing me problems when copying and pasting code. Here are some examples:
I copy and paste the code to my editor. This is how it looks when it's selected in my browser:[INDENT][IMG][/IMG]
[/INDENT]And this is how my code looks when it's in the editor:
1.
#include <stdio.h>
2.
#include <string.h>
3.
4.
int main(void) {
5.
6.
char mystring[] = "this is a test.\n";
7.
char *p = strchr(mystring, '\n');
8.
if (p != NULL) *p = '\0';
9.
printf("%s no newlines should occur.\n", mystring);
10.
11.
}
Sometimes, instead of line numbers I get pound symbols (#) instead. I'm using Firefox 2.0.
I don't have this problem at all. I tried copying/pasting from both Firefox and IE into Windows Notepad. Perhaps it's a problem with your editor?
In the meantime, a workaround would be to click the Reply w/ Quote button and then copy the post contents from the editor window.
Since the majority of the code posted within the forums is just for discussion purposes, I thought the line numbers would be a great aid towards talking about a particular piece of code. ie. "You have an error on line #5"
Perhaps it's a problem with your editor?
Nah, before I was using Xcode, but using TextEdit didn't seem to make a difference. To me, it seems to be somewhat on the browser end of things.
Just tried Safari now - no difference.
I'll let you know how things are on Linux when I next boot onto Debian.In the meantime, a workaround would be to click the Reply w/ Quote button and then copy the post contents from the editor window. Thanks; it seems like I'll be needing to do this whenever I need to try out code in my compiler from now on.
I have a mac as well so I am going to see if it works for me.
Perhaps it's a problem with your editor?
Same issue with my editor. DevShed had a similar issue and a better workaround.
[imo]Even though the better solution was to leave it as it was.[/imo]
Dave, are you in Linux?
Nope.
Although it may first seem like a great idea...Since the majority of the code posted within the forums is just for discussion purposes, I thought the line numbers would be a great aid towards talking about a particular piece of code. ie. "You have an error on line #5" That actually happens about 0.5% of the time. IMO it's not worthwhile to inconvenience 99.5% of the rest.
0.5% of the time you discuss the code snippets and 99.5% of the time you copy them into your editor and compile them? I suspect that's not the case with the vast majority of traffic to the programming forums.
I use colors and other tools available [when available] because that is far more effective than looking at a printout from 1972. So yes, that is the case.
BTW, I found this , FWIW.
Still investigating a solution. In the meantime, I've confirmed the problem is not with the web browser but rather with your editor. When using Firefox 2 and Windows Vista, I was able to copy/paste just fine into Notepad but I got line numbers when copy/pasting into Zend PHP Studio.
FireFox1.5 into VEdit -- line numbers.
FireFox1.5 into Notepad -- line numbers
Firefox1.5 into Wordpad -- line numbers
IE7 into VEdit -- one line, no formatting
IE7 into Notepad -- one line, no formatting
IE7 into Wordpad -- no formatting
Opera9 -- seems to work.
I see no reason for line numbers either. I doubt it's worth the trouble to track down the solution. And it would be too presumptuous to require a specific browser.
It would also be beneficial IMO to put back the label "code" and the lines above and below the code. Don't have to change the code format, just the head and tail.
I went ahead and implemented the same workaround DevShed uses that Dave suggested. Of course it's not that elegant ... but let's see.
I like that change -- now we can turn line numbers on and off as we please. Hope you keep this feature. :)
Thanks Ancient!
I second that. It wasn't that the line numbers were bad, but if they interfere with my ease of copying and pasting, then it becomes more a nuisance. This solution is great. :)
I went ahead and implemented this in our code snippet library as well. Here is an example:
You'll notice that with the line numbers, when code is too long for the line, it very cleanly wraps around with the line number indicating it's just a wrap around. I think this is a lot nicer than the messy horizontal scrollbars, especially with very long snippets.
I think this will work. Can you tell how many line are in the code block? If so, could you please add the toggle to the top of the block also if the block is greater than, say, 30 lines? If not, just add the toggle to the top. Many times people post 2 or more (window) pages of code and if we see something we wish to comment on at the top, we have to scroll all the way down, then all the way back up. This would also delineate the block nicely.
I actually went ahead and decided to allow the code to be as long as it needs to be. I found the scrollbars incredibly annoying. I prefer to read code the same way that I do everyday in my IDE ... by utilizing the full height of my monitor, as opposed to being forced to only see a couple of lines of code at a time. I think it has something to do with me not having a photographic memory.
The new line numbers inserted when using syntax coloring code tags is causing me problems when copying and pasting code.
Click on the link "toggle plain text" which appears below the code snippet and then try copy-pasting. It works out to be fine. | http://www.daniweb.com/community-center/daniweb-community-feedback/threads/69244/code-line-numbers-causing-paste-problems | crawl-003 | refinedweb | 1,048 | 83.66 |
11 February 2011 21:21 [Source: ICIS news]
HOUSTON (ICIS)--Methanex’s Titan plant in ?xml:namespace>
A Methanex official confirmed that the methanol producer’s 850,000 tonne/year Titan plant in Point Lisas,
The plant went down in late January for undisclosed reasons. A company executive said the outage would last three weeks, which would have ended around 11-12 February.
After jumping by almost that much during the previous week, the spot price range on Friday closed at 102-104 cents/gal.
In addition to the Methanex Titan restart, prospects for more capacity were boosted this week when Petrobras announced that it would build a new methanol plant in
( | http://www.icis.com/Articles/2011/02/11/9434844/methanex-restarts-titan-methanol-plant-in-trinidad.html | CC-MAIN-2015-18 | refinedweb | 112 | 67.69 |
I am using SharpDX to play Sound via XAudio2 and got problems looping an xWMA file on a certain range. the values LoopBegin and LoopLength seem to be completely ignored when using xWMA files; it always loops the entire soundfile.
However on WAV files these values work like expected.
i was already reading
but i believe i have met all criteria with the values to PlayBegin, PlayLength, LoopBegin, LoopLength etc.
anything specific about xwma data that i am missing here?
i am using this function to set the values in samples since all my files have 44100KhZ
static int MilliSecondsToSamples(double millis)
{
return (int) (44100.0 * millis / 1000.0);
} | http://www.gamedev.net/topic/645345-looping-xwma-sounds-with-xaudio2/ | CC-MAIN-2015-18 | refinedweb | 109 | 64.61 |
Hadoop Interview Questions and Answers
This blog post on Hadoop interview questions and answers is one of the most important articles on the Hadoop Blog. Interviews are a critical part of one's career, and knowing the correct answers to the questions asked in an interview gives you both knowledge and confidence. These Hadoop Interview Questions were prepared by the industry experts at DataFlair. We have divided the whole post into 10 parts:
- Basic Questions and Answers for Hadoop HDFS Interview
- HDFS Hadoop Interview Questions and Answers for Freshers
- Frequently Asked Questions in Hadoop HDFS Interview
- HDFS Hadoop Interview Questions and Answers for Experienced
- Advanced Questions for Hadoop Interview
- Basic MapReduce Hadoop Interview Questions and Answers
- MapReduce Hadoop Interview Questions and Answers for Freshers
- MapReduce Hadoop Interview Questions and Answers for Experienced
- Top Interview Questions for Hadoop MapReduce
- Advanced Interview Questions and Answers for Hadoop MapReduce
Hadoop Interview Questions for HDFS
These Hadoop Interview Questions and Answers for HDFS cover the different components of HDFS. If you want to become a Hadoop Admin or Hadoop Developer, then DataFlair is an appropriate place.
We took great care while framing these Hadoop Interview questions. Do share your thoughts in the comment section below.
In this section, we have covered HDFS Hadoop interview questions and answers in detail: HDFS Hadoop interview questions and answers for freshers, HDFS Hadoop interview questions and answers for experienced candidates, as well as some advanced Hadoop interview questions and answers.
Basic Questions And Answers for Hadoop Interview
1) What is Hadoop HDFS – Hadoop Distributed File System?
The Hadoop Distributed File System (HDFS) is the primary storage system of Hadoop. HDFS stores very large files running on a cluster of commodity hardware. It works on the principle of storing a small number of large files rather than a huge number of small files. HDFS stores data reliably even in the case of hardware failure. It also provides high-throughput access to applications by accessing data in parallel.
Components of HDFS:
- NameNode – It works as the Master in a Hadoop cluster. NameNode stores metadata, i.e. the number of blocks, their replicas, and other details. Metadata is kept in memory on the master to provide faster retrieval of data. NameNode maintains and manages the slave nodes and assigns tasks to them. It should be deployed on reliable hardware, as it is the centerpiece of HDFS.
- DataNode – It works as a Slave in the Hadoop cluster. In Hadoop HDFS, DataNode is responsible for storing the actual data. It also performs read and write operations as requested by the clients. DataNodes can be deployed on commodity hardware.
Read about HDFS in detail.
2) What are the key features of HDFS?
The various Features of HDFS are:
- Fault Tolerance – Fault tolerance is the ability of a system to keep working in unfavorable conditions. Hadoop HDFS is highly fault-tolerant: in HDFS, data is divided into blocks, and multiple copies of each block are created on different machines in the cluster. If any machine in the cluster goes down due to unfavorable conditions, a client can still easily access the data from other machines that contain the same copy of the data blocks.
- High Availability – HDFS is a highly available file system; data gets replicated among the nodes in the HDFS cluster by creating a replica of the blocks on the other slaves present in the cluster. Hence, when clients want to access their data, they can access it from the slaves that contain its blocks, on the nearest node in the cluster. At the time of failure of a node, a client can easily access the data from other nodes.
- Data Reliability – HDFS is a distributed file system that provides reliable data storage. HDFS can store data in the range of hundreds of petabytes. It stores data reliably by creating a replica of each and every block present on the nodes, and hence provides fault tolerance.
- Replication – Data replication is one of the most important and unique features of HDFS. In HDFS, data replication is done to solve the problem of data loss in unfavorable conditions like a node crashing, hardware failure, and so on.
- Scalability – HDFS stores data on multiple nodes in the cluster; when the requirement increases, we can scale the cluster. There are two scalability mechanisms available: vertical and horizontal.
- Distributed Storage – In HDFS, all the features are achieved via distributed storage and replication. Data is stored in a distributed manner across the nodes in the HDFS cluster.
Read about HDFS Features in detail.
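The fault-tolerance and replication features above can be illustrated with a small sketch. This is not Hadoop code — the node names, block IDs, and round-robin placement policy below are hypothetical simplifications (real HDFS placement is rack-aware):

```python
# Illustrative sketch (not Hadoop code): how replication keeps data
# readable when a node fails. Names and placement policy are hypothetical;
# assumes replication factor <= number of nodes.
import itertools

REPLICATION_FACTOR = 3
nodes = ["node1", "node2", "node3", "node4"]

def place_replicas(blocks, nodes, rf):
    """Place rf replicas of each block on rf distinct DataNodes."""
    placement = {}
    node_cycle = itertools.cycle(nodes)
    for block in blocks:
        chosen = []
        while len(chosen) < rf:
            n = next(node_cycle)
            if n not in chosen:
                chosen.append(n)
        placement[block] = chosen
    return placement

placement = place_replicas(["blk_1", "blk_2", "blk_3"], nodes, REPLICATION_FACTOR)

# Simulate failure of one node: every block still has live replicas.
failed = "node2"
for block, replicas in placement.items():
    live = [n for n in replicas if n != failed]
    print(block, "still readable from", live)
```

With the default replication factor of 3, losing any single node still leaves at least two live replicas of every block, which is why HDFS tolerates hardware failure.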
3) What is the difference between NAS and HDFS?
- The Hadoop Distributed File System (HDFS) is the primary storage system of Hadoop. HDFS is designed to store very large files running on a cluster of commodity hardware, while Network-Attached Storage (NAS) is a file-level computer data storage server. NAS provides data access to a heterogeneous group of clients.
- HDFS distributes data blocks across all the machines in a cluster, whereas in NAS, data is stored on dedicated hardware.
- Hadoop HDFS is designed to work with the MapReduce framework, in which computation moves to the data instead of data to the computation. NAS is not suitable for MapReduce, as it stores data separately from the computation.
- Hadoop HDFS runs on a cluster of commodity hardware, which is cost-effective, while NAS is a high-end storage device that comes at a high cost.
4) List the various HDFS daemons in HDFS cluster?
The daemon runs in HDFS cluster are as follows:
- NameNode – It is the master node. It is responsible for storing the metadata of all the files and directories. It also has information about blocks, their locations, replicas, and other details.
- DataNode – It is the slave node that contains the actual data. DataNode also performs read and write operations as requested by the clients.
- Secondary NameNode – The Secondary NameNode downloads the FsImage and EditLogs from the NameNode and merges the EditLogs with the FsImage periodically. It keeps the edit log size within a limit and stores the merged FsImage in persistent storage, which can be used in the case of NameNode failure.
5) What is NameNode and DataNode in HDFS?
NameNode – It works as Master in Hadoop cluster. Below listed are the main function performed by NameNode:
- Stores metadata of the actual data, e.g. filename, path, number of blocks, block IDs, block locations, number of replicas, and also slave-related configuration.
- It also manages the filesystem namespace.
- Regulates client access requests for the actual data files.
- It also assigns work to the Slaves (DataNodes).
- Executes filesystem namespace operations like opening/closing files and renaming files/directories.
- As NameNode keeps metadata in memory for fast retrieval, it requires a huge amount of memory for its operation. It should also be hosted on reliable hardware.
DataNode – It works as Slave in Hadoop cluster. Below listed are the main function performed by DataNode:
- Actually stores the business data.
- It is the actual worker node, so it handles read/write/data processing.
- Upon instruction from the Master, it performs creation/replication/deletion of data blocks.
- As DataNode stores all the business data, it requires a huge amount of storage for its operation. It should also be hosted on commodity hardware.
These were some general Hadoop interview questions and answers. Now let us take some Hadoop interview questions and answers specially for freshers.
HDFS Hadoop Interview Question and Answer for Freshers
6) What do you mean by metadata in HDFS?
In Apache Hadoop HDFS, metadata describes the structure of HDFS directories and files. It provides various pieces of information about directories and files, like permissions and replication factor. NameNode stores the metadata in the following files:
- FsImage – FsImage is an "Image file". It contains the entire filesystem namespace and is stored as a file in the NameNode's local file system. It also contains a serialized form of all the directory and file inodes in the filesystem. Each inode is an internal representation of a file's or directory's metadata.
- EditLogs – EditLogs contains all the recent modifications made to the file system since the most recent FsImage. When the NameNode receives a create/update/delete request from a client, the request is first recorded in the edits file.
If you face any doubt while reading the Hadoop interview questions and answers drop a comment and we will get back to you.
7) What is Block in HDFS?
This is a very important Hadoop interview question, asked in most interviews.
Block is a continuous location on the hard drive where data is stored. In general, a FileSystem stores data as a collection of blocks. In a similar way, HDFS stores each file as blocks and distributes them across the Hadoop cluster. The HDFS client does not have any control over the blocks, such as block location; the NameNode decides all such things.
The default size of an HDFS block is 128 MB, which we can configure as per requirement. All blocks of a file are the same size except the last block, which can be the same size or smaller.
If the data size is less than the block size, the block occupies only the actual data size; for example, a 1 MB file stored with a 128 MB block size uses only 1 MB of disk space, not 128 MB.
The major advantage of storing data in such a block size is that it saves disk seek time.
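A quick back-of-the-envelope sketch of this block arithmetic (the helper below is illustrative, not an HDFS API):

```python
# Illustrative sketch: how a file is divided into fixed-size blocks,
# with the last block holding only the remaining bytes.
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size: 128 MB

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes (in bytes) of the blocks a file would occupy."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(block_size, remaining))
        remaining -= block_size
    return blocks

# A 130 MB file becomes one full 128 MB block plus one 2 MB block;
# the 2 MB block consumes only 2 MB of disk, not 128 MB.
sizes = split_into_blocks(130 * 1024 * 1024)
```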
Read about HDFS Data Blocks in Detail.
8) Why is Data Block size set to 128 MB in Hadoop?
The block size is 128 MB for the following reasons:
- To reduce disk seeks (IO). The larger the block size, the fewer the file blocks and the fewer disk seeks, and a block can still be transferred within a respectable time. If the block size were small, there would be a huge number of blocks, and managing all these blocks and their metadata would create huge overhead, which is something we don’t want. So the block size is set to 128 MB.
On the other hand, the block size can’t be too large, because the system would wait a very long time for the last unit of data processing to finish its work.
9) What is the difference between a MapReduce InputSplit and HDFS block?
Tip for these types of Hadoop interview questions and answers: start with the definitions of Block and InputSplit, answer in comparative language, and then cover data representation, size, and an example, again comparatively.
By definition-
- Block – Block in Hadoop is the continuous location on the hard drive where HDFS stores data.
- InputSplit – InputSplit is the logical representation of the data present in a block. It is used during data processing in MapReduce; each split is processed by a single mapper.
Example-
Consider an example where we need to store a file in HDFS. HDFS stores files as blocks. Block is the smallest unit of data that can be stored or retrieved from the disk. The default size of a block is 128 MB. HDFS breaks files into blocks and stores these blocks on different nodes in the cluster. Say we have a file of 130 MB; HDFS will break this file into 2 blocks.
Now, if one wants to perform a MapReduce operation on the blocks, it will not process correctly, as the 2nd block is incomplete. InputSplit solves this problem. InputSplit forms a logical grouping of data that respects record boundaries, and the InputFormat is responsible for creating the InputSplits.
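The difference can be sketched in a few lines of Python (a toy model, not Hadoop’s actual InputFormat code): physical blocks cut the file at fixed byte offsets, while logical InputSplits extend to the next record boundary so no mapper sees half a record.

```python
# Toy "file" of newline-terminated records, with a tiny block size
# so the boundary effect is visible.
data = b"alpha\nbravo\ncharlie\ndelta\n"
BLOCK = 8

def physical_blocks(data, size):
    """Fixed-size cuts, oblivious to record boundaries (like HDFS blocks)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def input_splits(data, size):
    """Logical splits: each split is extended to the end of its last record."""
    splits, start = [], 0
    while start < len(data):
        end = min(start + size, len(data))
        while end < len(data) and data[end - 1:end] != b"\n":
            end += 1  # extend past the block boundary to finish the record
        splits.append(data[start:end])
        start = end
    return splits

blocks = physical_blocks(data, BLOCK)   # first block ends mid-record
splits = input_splits(data, BLOCK)      # every split ends on a record boundary
```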
Read InputSplit vs HDFS Blocks in Hadoop in detail.
10) How can one copy a file into HDFS with a different block size to that of existing block size configuration?
By using the below command one can copy a file into HDFS with a different block size:
-Ddfs.blocksize=block_size, where block_size is in bytes.
So, consider an example: to copy a file with a 32 MB (33554432-byte) block size, one can issue the following command:
hadoop fs -Ddfs.blocksize=33554432 -copyFromLocal /home/dataflair/test.txt /sample_hdfs
Now, you can check the HDFS block size associated with this file by:
hadoop fs -stat %o /sample_hdfs/test.txt
You can also check it by using the NameNode web UI for seeing the HDFS directory.
These are very common types of Hadoop interview questions and answers faced during a fresher’s interview.
Frequently Asked Question in Hadoop Interview
11) Which one is the master node in HDFS? Can it be commodity hardware?
The NameNode is the master node in HDFS. It stores metadata and works as the high-availability machine in HDFS. It requires large memory (RAM) space, so the NameNode needs to be a high-end machine with good memory. It cannot be commodity hardware, as the entire HDFS depends on it.
12) In HDFS, how Name node determines which data node to write on?
Answer these types of Hadoop interview questions very shortly and to the point.
The NameNode contains metadata, i.e. the number of blocks, replicas, their locations, and other details. This metadata is kept in memory on the master for faster retrieval. The NameNode maintains and manages the DataNodes, and assigns tasks to them.
13) What is a Heartbeat in Hadoop?
A Heartbeat is the signal that a DataNode sends to the NameNode to show that it is functioning (alive).
The NameNode and DataNodes communicate using Heartbeats. If a DataNode does not send a heartbeat to the NameNode within a certain time, the node is marked dead, and the NameNode creates new replicas of that node’s blocks on other DataNodes.
Heartbeats carry information about total storage capacity, the fraction of storage in use, and the number of data transfers currently in progress.
The default heartbeat interval is 3 seconds. One can change it using dfs.heartbeat.interval in hdfs-site.xml.
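The dead-node rule can be sketched as follows. The timeout formula mirrors the HDFS defaults (2 × recheck-interval + 10 × heartbeat-interval = 630 seconds); the monitor class itself is a hypothetical stand-in, not NameNode code.

```python
HEARTBEAT_INTERVAL = 3    # dfs.heartbeat.interval, seconds
RECHECK_INTERVAL = 300    # dfs.namenode.heartbeat.recheck-interval, seconds
DEAD_TIMEOUT = 2 * RECHECK_INTERVAL + 10 * HEARTBEAT_INTERVAL  # 630 s

class HeartbeatMonitor:
    """Toy stand-in for the NameNode's heartbeat bookkeeping."""
    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, node, now):
        self.last_seen[node] = now

    def dead_nodes(self, now):
        # A node is dead once its last heartbeat is older than the timeout.
        return [n for n, t in self.last_seen.items() if now - t > DEAD_TIMEOUT]

m = HeartbeatMonitor()
m.heartbeat("dn1", now=0)
m.heartbeat("dn2", now=0)
m.heartbeat("dn1", now=600)   # dn1 keeps reporting; dn2 goes silent
```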
14) Can multiple clients write into a Hadoop HDFS file concurrently?
Multiple clients cannot write into a Hadoop HDFS file at the same time. Apache Hadoop follows the single-writer, multiple-reader model. When an HDFS client opens a file for writing, the NameNode grants it a lease. Now suppose some other client wants to write into that file; it asks the NameNode for a write operation. The NameNode first checks whether it has already granted the lease for writing into that file to someone else. If another client holds the lease, the NameNode rejects the write request of the new client.
Read HDFS Data Write Operation in detail.
15) How data or file is read in Hadoop HDFS?
To read from HDFS, the client first communicates with the NameNode for metadata. The NameNode responds with the details: number of blocks, block IDs, block locations, and number of replicas. Then the client communicates with the DataNodes where the blocks are present. The client starts reading data in parallel from the DataNodes, based on the information received from the NameNode.
Once an application or HDFS client receives all the blocks of the file, it combines these blocks to form the file. To improve read performance, the client reads each block from the replica closest to it.
Read HDFS Data Read Operation in detail.
16) Does HDFS allow a client to read a file which is already opened for writing?
Yes, a client can read a file which is already open for writing. But the problem in reading a file which is currently open for writing lies in the consistency of the data: HDFS does not guarantee that the data written into the file will be visible to a new reader. For this, one can call the hflush operation. It pushes all the data in the buffer into the write pipeline and then waits for acknowledgments from the DataNodes. By doing this, the data that the client wrote into the file before the hflush operation is guaranteed to be visible to readers.
If you encounter any doubt or query in the Hadoop interview questions, feel free to ask us in the comment section below and our support team will get back to you.
17) Why is Reading done in parallel and writing is not in HDFS?
The client reads data in parallel because this lets it access the data fast, and reading in parallel makes the system fault tolerant. But the client does not perform the write operation in parallel, because writing in parallel might result in data inconsistency.
Suppose you have a file and two nodes are trying to write data into it in parallel. Then the first node does not know what the second node has written, and vice versa. So we cannot identify which data to store and access.
The client in Hadoop writes data using pipeline anatomy. There are various benefits of a pipeline write:
- More efficient bandwidth consumption for the client – The client only has to transfer one replica to the first DataNode in the pipeline. Each node then receives and sends one replica over the network (except the last DataNode, which only receives data). This results in balanced bandwidth consumption, compared to the client writing three replicas to three different DataNodes itself.
- Smaller send/ack window to maintain – The client maintains a much smaller sliding window, which records which blocks in the replica are being sent to the DataNodes and which blocks are waiting for acks to confirm the write. In a pipeline write, the client appears to write data to only one DataNode.
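The bandwidth point can be made concrete with a one-line model (illustrative arithmetic only): in a pipeline the client pushes each byte once, whereas writing all replicas itself would multiply its outbound traffic by the replication factor.

```python
def client_bytes_sent(block_size, replication, pipeline=True):
    """Bytes the client itself sends for one block, under each strategy."""
    return block_size if pipeline else block_size * replication

blk = 128 * 1024 * 1024  # one full block
pipelined = client_bytes_sent(blk, 3, pipeline=True)   # client sends 128 MB
fan_out = client_bytes_sent(blk, 3, pipeline=False)    # client sends 384 MB
```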
18) What is the problem with small files in Apache Hadoop?
Hadoop is suitable for a small number of large files, not for a large number of small files. A large number of small files overloads the NameNode, since it stores the namespace of HDFS.
Solution –
- HAR (Hadoop Archive) Files – HAR files deal with the small file issue. HAR introduces a layer on top of HDFS, which provides an interface for file access. Using the Hadoop archive command we can create HAR files; this command runs a MapReduce job to pack the archived files into a smaller number of HDFS files. Note that reading through files in a HAR is no more efficient than reading the files directly from HDFS.
- Sequence Files – Sequence files also deal with the small file problem. Here we use the filename as the key and the file contents as the value. Suppose we have 10,000 files, each of 100 KB; we can write a program to put them into a single sequence file, and then process them in a streaming fashion.
19) What is throughput in HDFS?
The amount of work done in a unit of time is known as throughput. HDFS provides good throughput for the following reasons:
- Hadoop works on the data locality principle, which states: move computation to data instead of data to computation. This reduces network congestion and therefore enhances overall system throughput.
- HDFS follows the write-once, read-many model. This simplifies data coherency issues: since data written once cannot be modified, HDFS can provide high-throughput data access.
20) Comparison between Secondary NameNode and Checkpoint Node in Hadoop?
The Secondary NameNode downloads the FsImage and EditLogs from the NameNode, then merges the EditLogs with the FsImage periodically. It stores the modified FsImage in persistent storage, so we can use the FsImage in case of NameNode failure. But it does not upload the merged FsImage back to the active NameNode. The Checkpoint node, in contrast, periodically creates checkpoints of the namespace: it downloads the FsImage and edits from the active NameNode, merges them locally, and uploads the new image back to the active NameNode.
The above questions 7-20 were Hadoop interview questions and answers for freshers; however, experienced candidates can also go through them to revise the basics.
HDFS Hadoop Interview questions and Answers for Experienced
21) What is a Backup node in Hadoop?
The Backup node provides the same checkpointing functionality as the Checkpoint node, but in addition it maintains an in-memory, up-to-date copy of the file system namespace that is always synchronized with the active NameNode state. The NameNode supports one Backup node at a time, and no Checkpoint nodes may be registered if a Backup node is in use.
22) How does HDFS ensure Data Integrity of data blocks stored in HDFS?
Data integrity ensures the correctness of the data. However, it is possible for data to get corrupted during I/O operations on the disk. Corruption can occur for various reasons: network faults, buggy software, and so on. The Hadoop HDFS client software implements checksum checking on the contents of HDFS files.
In Hadoop, when a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents, it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, the client can opt to retrieve that block from another DataNode that has a replica of it.
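The verification loop can be sketched like this. HDFS actually uses CRC32C checksums per chunk; MD5 and the helper names below are stand-ins for illustration.

```python
import hashlib

def block_checksums(blocks):
    """Compute a checksum for each block at write time."""
    return [hashlib.md5(b).hexdigest() for b in blocks]

def verify(blocks, checksums):
    """Return the indices of blocks whose content no longer matches."""
    return [i for i, (b, c) in enumerate(zip(blocks, checksums))
            if hashlib.md5(b).hexdigest() != c]

blocks = [b"block-0", b"block-1", b"block-2"]
sums = block_checksums(blocks)   # stored in the hidden checksum file
blocks[1] = b"corrupted!"        # simulate on-disk corruption
bad = verify(blocks, sums)       # the client would re-fetch these replicas
```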
23) What do you mean by the NameNode High Availability in hadoop?
In Hadoop 1.x, the NameNode is a single point of failure (SPOF). If the NameNode fails, all clients are unable to read, write, or list files. In such an event, the whole Hadoop system is out of service until a new NameNode is up.
Hadoop 2.x overcomes this SPOF by providing support for multiple NameNodes. The high availability feature adds an extra NameNode (an active/standby pair) to the Hadoop architecture, configured for automatic failover. If the active NameNode fails, the standby NameNode takes over all its responsibilities, and the cluster continues to work.
The initial implementation of NameNode high availability provided for a single active/standby NameNode pair. However, some deployments require a higher degree of fault tolerance. Hadoop 3.x enables this by allowing the user to run multiple standby NameNodes: by configuring 3 NameNodes and 5 JournalNodes, the cluster can tolerate the failure of 2 nodes rather than 1.
Read about HDFS NameNode High Availability.
24) What is Fault Tolerance in Hadoop HDFS?
Fault tolerance in HDFS is the working strength of a system in unfavorable conditions, such as the crashing of a node or hardware failure. HDFS controls faults by the process of replica creation. When a client stores a file in HDFS, the Hadoop framework divides the file into blocks, distributes the data blocks across different machines in the HDFS cluster, and creates a replica of each block on different machines. If one machine goes down, the data remains accessible from another machine on which a replica of the block is present.
Read about HDFS Fault Tolerance in detail.
25) Describe HDFS Federation.
In Hadoop 1.0, the HDFS architecture allows only a single namespace for the entire cluster.
Limitations-
- The namespace layer and storage layer are tightly coupled. This makes alternate implementations of the NameNode difficult, and restricts other services from using the block storage directly.
- The namespace is not scalable like the DataNodes. Scaling an HDFS cluster is done horizontally by adding DataNodes, but we can’t add more namespaces to an existing cluster.
- There is no separation of the namespace, so there is no isolation among tenant organizations that use the cluster.
In Hadoop 2.0, HDFS Federation overcomes these limitations. It supports multiple NameNodes/namespaces, scaling the namespace horizontally. In HDFS Federation, different categories of applications and users are isolated to different namespaces. This improves read/write operation throughput by adding more NameNodes.
Read about HDFS Federation in detail.
26) What is the default replication factor in Hadoop and how will you change it?
The default replication factor is 3. One can change the replication factor in the following three ways:
- By adding this property to hdfs-site.xml:
<property> <name>dfs.replication</name> <value>5</value> <description>Block Replication</description> </property>
- One can also change the replication factor on a per-file basis using the command:
hadoop fs -setrep -w 3 /file_location
- One can also change replication factor for all the files in a directory by using:
hadoop fs -setrep -w 3 -R /directory_location
27) Why Hadoop performs replication, although it results in data redundancy?
In HDFS, replication provides fault tolerance, and it is one of the unique features of HDFS. Data replication solves the issue of data loss in unfavorable conditions, such as hardware failure or the crashing of a node, which are common in a practical cluster. The very first reason to deploy HDFS was to store huge data sets reliably. One can change the replication factor to save HDFS space, or use the different codecs provided by Hadoop to compress the data.
28) What is Rack Awareness in Apache Hadoop?
In Hadoop, Rack Awareness improves network traffic while reading/writing files. With Rack Awareness, the NameNode chooses a DataNode on the same rack or a nearby rack. The NameNode obtains rack information by maintaining the rack IDs of each DataNode, and chooses DataNodes based on this rack information.
The HDFS NameNode makes sure that not all replicas are stored on a single rack. It follows the Rack Awareness algorithm to reduce latency as well as improve fault tolerance.
The default replication factor is 3. Therefore, according to the Rack Awareness algorithm:
- The first replica of the block is stored on a local rack.
- The next replica is stored on another DataNode within the same rack.
- The third replica is stored on a different rack.
In Hadoop, we need Rack Awareness for the following reasons. It improves:
- Data high availability and reliability.
- The performance of the cluster.
- Network bandwidth.
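The placement rules above can be sketched as a small function (a hypothetical helper, not the real BlockPlacementPolicy class):

```python
def place_replicas(writer_node, topology):
    """topology maps node -> rack; returns 3 nodes per the rules above:
    the local node, another node on the same rack, a node on another rack."""
    local_rack = topology[writer_node]
    same_rack = [n for n, r in topology.items()
                 if r == local_rack and n != writer_node]
    other_rack = [n for n, r in topology.items() if r != local_rack]
    return [writer_node, same_rack[0], other_rack[0]]

topology = {"dn1": "rack1", "dn2": "rack1", "dn3": "rack2", "dn4": "rack2"}
replicas = place_replicas("dn1", topology)
```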
Read about HDFS Rack Awareness in detail.
29) Explain the Single point of Failure in Hadoop?
In Hadoop 1.0, the NameNode is a single point of failure (SPOF). If the NameNode fails, all clients are unable to read/write files. In such an event, the whole Hadoop system is out of service until a new NameNode is up.
Hadoop 2.0 overcomes this SPOF by providing support for multiple NameNodes. The high availability feature provides an extra NameNode for the Hadoop architecture, with automatic failover. If the active NameNode fails, the standby NameNode takes over all the responsibilities of the active node, and the cluster continues to work.
The initial implementation of NameNode high availability provided for a single active/standby NameNode pair. However, some deployments require a higher degree of fault tolerance. Version 3.0 enables this by allowing the user to run multiple standby NameNodes: by configuring 3 NameNodes and 5 JournalNodes, the cluster can tolerate the failure of 2 nodes rather than 1.
30) Explain Erasure Coding in Apache Hadoop?
By default, HDFS replicates each block three times. Replication provides a simple form of redundancy to protect against DataNode failure, but it is very expensive: the 3x replication scheme results in 200% overhead in storage space and other resources.
Hadoop 3.x introduced a new feature called Erasure Coding to use in place of replication. It provides the same level of fault tolerance with much less storage: a typical erasure coding setup has a storage overhead of no more than 50%.
Erasure Coding adopts the approach of Redundant Array of Inexpensive Disks (RAID). RAID implements a similar idea through striping, in which logically sequential data (such as a file) is divided into smaller units (such as a bit, byte, or block) and stored on different disks.
Encoding – The codec calculates parity cells for each stripe of data cells, and errors are recovered through the parity. Erasure coding extends a message with redundant data for fault tolerance. The codec operates on uniformly sized data cells: it takes a number of data cells as input and produces parity cells as output.
There are two algorithms available for Erasure Coding:
- XOR Algorithm
- Reed-Solomon Algorithm
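The XOR case is small enough to demonstrate end to end: one parity cell lets us rebuild any single lost data cell (Reed-Solomon generalizes this to multiple simultaneous failures). This is a toy sketch, not the HDFS codec.

```python
from functools import reduce

def xor_parity(cells):
    """Parity cell: byte-wise XOR of all data cells."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), cells)

def recover(surviving_cells, parity):
    """XOR-ing the parity with the survivors rebuilds the missing cell."""
    return xor_parity(surviving_cells + [parity])

cells = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(cells)
rebuilt = recover([cells[0], cells[2]], parity)  # cell 1 was lost
```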
Read about Erasure Coding in detail
31) What is Balancer in Hadoop?
Data may not always be distributed uniformly across the DataNodes in HDFS, for reasons such as:
- A lot of deletes and writes
- Disk replacement
The block allocation strategy tries hard to spread new blocks uniformly among all the DataNodes. In a large cluster, nodes have different capacities, and quite often you need to remove some old nodes and add new nodes for more capacity.
The addition of a new DataNode can become a bottleneck for the following reason:
- When the Hadoop framework allocates all new blocks to the new DataNode, and reads go to it as well, the new DataNode gets overloaded.
HDFS provides a tool called Balancer that analyzes block placement and rebalances data across the DataNodes.
These are very common types of Hadoop interview questions and answers faced during the interview of an experienced professional.
32) What is Disk Balancer in Apache Hadoop?
Disk Balancer is a command line tool, which distributes data evenly on all disks of a datanode. This tool operates against a given datanode and moves blocks from one disk to another.
Disk Balancer works by creating and executing a plan (a set of statements) against the DataNode. To enable it, dfs.disk.balancer.enabled must be set to true in hdfs-site.xml.
When we write a new block to HDFS, the DataNode uses a volume-choosing policy to choose the disk for the block (each directory is a “volume” in HDFS terminology). The two such policies are round-robin and available space:
- Round-robin distributes the new blocks evenly across the available disks.
- Available space writes data to the disk that has maximum free space (by percentage).
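The two policies can be sketched in a few lines (the function names are illustrative, not the real HDFS volume-choosing classes):

```python
def round_robin(volumes, n_blocks):
    """Assign each new block to the next volume in turn."""
    return [volumes[i % len(volumes)] for i in range(n_blocks)]

def available_space(free_pct):
    """Pick the volume with the most free space, by percentage."""
    return max(free_pct, key=free_pct.get)

order = round_robin(["disk0", "disk1", "disk2"], 4)
pick = available_space({"disk0": 10.0, "disk1": 85.5, "disk2": 40.0})
```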
Read about HDFS Disk Balancer in detail.
33) What is active and passive NameNode in Hadoop?
In Hadoop 1.0, the NameNode is a single point of failure (SPOF). If the NameNode fails, all clients are unable to read, write, or list files, and the whole Hadoop system is out of service until a new NameNode is up.
Hadoop 2.0 overcomes this SPOF by providing support for multiple NameNodes. The high availability feature provides an extra NameNode to the Hadoop architecture for automatic failover.
- Active NameNode – It is the NameNode that actively runs in the cluster and is responsible for all client operations in the cluster.
- Passive NameNode – It is a standby NameNode that holds the same data as the active NameNode. It acts as a slave, maintaining enough state to provide a fast failover if necessary.
If the active NameNode fails, the passive NameNode takes over all the responsibilities of the active node, and the cluster continues to work.
34) How is indexing done in Hadoop HDFS?
Apache Hadoop has a unique way of indexing. Once the Hadoop framework stores the data according to the block size, HDFS keeps storing, in the last part of each piece of data, an indication of where the next part of the data will be. In fact, this is the basis of HDFS.
35) What is a Block Scanner in HDFS?
The Block Scanner verifies whether the data blocks stored on each DataNode are correct. When the Block Scanner detects a corrupted data block, the following steps occur:
- First, the DataNode reports the corrupted block to the NameNode.
- The NameNode then starts the process of creating a new replica, using a correct replica of the corrupted block present on another DataNode.
- When the replication count of the correct replicas matches the replication factor (3), the corrupted block is deleted.
36) How to perform the inter-cluster data copying work in HDFS?
HDFS uses the distributed copy command to perform inter-cluster data copying, as below:
hadoop distcp hdfs://<source NameNode> hdfs://<target NameNode>
DistCp (distributed copy) is a tool used for large inter/intra-cluster copying. It uses MapReduce for its distribution, error handling, recovery, and reporting. This distributed copy tool expands a list of files and directories into input to map tasks.
37) What are the main properties of hdfs-site.xml file?
hdfs-site.xml – It specifies configuration settings for the HDFS daemons in Hadoop, including the default block replication and permission checking on HDFS.
The three main hdfs-site.xml properties are:
- dfs.name.dir gives the location where the NameNode stores the metadata (FsImage and edit logs), whether on local disk or in a remote directory.
- dfs.data.dir gives the location where DataNodes store the data.
- fs.checkpoint.dir is the directory on the file system where the secondary NameNode stores its temporary images of the edit logs.
38) How can one check whether NameNode is working or not?
One can check the status of the HDFS NameNode in several ways. Most commonly, one uses the jps command to check the status of all daemons running in HDFS.
39) How would you restart NameNode?
NameNode is also known as Master node. It stores meta-data i.e. number of blocks, replicas, and other details. NameNode maintains and manages the slave nodes, and assigns tasks to them.
By following two methods, you can restart NameNode:
- First, stop the NameNode individually using the ./sbin/hadoop-daemons.sh stop namenode command, then start it using the ./sbin/hadoop-daemons.sh start namenode command.
- Use ./sbin/stop-all.sh and then ./sbin/start-all.sh, which first stops all the daemons and then starts them all again.
The above Hadoop interview questions and answers were for experienced candidates, but freshers can also refer to them for in-depth knowledge. Now let’s move forward with some advanced Hadoop interview questions and answers.
Advanced Questions for Hadoop Interview
40) How NameNode tackle Datanode failures in Hadoop?
HDFS has a master-slave architecture in which the master is the NameNode and the slaves are DataNodes. An HDFS cluster has a single NameNode that manages the file system namespace (metadata) and multiple DataNodes that are responsible for storing the actual data in HDFS and performing read-write operations as requested by clients.
The NameNode receives Heartbeats and block reports from the DataNodes. A heartbeat receipt implies that a DataNode is alive and functioning properly, and a block report contains a list of all blocks on that DataNode. When the NameNode observes that a DataNode has not sent a heartbeat message after a certain amount of time, it marks the DataNode as dead. The NameNode then replicates the blocks of the dead node onto other DataNodes using the replicas created earlier. This is how the NameNode handles DataNode failure.
41) Is Namenode machine same as DataNode machine as in terms of hardware in Hadoop?
The NameNode is a highly available server, unlike the DataNode. The NameNode manages the file system namespace and maintains the metadata: the number of blocks, their locations, replicas, and other details. It also executes file system operations such as naming, opening, and closing files/directories.
For these reasons, the NameNode requires higher RAM for storing the metadata of millions of files, whereas a DataNode is responsible for storing the actual data in HDFS and performing read/write operations as requested by clients. Therefore, a DataNode needs a higher disk capacity for storing huge data sets.
42) If DataNode increases, then do we need to upgrade NameNode in Hadoop?
The NameNode stores metadata, i.e. the number of blocks, their locations, and replicas. This metadata is kept in memory on the master for faster retrieval. The NameNode manages and maintains the slave nodes, assigns tasks to them, and regulates clients’ access to files. It also executes file system operations such as naming, opening, and closing files/directories.
During Hadoop installation, the framework sizes the NameNode based on the size of the cluster. Mostly we don’t need to upgrade the NameNode when DataNodes increase, because it does not store the actual data; it stores only the metadata, so such a requirement rarely arises.
43) Explain what happens if, during the PUT operation, HDFS block is assigned a replication factor 1 instead of the default value 3?
The replication factor can be set for the entire cluster to adjust the number of replicated blocks, ensuring high data availability.
For every block present in HDFS, the cluster keeps n replicas (i.e. n-1 duplicates). So, if the replication factor during the PUT operation is set to 1 in place of the default value 3, there will be only a single copy of the data. If the DataNode holding that copy crashes, the only copy of the data will be lost.
44) What are file permissions in HDFS, and how does HDFS check permissions for files/directories?
Hadoop distributed file system (HDFS) implements a permissions model for files/directories.
For each file/directory, one can manage permissions for a set of 3 distinct user classes: owner, group, and others.
There are also 3 different permissions for each user class: read (r), write (w), and execute (x).
- For files, the w permission is to write to the file and the r permission is to read the file.
- For directories, the w permission is to create or delete the directory, and the r permission is to list its contents.
- The x permission is to access a child of the directory.
HDFS checks permissions for a file or directory as follows:
- If the user name matches the owner of the directory, Hadoop tests the owner’s permissions.
- If the group matches the directory’s group, Hadoop tests the user’s group permissions.
- If neither the owner nor the group name matches, Hadoop tests the “other” permission.
- If none of the permission checks succeed, the client’s request is denied.
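The check order above can be sketched as follows (a toy model of the HDFS/POSIX-style rule that only the first matching user class is tested):

```python
def check_access(user, user_groups, owner, group, perms, wanted):
    """perms maps class -> permission string, e.g. {'owner': 'rwx', ...}.
    Only the first matching class (owner, then group, then other) is tested."""
    if user == owner:
        klass = "owner"
    elif group in user_groups:
        klass = "group"
    else:
        klass = "other"
    return wanted in perms[klass]

perms = {"owner": "rwx", "group": "rx", "other": ""}
```

Note that if the owner class denies an access, the group and other classes are never consulted, exactly as in the list above.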
45) How one can format Hadoop HDFS?
One can format HDFS by using the bin/hadoop namenode -format command:
$ bin/hadoop namenode -format
This command formats the HDFS via the NameNode. Formatting means initializing the directory specified by the dfs.name.dir variable. When you run this command on an existing file system, you will lose all the data stored on your NameNode.
The Hadoop NameNode directory contains the FsImage and edit files, which hold the basic information about the Hadoop file system, such as which user created which files.
Hence, when we format the NameNode, it deletes this information from the directory, which is specified in hdfs-site.xml as dfs.namenode.name.dir. Formatting the NameNode does not format the DataNodes.
NOTE: Never format an up-and-running Hadoop file system; you will lose all data stored in HDFS.
46) What is the process to change the files at arbitrary locations in HDFS?
HDFS doesn’t support modifications at arbitrary offsets in a file, or multiple writers. A single writer writes files in append-only fashion: writes to a file in HDFS are always made at the end of the file.
47) Differentiate HDFS & HBase.
Data write process
- HDFS- Append method
- HBase- Bulk incremental, random write
Data read process
- HDFS- Table scan
- HBase- Table scan/random read/small range scan
Hive SQL querying
- HDFS- Excellent
- HBase- Average
Read about HBase in detail.
These are some advanced Hadoop interview questions and answers for HDFS that will help you answer many more interview questions in the best manner.
48) What is meant by streaming access?
HDFS works on the principle of “write once, read many”. Its focus is on fast and accurate data retrieval. Streaming access means reading the complete data in a continuous stream instead of retrieving a single record from the database.
49) How to transfer data from Hive to HDFS?
One can transfer data from Hive to HDFS by writing the query:
hive> insert overwrite directory '/' select * from emp;
Hence, the output you receive will be stored in part files in the specified HDFS path.
50) How to add/delete a Node to the existing cluster?
To add a node to an existing cluster:
Add the new node’s hostname/IP address to the dfs.hosts/slaves file. Then, refresh the cluster with:
$ hadoop dfsadmin -refreshNodes
To remove a node from an existing cluster:
Add the hostname/IP address to the dfs.hosts.exclude file and remove the entry from the slaves file. Then, refresh the cluster with:
$ hadoop dfsadmin -refreshNodes
51) How to format the HDFS? How frequently it will be done?
These types of Hadoop interview questions and answers should also be kept very short and to the point; giving a very lengthy answer here is unnecessary and may count against you.
$ hadoop namenode -format
Note: Format the HDFS only once, during the initial cluster setup.
52) What is the importance of dfs.namenode.name.dir in HDFS?
dfs.namenode.name.dir contains the FsImage file for the NameNode.
We should configure it to write to at least two filesystems on different physical hosts (e.g. the NameNode and the secondary NameNode), because if we lose the FsImage file we will lose the entire HDFS file system; there is no other recovery mechanism if no FsImage file is available.
Questions 40-52 were the advanced Hadoop interview questions and answers, for in-depth knowledge in handling difficult Hadoop interviews.
This was all about the Hadoop Interview Questions and Answers
These questions are frequently asked Hadoop interview questions and answers. You can read here some more Hadoop HDFS interview questions and answers.
After going through these top Hadoop interview questions and answers, you will be able to confidently face an interview and answer the Hadoop interview questions asked in your interview in the best manner. These Hadoop interview questions are suggested by the experts at DataFlair.
Key –
Q.1 – Q.5 Basic Hadoop Interview Questions
Q.6 – Q.10 HDFS Hadoop interview questions and answers for freshers
Q. 11- Q. 20 Frequently asked Questions in Hadoop Interview
Q.21 – Q.39 were the HDFS Hadoop interview questions and answers for experienced
Q.40 – Q.52 were the advanced HDFS Hadoop interview questions and answers
If you have any doubts in these, drop a comment and our support team will be happy to help you. Now let's jump to our second part of Hadoop Interview Questions, i.e. MapReduce Interview Questions and Answers.
Hadoop Interview Questions and Answers for MapReduce
It is difficult to pass a Hadoop interview, as Hadoop is a fast-growing technology. To get you through this tough path, these MapReduce Hadoop interview questions and answers will serve as the backbone. This section contains the commonly asked MapReduce Hadoop interview questions and answers.
In this section on MapReduce Hadoop interview questions and answers, we have covered 50+ Hadoop interview questions and answers for MapReduce in detail. We have covered MapReduce Hadoop interview questions and answers for freshers, MapReduce Hadoop interview questions and answers for experienced as well as some advanced Mapreduce Hadoop interview questions and answers.
These MapReduce Hadoop Interview Questions are framed by keeping in mind the need of an era, and the trending pattern of the interview that is being followed by the companies. The interview questions of Hadoop MapReduce are dedicatedly framed by the company experts to help you to reach your goal.
Basic MapReduce Hadoop Interview Questions and Answers
53) What is MapReduce in Hadoop?
Hadoop MapReduce is the data processing layer of Hadoop. It processes data in two phases: Map and Reduce. The Map phase takes a set of data and converts it into key-value pairs (tuples). The output from the map is the input to the Reducer. Then, the Reducer combines tuples (key-value) based on the key and modifies the value of the key accordingly.
Read about Hadoop MapReduce in Detail.
54) What is the need of MapReduce in Hadoop?
In Hadoop, once we have stored the data in HDFS, the first question that arises is how to process it. Processing it on a single machine is:
- Time-consuming – Using a single machine we cannot analyze terabytes of data, as it would take a lot of time.
- Costly – Keeping all the data (terabytes) in one server or as a database cluster is very expensive and also hard to manage.
MapReduce overcomes these challenges:
- Time-efficient – We write the analysis code in the Map function and the integration/aggregation code in the Reduce function, and both run in parallel across the cluster.
- Cost-efficient – It distributes the data over multiple low-configuration machines.
55) What is Mapper in Hadoop?
Mapper task processes each input record (from RecordReader) and generates a key-value pair. The key-value pairs generated by the mapper can be completely different from the input pair. The Mapper stores its intermediate output on the local disk, not on HDFS, since it is temporary data and writing it to HDFS would create unnecessary multiple copies.
The Mapper only understands key-value pairs of data, so the framework first converts the data into key-value pairs before passing it to the mapper. InputSplit and RecordReader do this conversion: InputSplit is the logical representation of data, and RecordReader communicates with the InputSplit and converts the data into key-value pairs. Hence:
- Key is a reference to the input value.
- Value is the data set on which to operate.
The number of maps depends on the total size of the input, i.e. the total number of blocks of the input files:
Mapper = (total data size) / (input split size)
If data size = 1 TB and input split size = 100 MB, then
Mapper = (1000 * 1000) / 100 = 10,000
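As a rough sketch, the formula above can be coded directly in plain Java (the class and method names here are illustrative, not part of any Hadoop API):

```java
// Sketch: estimating the number of mappers from total data size and split size.
// Sizes are in MB here, mirroring the decimal arithmetic used above.
public class MapperCount {
    // Number of map tasks = total input size divided by input split size,
    // rounded up so a last partial split still gets its own mapper.
    static long numMappers(long totalSizeMb, long splitSizeMb) {
        return (totalSizeMb + splitSizeMb - 1) / splitSizeMb;
    }

    public static void main(String[] args) {
        // 1 TB expressed as 1000 * 1000 MB, split size 100 MB -> 10,000 mappers
        System.out.println(numMappers(1000L * 1000L, 100L)); // prints 10000
    }
}
```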
Read about Mapper in detail.
56) What is Reducer in Hadoop?
Reducer takes the output of the Mapper (intermediate key-value pairs) as input. It then runs a reduce function on each of them to generate the output. The output of the reducer is the final output, which it stores in HDFS. Usually, in the Reducer, we do aggregation or summation sort of computation. Reducer has three primary phases:
- Shuffle – The framework fetches the relevant partition of the output of all the Mappers for each reducer via HTTP.
- Sort – The framework groups the Reducer's inputs by key in this phase. The shuffle and sort phases occur simultaneously.
- Reduce – After shuffling and sorting, the reduce task aggregates the key-value pairs. In this phase, the framework calls the reduce(Object, Iterator, OutputCollector, Reporter) method for each <key, (list of values)> pair in the grouped inputs.
With the help of Job.setNumReduceTasks(int) the user sets the number of reducers for the job.
Hence, the right number of reducers is 0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>).
Read about Reducer in detail.
57) How to set mappers and reducers for MapReduce jobs?
One can configure JobConf to set the number of mappers and reducers:
- For Mapper – job.setNumMapTasks()
- For Reducer – job.setNumReduceTasks()
These were some general MapReduce Hadoop interview questions and answers. Now let us take some MapReduce Hadoop interview questions and answers especially for freshers.
MapReduce Hadoop Interview Question and Answer for Freshers
58) What is the key- value pair in Hadoop MapReduce?
Hadoop MapReduce implements a data model which represents data as key-value pairs. Both input and output to the MapReduce framework should be in key-value pairs only. In Hadoop, MapReduce generates key-value pairs in the following way:
- InputSplit – It is the logical representation of data. InputSplit represents the data which individual Mapper will process.
- RecordReader – It converts the split into records which are in form of Key-value pairs. That is suitable for reading by the mapper.
By Default RecordReader uses TextInputFormat for converting data into a key-value pair.
- Key – It is the byte offset of the beginning of the line within the file.
- Value – It is the contents of the line, excluding line terminators.
For example, if the file content is: on the top of the crumpetty Tree
Key- 0
Value- on the top of the crumpetty Tree
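The byte-offset keys above can be sketched in plain Java (this is not the Hadoop TextInputFormat class itself; the helper name is illustrative, and it assumes ASCII text with '\n' terminators):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how TextInputFormat derives keys: the key of each record is the
// byte offset of the start of that line within the file.
public class ByteOffsetKeys {
    static List<Long> keysFor(String fileContent) {
        List<Long> keys = new ArrayList<>();
        long offset = 0;
        for (String line : fileContent.split("\n", -1)) {
            keys.add(offset);
            offset += line.length() + 1; // +1 for the newline terminator
        }
        return keys;
    }

    public static void main(String[] args) {
        // First line starts at offset 0; second line starts right after it.
        System.out.println(keysFor("on the top of the crumpetty Tree\nsecond line"));
    }
}
```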
Read about MapReduce Key-value pair in detail.
59) What is the need of key-value pair to process the data in MapReduce?
Hadoop MapReduce works on unstructured and semi-structured data apart from structured data. Structured data, like the data stored in an RDBMS, can be read by columns.
But handling unstructured data is feasible using key-value pairs. The very core idea of MapReduce works on the basis of these pairs: the framework maps data into a collection of key-value pairs in the mapper, and the reducer operates on all the pairs with the same key.
In most computations, the map operation applies to each logical "record" in the input and computes a set of intermediate key-value pairs. Then the reduce operation applies to all the values that share the same key, combining the derived data properly.
Thus, we can say that key-value pairs are the best solution to work on data problems in MapReduce.
60) What are the most common InputFormats in Hadoop?
- FileInputFormat – It is the base class for all file-based InputFormats. It specifies the input directory where the data files are present. FileInputFormat reads all the files and then divides them into one or more InputSplits.
- TextInputFormat – It is the default InputFormat of MapReduce. It uses each line of each input file as a separate record. Thus, performs no parsing.
Key- byte offset.
Value- It is the contents of the line, excluding line terminators.
- KeyValueTextInputFormat – It also treats each line as a separate record, but breaks the line itself into a key and a value at the tab character ('\t').
Key- Everything up to tab character.
Value- Remaining part of the line after tab character.
- SequenceFileInputFormat – It reads sequence files.
Key & Value- Both are user-defined.
Read about Mapreduce InputFormat in detail.
61) Explain InputSplit in Hadoop?
InputFormat creates the InputSplit, which is the logical representation of data. The Hadoop framework further divides the InputSplit into records, and the mapper processes each record. The size of a split is approximately equal to the HDFS block size (128 MB). In a MapReduce program, the InputSplit size is user-defined, so the user can control the split size based on the size of the data.
An InputSplit in MapReduce has a length in bytes and a set of storage locations (hostname strings). The framework uses the storage locations to place map tasks as close to the split's data as possible. Map tasks are processed in order of InputSplit size, so that the largest one gets processed first; this minimizes the job runtime. The important thing is that an InputSplit is just a reference to the data; it does not contain the input data itself.
The client running the job calculates the splits for the job by calling getSplits(), and then sends them to the application master, which uses their storage locations to schedule map tasks that process them on the cluster. A map task passes its split to the createRecordReader() method to create a RecordReader for the split. The RecordReader generates records (key-value pairs), which it then passes to the map function.
Read about MapReduce InputSplit in detail.
62) Explain the difference between a MapReduce InputSplit and HDFS block.
Tip for these types of MapReduce Hadoop interview questions and answers: start with the definitions of Block and InputSplit, answer in a comparative language, and then cover their data representation, size and an example, also comparatively.
By definition-
- Block – It is the smallest unit of data that the file system stores; HDFS stores each file as blocks, distributed across the cluster.
- InputSplit – It is the logical representation of the data present in the blocks, used by an individual mapper for processing; a split may span block boundaries so that each record stays whole.
Read more differences between MapReduce InputSplit and HDFS block.
63) What is the purpose of RecordReader in hadoop?
RecordReader in Hadoop reads the data within the boundaries defined by the InputSplit and creates key-value pairs for the mapper. The "start" is the byte position in the file where the RecordReader should start generating key-value pairs, and the "end" is where it should stop reading records.
The RecordReader in a MapReduce job loads data from its source and converts it into key-value pairs suitable for reading by the mapper. The RecordReader communicates with the InputSplit until it has read the complete split. The MapReduce framework defines the RecordReader instance through the InputFormat. By default, the RecordReader uses TextInputFormat for converting data into key-value pairs.
TextInputFormat provides 2 types of RecordReader : LineRecordReader and SequenceFileRecordReader.
LineRecordReader in Hadoop is the default RecordReader that TextInputFormat provides. Hence, each line of the input file is the new value and the key is byte offset.
SequenceFileRecordReader in Hadoop reads data specified by the header of a sequence file.
Read about MapReduce RecordReder in detail.
64) What is Combiner in Hadoop?
In a MapReduce job, the Mapper generates large chunks of intermediate data which are then passed to the Reducer for further processing, leading to enormous network congestion. The Hadoop MapReduce framework provides a function known as the Combiner that plays a key role in reducing this network congestion.
The Combiner in Hadoop is also known as a mini-reducer that performs local aggregation on the mapper's output. This reduces the data transfer between mapper and reducer and increases efficiency.
There is no guarantee of execution of the Combiner in Hadoop, i.e. Hadoop may or may not execute a combiner, and if required it may execute it more than once. Hence, your MapReduce jobs should not depend on the Combiner's execution.
Read about MapReduce Combiner in detail.
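What a combiner buys can be sketched in plain Java (this is not the Hadoop Combiner API; the names are illustrative): many (word, 1) pairs from one mapper collapse into one (word, count) pair before crossing the network.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of local aggregation, the job a combiner does on a mapper's output.
public class CombinerSketch {
    // Collapse repeated (key, 1) pairs into (key, count) pairs.
    static Map<String, Integer> combine(String[] mapperOutputKeys) {
        Map<String, Integer> combined = new HashMap<>();
        for (String key : mapperOutputKeys) {
            combined.merge(key, 1, Integer::sum);
        }
        return combined;
    }

    public static void main(String[] args) {
        String[] emitted = {"hadoop", "map", "hadoop", "reduce", "hadoop"};
        // 5 pairs leave the mapper; only 3 pairs need to travel to the reducers.
        System.out.println(combine(emitted));
    }
}
```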
65) Explain about the partitioning, shuffle and sort phase in MapReduce?
Partitioning Phase – Partitioning ensures that all the values for each key are grouped together and that all the values of a single key go to the same Reducer, thus allowing even distribution of the map output over the Reducers.
Shuffle Phase – It is the process by which the system transfers the sorted key-value output of the map tasks to the reducers.
Sort Phase – The mapper generates intermediate key-value pairs. Before the Reducer starts, the MapReduce framework sorts these key-value pairs by key. This helps the reducer easily distinguish when a new reduce task should start, saving time for the reducer.
Read about Shuffling and Sorting in detail
66) What does a “MapReduce Partitioner” do?
The Partitioner comes into the picture if we are working with more than one reducer. It controls the partitioning of the keys of the intermediate map outputs: the key (or a subset of the key) is used to derive the partition via a hash function. Partitioning ensures that all the values for each key are grouped together and that all the values of a single key go to the same reducer, thus allowing even distribution of the map output over the reducers. It redirects the mapper output to the reducers by determining which reducer is responsible for a particular key.
The total number of partitions is equal to the number of Reducers. The Partitioner in Hadoop divides the data according to the number of reducers, so each reducer processes the data from a single partition.
Read about MapReduce Partitioner in detail.
67) If no custom partitioner is defined in Hadoop then how is data partitioned before it is sent to the reducer?
Hadoop MapReduce by default uses the 'HashPartitioner'.
It uses the hashCode() method to determine to which partition a given (key, value) pair will be sent, via a method called getPartition.
HashPartitioner computes key.hashCode() & Integer.MAX_VALUE and then finds the modulus using the number of reduce tasks. For example, if there are 10 reduce tasks, getPartition will return values 0 through 9 for all keys.
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
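The same arithmetic can be tried standalone, without the Hadoop classes (the class name here is illustrative). The bitmask with Integer.MAX_VALUE clears the sign bit so that a negative hashCode() still yields a non-negative partition number:

```java
// Standalone sketch of the HashPartitioner arithmetic:
// (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks
public class PartitionSketch {
    static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 10;
        for (String key : new String[]{"hadoop", "hdfs", "mapreduce"}) {
            // Every key lands in a partition between 0 and numReduceTasks - 1,
            // and the same key always lands in the same partition.
            System.out.println(key + " -> partition " + getPartition(key, numReduceTasks));
        }
    }
}
```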
These are very common type of MapReduce Hadoop interview questions and answers faced during the interview of a Fresher.
68) How to write a custom partitioner for a Hadoop MapReduce job?
This is one of the most common MapReduce Hadoop interview questions and answers.
A custom partitioner distributes the results across the different reducers, based on a user-defined condition.
By setting a Partitioner to partition by the key, we can guarantee that records for the same key will go to the same reducer. It also ensures that only one reducer receives all the records for that particular key.
By the following steps, we can write Custom partitioner for a Hadoop MapReduce job:
- Create a new class that extends Partitioner Class.
- Then, override the getPartition method in the wrapper that runs in MapReduce.
- Add the custom partitioner to the job by using the setPartitionerClass method, or add the custom partitioner to the job as a config file.
69) What is shuffling and sorting in Hadoop MapReduce?
Shuffling and sorting take place after the completion of the map task, and the shuffle and sort phases in Hadoop occur simultaneously.
- Shuffling – Shuffling is the process by which the system sorts the key-value output of the map tasks and transfers it to the reducer. The shuffle phase is necessary for the reducers; otherwise, they would not have any input. As shuffling can start even before the map phase has finished, it saves some time and completes the task in less time.
- Sorting – The mapper generates intermediate key-value pairs. Before the reducer starts, the MapReduce framework sorts these key-value pairs by key. This helps the reducer easily distinguish when a new reduce task should start, saving time for the reducer.
Shuffling and sorting are not performed at all if you specify zero reducers (setNumReduceTasks(0)).
Read about Shuffling and Sorting in detail.
70) Why aggregation cannot be done in Mapper?
Mapper task processes each input record (From RecordReader) and generates a key-value pair. The Mapper store intermediate-output on the local disk.
We cannot perform aggregation in mapper because:
- Sorting takes place only on the Reducer side; there is no provision for sorting in the mapper function, and without sorting aggregation is not possible.
- To perform aggregation, we need the output of all the Mapper functions, which may not be possible to collect in the map phase because the mappers may be running on different machines, wherever the data blocks are present.
- If we try to perform aggregation of data at the mapper, it requires communication between all the mapper functions, which may be running on different machines. This would consume high network bandwidth and could cause network bottlenecks.
71) Explain map-only job?
MapReduce is the data processing layer of Hadoop. It is the framework for writing applications that process the vast amounts of data stored in HDFS. It processes this huge amount of data in parallel by dividing the job into a set of independent tasks (sub-jobs). In Hadoop, MapReduce has 2 phases of processing: Map and Reduce.
In the Map phase we specify all the complex logic/business rules/costly code. Map takes a set of data and converts it into another set of data, breaking individual elements into tuples (key-value pairs). In the Reduce phase we specify light-weight processing like aggregation/summation. Reduce takes the output from the map as input and combines the tuples (key-value) based on the key, modifying the value of the key accordingly.
Consider a case where we just need to perform an operation with no aggregation required. In such a case, we will prefer a "Map-only job" in Hadoop: the map does all the work on its InputSplit, the reducer does no job, and the map output is the final output.
We can achieve this by setting job.setNumReduceTasks(0) in the driver configuration. This sets the number of reducers to 0, so only the mapper does the complete task.
Read about map-only job in Hadoop Mapreduce in detail.
72) What is SequenceFileInputFormat in Hadoop MapReduce?
SequenceFileInputFormat is an InputFormat which reads sequence files. Sequence files are binary files that store sequences of binary key-value pairs. These files are block-compressed and provide direct serialization and deserialization of several arbitrary data types.
Here both key and value are user-defined.
SequenceFileAsTextInputFormat is a variant of SequenceFileInputFormat. It converts the sequence file's keys and values to Text objects by calling toString() on them. This InputFormat makes sequence files suitable input for Streaming.
SequenceFileAsBinaryInputFormat is a variant of SequenceFileInputFormat with which we can extract the sequence file's keys and values as opaque binary objects.
The above 58 – 72 MapReduce Hadoop interview questions and answers were for freshers; however, experienced candidates can also go through them to revise the basics.
MapReduce Hadoop Interview questions and Answers for Experienced
73) What is KeyValueTextInputFormat in Hadoop?
KeyValueTextInputFormat – It treats each line of input as a separate record and breaks the line itself into a key and a value, using the tab character ('\t') to split the line into a key-value pair.
Key- Everything up to tab character.
Value- Remaining part of the line after tab character.
Consider the following input file, where → represents a (horizontal) tab character:
But→ his face you could not see
Account→ of his beaver hat
Hence, the output:
Key- But
Value- his face you could not see
Key- Account
Value- of his beaver hat
74) Differentiate Reducer and Combiner in Hadoop MapReduce?
Combiner – The combiner is a mini-reducer that performs the local reduce task. It runs on the map output and passes its output to the reducer as input. A combiner is usually used for network optimization, and its execution is not guaranteed.
Reducer – The reducer takes the output of all the mappers (after shuffle and sort) as input, aggregates the values for each key, and produces the final output, which is stored in HDFS. Unlike the combiner, the reducer always runs (unless the number of reducers is set to zero).
75) Explain the process of spilling in MapReduce?
The map task writes its output to a circular memory buffer rather than directly to disk. The size of this buffer is 100 MB by default; we can change it using the mapreduce.task.io.sort.mb property.
Spilling is the process of copying the data from the memory buffer to disk. It takes place when the content of the buffer reaches a certain threshold size: by default, a background thread starts spilling the contents after 80% of the buffer has filled. Therefore, for a 100 MB buffer, spilling starts once the content of the buffer reaches a size of 80 MB.
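The threshold arithmetic is simple enough to sketch directly (illustrative names; integer MB used to keep the math exact):

```java
// Sketch: the spill threshold is the buffer size (mapreduce.task.io.sort.mb,
// 100 MB by default) times the spill percentage (80% by default).
public class SpillThreshold {
    static long spillAtMb(long bufferMb, int spillPercent) {
        return bufferMb * spillPercent / 100;
    }

    public static void main(String[] args) {
        System.out.println(spillAtMb(100, 80)); // prints 80
    }
}
```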
76) What happens if the number of reducers is set to 0 in Hadoop?
If we set the number of reducers to 0:
- Then no reducer will execute and no aggregation will take place.
- In such a case we will prefer a "Map-only job" in Hadoop: the map does all the work on its InputSplit, the reducer does no job, and the map output is the final output.
Between the map and reduce phases there is a sort and shuffle phase, responsible for sorting the keys in ascending order and then grouping values based on the same keys. This phase is very expensive. If the reduce phase is not required we should avoid it, as avoiding the reduce phase eliminates the sort and shuffle phase as well. This also saves network congestion: in shuffling, the output of the mapper travels to the reducer, and when the data size is huge, large amounts of data travel to the reducer.
77) What is Speculative Execution in Hadoop?
MapReduce breaks jobs into tasks and runs these tasks in parallel rather than sequentially, thus reducing the overall execution time.
Hadoop framework doesn’t try to fix and diagnose slow running task. It tries to detect them and run backup tasks for them. This process is called Speculative execution in Hadoop. These backup tasks are called Speculative tasks in Hadoop.
First of all, the Hadoop framework launches all the tasks for the job. Then it launches speculative tasks for those tasks that have been running for some time (about a minute) and have not made much progress, on average, compared with the other tasks of the job.
If the original task completes before the speculative task, the framework kills the speculative task; on the other hand, it kills the original task if the speculative task finishes first.
Read about Speculative Execution in detail.
78) What are counters in Hadoop MapReduce?
Counters in MapReduce are a useful channel for gathering statistics about the MapReduce job: statistics for quality control or at the application level. They are also useful for problem diagnosis.
Counters validate that:
- The number of bytes read and written within the map/reduce job is correct or not.
- The number of tasks launched and successfully run in the map/reduce job is correct or not.
- The amount of CPU and memory consumed is appropriate for our job and cluster nodes.
There are two types of counters:
- Built-In Counters – In Hadoop there are some built-in counters for every job. These report various metrics; for example, there are counters for the number of bytes and records, which allow us to confirm that the job consumes the expected amount of input and produces the expected amount of output.
- User-Defined Counters – Hadoop MapReduce permits user code to define a set of counters. These are then increased as desired in the mapper or reducer. For example, in Java, use ‘enum’ to define counters.
Read about Counters in detail.
79) How to submit extra files (jars, static files) for a MapReduce job during runtime in Hadoop?
MapReduce framework provides Distributed Cache to caches files needed by the applications. It can cache read-only text files, archives, jar files etc.
An application which needs to use distributed cache to distribute a file should make sure that the files are available on URLs.
URLs can be either hdfs:// or http://.
Now, if the user mentions a file present at an hdfs:// or http:// URL as a cache file to distribute, the framework will copy the cache file to all the nodes before starting the tasks on those nodes. The files are copied only once per job, and applications should not modify those files.
80) What is TextInputFormat in Hadoop?
TextInputFormat is the default InputFormat. It treats each line of the input file as a separate record. For unformatted data or line-based records like log files, TextInputFormat is useful. By default, RecordReader also uses TextInputFormat for converting data into key-value pairs. So,
- Key- It is the byte offset of the beginning of the line.
- Value- It is the contents of the line, excluding line terminators.
File content is- on the top of the building
so,
Key- 0
Value- on the top of the building
TextInputFormat also provides below 2 types of RecordReader-
- LineRecordReader
- SequenceFileRecordReader
Top Interview Questions for Hadoop MapReduce
81) How many Mappers run for a MapReduce job?
Number of mappers depends on 2 factors:
- The amount of data we want to process along with the block size. It is driven by the number of InputSplits. For example, if we have a block size of 128 MB and we expect 10 TB of input data, we will have about 82,000 maps. Ultimately the InputFormat determines the number of maps.
- The configuration of the slave, i.e. the number of cores and RAM available on the slave. The right number of maps per node can be between 10 and 100. The Hadoop framework should give 1 to 1.5 cores of processor to each mapper; thus, for a 15-core processor, 10 mappers can run.
In a MapReduce job, we can control the number of Mappers by changing the block size: changing the block size increases or decreases the number of InputSplits.
By using JobConf's conf.setNumMapTasks(int num) we can increase the number of map tasks.
Mapper= {(total data size)/ (input split size)}
If data size= 1 Tb and input split size= 100 MB
Mapper= (1000*1000)/100= 10,000
82) How many Reducers run for a MapReduce job?
Answer these types of MapReduce Hadoop interview questions very shortly and to the point.
With the help of Job.setNumReduceTasks(int) the user sets the number of reducers for the job. To set the right number of reducers, use the formula:
0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>).
With 0.95, all the reducers can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes finish their first round of reduces and launch a second wave of reduces.
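The two rules of thumb can be sketched with a small helper (the class name and the example cluster of 10 nodes with 8 containers each are hypothetical):

```java
// Sketch: reducer count = factor * (nodes * max containers per node),
// with factor 0.95 (single wave) or 1.75 (two waves).
public class ReducerCount {
    static int numReducers(int nodes, int containersPerNode, double factor) {
        return (int) Math.round(factor * nodes * containersPerNode);
    }

    public static void main(String[] args) {
        // Hypothetical cluster: 10 nodes, 8 containers each -> 80 slots.
        System.out.println(numReducers(10, 8, 0.95)); // prints 76
        System.out.println(numReducers(10, 8, 1.75)); // prints 140
    }
}
```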
With the increase of number of reducers:
- Load balancing increases.
- Cost of failures decreases.
- Framework overhead increases.
These are very common type of MapReduce Hadoop interview questions and answers faced during the interview of an experienced professional.
83) How to sort intermediate output based on values in MapReduce?
Hadoop MapReduce automatically sorts the key-value pairs generated by the mapper, and sorting takes place on the basis of keys. Thus, to sort intermediate output based on values, we need to use secondary sorting.
There are two possible approaches:
- First, using the reducer: the reducer reads and buffers all the values for a given key, then does an in-reducer sort on all the values. Since the reducer receives all the values for a given key (a huge list of values), this can cause the reducer to run out of memory; this approach works well only if the number of values is small.
- Second, using the MapReduce framework itself: sort the reducer input values by creating a composite key (the "value-to-key conversion" approach), i.e. by adding a part of, or the entire, value to the natural key. This approach is scalable and will not generate out-of-memory errors.
We need a custom partitioner to make sure that all the data with the same natural key (even though the composite key includes the value) goes to the same reducer, and a custom grouping comparator so that the data is grouped by the natural key once it arrives at the reducer.
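The value-to-key conversion idea can be illustrated in plain Java (this is only the sorting part of the trick; the real Hadoop version also needs the custom partitioner and grouping comparator mentioned above, and all names here are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: folding the value into a composite key so that the key sort
// also orders the values within each natural key.
public class SecondarySortSketch {
    static String[][] sortByCompositeKey(String[][] pairs) {
        String[][] sorted = pairs.clone();
        // Composite key = (natural key, value): compare natural key first,
        // then break ties on the value.
        Arrays.sort(sorted, Comparator.<String[], String>comparing(p -> p[0])
                                      .thenComparing(p -> p[1]));
        return sorted;
    }

    public static void main(String[] args) {
        // (naturalKey, value) pairs as they might leave the mappers
        String[][] pairs = {{"b", "3"}, {"a", "2"}, {"b", "1"}, {"a", "9"}};
        for (String[] p : sortByCompositeKey(pairs)) {
            System.out.println(p[0] + " " + p[1]);
        }
    }
}
```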
84) What is purpose of RecordWriter in Hadoop?
Reducer takes the mapper output (intermediate key-value pairs) as input and runs a reduce function on them to generate output (zero or more key-value pairs). The output of the reducer is the final output.
RecordWriter writes these output key-value pairs from the Reducer phase to output files. OutputFormat determines how RecordWriter writes these key-value pairs to the output files. Hadoop provides OutputFormat instances which help write files to HDFS or the local disk.
85) What are the most common OutputFormat in Hadoop?
Reducer takes mapper output as input and produces output (zero or more key-value pair). RecordWriter writes these output key-value pair from the Reducer phase to output files. So, OutputFormat determines, how RecordWriter writes these key-value pairs in Output files.
The FileOutputFormat.setOutputPath() method is used to set the output directory, and every Reducer writes a separate file in the common output directory.
Most common OutputFormat are:
- TextOutputFormat – It is the default OutputFormat in MapReduce. TextOutputFormat writes key-value pairs on individual lines of text files. Keys and values of this format can be of any type. Because TextOutputFormat turns them to string by calling toString() on them.
- SequenceFileOutputFormat – This OutputFormat writes sequences files for its output. It is also used between MapReduce jobs.
- SequenceFileAsBinaryOutputFormat – It is another variant of SequenceFileOutputFormat, which writes keys and values to a sequence file in binary format.
- DBOutputFormat – We use this for writing to relational databases and HBase. It sends the reduce output to a SQL table. It accepts key-value pairs, where the key has a type extending DBWritable.
Read about outputFormat in detail.
86) What is LazyOutputFormat in Hadoop?
FileOutputFormat subclasses will create output files (part-r-nnnn), even if they are empty. Some applications prefer not to create empty files, which is where LazyOutputFormat helps.
LazyOutputFormat is a wrapper OutputFormat. It makes sure that the output file is created only when the first record is emitted for a given partition.
To use LazyOutputFormat, call its setOutputFormatClass() method with the JobConf.
To enable LazyOutputFormat, Streaming and Pipes support a -lazyOutput option.
87) How to handle record boundaries in Text files or Sequence files in MapReduce InputSplits?
The RecordReader of an InputSplit in MapReduce will "start" and "end" at a record boundary.
In a SequenceFile, there is a 20-byte sync mark roughly every 2k bytes between records. These sync marks allow the RecordReader, given a split (defined by a file, offset and length), to seek to the first sync mark after the start of the split and continue processing records until it reaches the first sync mark after the end of the split.
Similarly, text files use newlines instead of sync marks to handle record boundaries.
88) What are the main configuration parameters in a MapReduce program?
The main configuration parameters are:
- Input format of data.
- Job’s input locations in the distributed file system.
- Output format of data.
- Job’s output location in the distributed file system.
- JAR file containing the mapper, reducer and driver classes
- Class containing the map function.
- Class containing the reduce function.
89) Is it mandatory to set input and output type/format in MapReduce?
No, it is not mandatory.
Hadoop cluster, by default, takes the input and the output format as ‘text’.
TextInputFormat – MapReduce default InputFormat is TextInputFormat. It treats each line of each input file as a separate record and also performs no parsing. For unformatted data or line-based records like log files, TextInputFormat is also useful. By default, RecordReader also uses TextInputFormat for converting data into key-value pairs.
TextOutputFormat- MapReduce default OutputFormat is TextOutputFormat. It also writes (key, value) pairs on individual lines of text files. Its keys and values can be of any type.
90) What is Identity Mapper?
Identity Mapper is the default Mapper provided by Hadoop. When MapReduce program has not defined any mapper class then Identity mapper runs. It simply passes the input key-value pair for the reducer phase. Identity Mapper does not perform computation and calculations on the input data. So, it only writes the input data into output.
The class name is org.apache.hadoop.mapred.lib.IdentityMapper
91) What is Identity reducer?
Identity Reducer is the default Reducer provided by Hadoop. When MapReduce program has not defined any mapper class then Identity mapper runs. It does not mean that the reduce step will not take place. It will take place and related sorting and shuffling will also take place. But there will be no aggregation. So you can use identity reducer if you want to sort your data that is coming from the map but don’t care for any grouping.
The above MapReduce Hadoop interview questions and answers i.e Q. 73 – Q. 91 were for experienced but freshers can also refer these MapReduce Hadoop interview questions and answers for in depth knowledge. Now let’s move forward with some advanced MapReduce Hadoop interview questions and answers.
Advanced Interview Questions and Answers for Hadoop MapReduce
92) What is Chain Mapper?
We can use multiple Mapper classes within a single Map task by using Chain Mapper class. The Mapper classes invoked in a chained (or piped) fashion. The output of the first becomes the input of the second, and so on until the last mapper. The Hadoop framework write output of the last mapper to the task’s output.
The key benefit of this feature is that the Mappers in the chain do not need to be aware that they execute in a chain. And, this enables having reusable specialized Mappers. We can combine these mappers to perform composite operations within a single task in Hadoop.
Hadoop framework take Special care when create chains. The key/values output by a Mapper are valid for the following mapper in the chain.
The class name is org.apache.hadoop.mapred.lib.ChainMapper
This is one of the very important Mapreduce Hadoop interview questions and answers
93) What are the core methods of a Reducer?
Reducer process the output the mapper. After processing the data, it also produces a new set of output, which it stores in HDFS. And, the core methods of a Reducer are:
- setup()- Various parameters like the input data size, distributed cache, heap size, etc this method configure. Function Definition- public void setup (context)
- reduce() – Reducer call this method once per key with the associated reduce task. Function Definition- public void reduce (key, value, context)
- cleanup() – Reducer call this method only once at the end of reduce task for clearing all the temporary files. Function Definition- public void cleanup (context)
94) What are the parameters of mappers and reducers?
The parameters for Mappers are:
- LongWritable(input)
- text (input)
- text (intermediate output)
- IntWritable (intermediate output)
The parameters for Reducers are:
- text (intermediate output)
- IntWritable (intermediate output)
- text (final output)
- IntWritable (final output)
95) What is the difference between TextinputFormat and KeyValueTextInputFormat class?
TextInputFormat – It is the default InputFormat. It treats each line of the input file as a separate record. For unformatted data or line-based records like log files, TextInputFormat is also useful. So,
- Key- It is byte offset of the beginning of the line within the file.
- Value- It is the contents of the line, excluding line terminators.
KeyValueTextInputFormat – It is like TextInputFormat. The reason is’). so,
- Key- Everything up to tab character.
- Value- Remaining part of the line after tab character.
For example, consider a file contents as below:
AL#Alabama
AR#Arkansas
FL#Florida
So, TextInputFormat
Key value
0 AL#Alabama 14
AR#Arkansas 23
FL#Florida
So, KeyValueTextInputFormat
Key value
AL Alabama
AR Arkansas
FL Florida
These are some of the advanced MapReduce Hadoop interview Questions and answers
96) How is the splitting of file invoked in Hadoop ?
InputFormat is responsible for creating InputSplit, which is the logical representation of data. Further Hadoop framework divides split into records. Then, Mapper process each record (which is a key-value pair).
By running getInputSplit() method Hadoop framework invoke Splitting of file . getInputSplit() method belongs to Input Format class (like FileInputFormat) defined by the user.
97) How many InputSplits will be made by hadoop framework?
InputFormat is responsible for creating InputSplit, which is the logical representation of data. Further Hadoop framework divides split into records. Then, Mapper process each record (which is a key-value pair).
MapReduce system use storage locations to place map tasks as close to split’s data as possible. By default, split size is approximately equal to HDFS block size (128 MB).
For, example the file size is 514 MB,
128MB: 1st block, 128Mb: 2nd block, 128Mb: 3rd block,
128Mb: 4th block, 2Mb: 5th block
So, 5 InputSplit is created based on 5 blocks.
If in case you have any confusion about any MapReduce Hadoop Interview Questions, do let us know by leaving a comment. we will be glad to solve your queries.
98) Explain the usage of Context Object.
With the help of Context Object, Mapper can easily interact with other Hadoop systems. It also helps in updating counters. So counters can report the progress and provide any application-level status updates.
It contains configuration details for the job.
99) When is it not recommended to use MapReduce paradigm for large scale data processing?
For iterative processing use cases it is not suggested to use MapReduce. As it is not cost effective, instead Apache Pig can be used for the same.
100) What is the difference between RDBMS with Hadoop MapReduce?
Size of Data
- RDBMS- Traditional RDBMS can handle upto gigabytes of data.
- MapReduce- Hadoop MapReduce can hadnle upto petabytes of data or more.
Updates
- RDBMS- Read and Write multiple times.
- MapReduce- Read many times but write once model.
Schema
- RDBMS- Static Schema that needs to be pre-defined.
- MapReduce- Has a dynamic schema
Processing Model
- RDBMS- Supports both batch and interactive processing.
- MapReduce- Supports only batch processing.
Scalability
- RDBMS- Non-Linear
- MapReduce- Linear
101) Define Writable data types in Hadoop MapReduce.
Hadoop reads and writes data in a serialized form in the writable interface. The Writable interface has several classes like Text, IntWritable, LongWriatble, FloatWritable, BooleanWritable. Users are also free to define their personal Writable classes as well.
102) Explain what does the conf.setMapper Class do in MapReduce?
Conf.setMapperclass sets the mapper class. Which includes reading data and also generating a key-value pair out of the mapper.
Number 40-52 were the advanced HDFS Hadoop interview question and answer to get indepth knowledge in handling difficult Hadoop interview questions and answers.
This was all about the Hadoop Interview Questions and Answers
These questions are frequently asked MapReduce Hadoop interview questions and answers. You can read here some more Hadoop MapReduce interview questions and answers.
After going through these MapReduce Hadoop Interview questions and answers you will be able to confidently face a interview and will be able to answer MapReduce Hadoop Interview questions and answers asked in your interview in the best manner. These MapReduce Hadoop Interview Questions are suggested by the experts at DataFlair
Key –
Q.53 – Q57 Basic MapReduce Hadoop interview questions and answers
Q.58 – Q72 MapReduce Hadoop interview questions and answer for Freshers
Q.73 -Q. 80 Hadoop MapReduce Interview Questions for Experienced
Q.81 – Q.91 Top questions asked in Hadoop Interview
Q.92 – Q.102 were the advanced MapReduce Hadoop interview questions and answers
These MapReduce for Mapreduce, Drop a comment and our support team will be happy to help you.
Hope the tutorial on Hadoop interview questions and answers was helpful to you.
See Also –
- Hadoop Interview Questions and Answers for freshers and experienced
- Top 50 Hadoop Interview Questions and Answers
- Top 50+ Hadoop HDFS Interview Questions and Answers
- Top 60 Hadoop MapReduce Interview Questions and Answers
superb work done !!!!!!!!!!
very good job | https://data-flair.training/blogs/hadoop-interview-questions-and-answers-by-experts/ | CC-MAIN-2018-34 | refinedweb | 13,301 | 58.89 |
package DB_Cache;
use strict;
=head1 DB_Cache: caching of SELECT calls with DBI
By Alex Gough, 2000. You may use and redistribute
this code under the same terms as perl itself.
=head1 Description
A small package which reduces the number of times a database
needs to be queried when multiple execute methods are called
on a statement handle when some of them may be the same.
=head1 Synopsis
use strict;
use DBI;
use DB_Cache;
my $dbh = DBI->connect('DBI:mysql:testcache','cache', 'test');
my $sth = $dbh->prepare('SELECT * FROM foo WHERE col1 = ?');
my $cached_sth = DB_Cache->new($sth);
while (my $href = $cached_sth->fetchrow_hashref('value')) {
foreach (keys %$href) {
print "$_:${$href}{$_}\n";
}
}
while (my $ref = $cached_sth->fetchrow_arrayref('other value')) {
print join ':', @{$ref};
print "\n";
}
$cached_sth->finish;
$dbh->disconnect;
=head1 Waffle
Cached statment handles are created by handing a prepared
statment handle to this package using the new method.
The first time a query is made, the statment is executed
and the first relevant row returned (undef otherwise).
Other rows are fetched by the package and stored for later
use. These can be requested by repeated fetch calls to the
cached handle, undef is returned once all rows have been
returned.
The next time the handle is queried, the rows are returned
again, without the database being contacted again.
Three fetching methods are provided, fetchrow_array,
fetchrow_arrayref and fetchrow_hashref. The first two
both use the same call to the database, while
fetchrow_hashref uses its own call, so if the database
changes between a hash and array fetch, they will have
different contents. Of course, if the database is likely
to change, you do not really want to be caching.
=cut
=head2 new
$cached_handle = DB_Cache->new( [prepared statement handle] );
=cut
sub new {
return bless {_statement=> $_[1], # a DBI sth
_entries=>{}, # holds cached results
_results=>{}, # holds number of results returned
_current =>{}, # holds current place in fetch queue
}, $_[0];
}
=head2 fetchrow_array
@row = $cached_handle->fetchrow_array( [bind values] );
=cut
sub fetchrow_array { # this just uses fetchrow_arrayref then dereferen
+ces
my $self = shift;
my $ref = $self->fetchrow_arrayref(@_);
return @{$ref} if ref($ref);
return ();
}
=head2 fetchrow_hashref
$hash_ref = $cached_handle->fetchrow_hashref( [bind values] );
=cut
sub fetchrow_hashref {
my $self = shift;
my $ref = $self->_fetchrow('hash', @_);
return $ref if ref($ref);
return undef;
}
=head2 fetchrow_arrayref
$array_ref = $cached_handle->fetchrow_arrayref( [bind values] );
=cut
sub fetchrow_arrayref {
my $self = shift;
my $ref = $self->_fetchrow('array', @_);
return $ref if ref($ref);
return undef;
}
=head2 renew
$cached_handle->renew( [bind values] );
Makes another query to the database for the given bind values.
Refetches either or both of hash and array types, if either is
currently in use. Returns result of (array getting , hash getting).
=cut
sub renew {
my $self = shift;
my @args = @_;
my ($args, $arv, $hrv);
$args = join '', map {s/\?/??/g;$_} @args; # the keywords to be exec
+uted by the statement
if (exists $self->{_entries}{hash}{$args}) {
delete $self->{_entries}{hash}->{$args};
$hrv = $self->_fetchrow('hash', @_);
$self->{_current}{hash}{$args} = $self->{_results}{$args};
}
if (exists $self->{_entries}{array}{$args}) {
delete $self->{_entries}{array}->{$args};
$arv = $self->_fetchrow('array', @_);
$self->{_current}{array}{$args} = $self->{_results}{$args};
}
return ($arv, $hrv);
}
=head2 finish
$cached_handle->finish;
This acts like a normal finish call, and frees up the memory being
used to cache any values. Will not hurt anything which is currently
referenced though.
=cut
sub finish {
my $self = shift;
$self->{_statement}->finish;
%{$self} = ();
return 1;
}
# This acts like an execute an fetch rolled into one. The first time
+this is called
# with a certain set of arguments it will attempt to fetch all relevan
+t rows from
# the database and store them away as appropriate. If the execute fai
+ls (nothing to
# fetch) it will return a non-reference. Takes 'array' or 'hash' as f
+irst arg.
sub _fetchrow {
my $self = shift;
my $ref_type = shift;
my @args = @_;
my $args;
$args = join '', map {s/\?/??/g;$_} @args; # the keywords to be exec
+uted by the statement
if (exists $self->{_entries}{$ref_type}{$args}) { # have already got
+ this, release from cache
my $count = $self->{_current}{$ref_type}{$args}--;
if ($count == 0) { # the last matching item has already been retur
+ned
$self->{_current}{$ref_type}{$args} = $self->{_results}{$args};
+# reset counter
return undef;
}
return $self->{_entries}{$ref_type}{$args}[-$count];
}
my $r = $self->{_statement}->execute(@_);
return $r if $r < 1; # ie, nothing to fetch
$self->{_results}{$args} = $r;
$self->{_current}{$ref_type}{$args} = $r-1; # so the next fetched it
+em is #2
$self->{_entries}{$ref_type}{$args} = [];
# fill up array of references to rowhashes or rowarrays, as reqd.
if ($ref_type eq 'hash') {
while (my $ref = $self->{_statement}->fetchrow_hashref) {
push @{$self->{_entries}{$ref_type}{$args}}, $ref;
}
}
elsif ($ref_type eq 'array') {
while (my $ref = $self->{_statement}->fetchrow_arrayref) {
push @{$self->{_entries}{$ref_type}{$args}}, $ref;
}
}
else {warn "No reference type supplied"; return undef};
return $self->{_entries}{$ref_type}{$args}[0]; # return the first fe
+tched value
}
1; # for the grace of God, | http://www.perlmonks.org/index.pl?node_id=43856 | CC-MAIN-2015-06 | refinedweb | 802 | 51.82 |
The Append Data tool allows you to append features to an existing hosted layer in your ArcGIS Enterprise organization. Append Data allows you to update or modify existing datasets.
A large chain restaurant collects monthly sales records for each of its locations. To avoid maintaining datasets for each location each month, the company wants to use one annual sales layer for each location. The Append Data tool enables it to append data to a master dataset at the end of each month when its newly collected sales records are available.
Ecological Marine Unit studies are being conducted to better understand ocean floor patterns of the Atlantic Ocean over time. The studies are conducted using data gathered from various environmental organizations that have collected information from the Atlantic Ocean in the past. Because each organization maintains its own datasets, the Append Data tool can be leveraged to append multiple ocean floor datasets into one layer..
Usage notes
The input layer is defined using the Choose layer to append to drop down. The input layer must be a hosted point, line, area, or table feature layer.
The append layer is defined using the Choose layer to append drop down. The append layer can be a point, line, area, or table big data file share dataset or feature layer.
It is required that the input layer and append layer have the same geometry type.
If time is enabled on the input layer, the two layers must have the same time type. To learn more about feature layer time settings, see Configure time settings. To learn more about big data file share time settings, see Time in Understanding a big data file share manifest.
You can optionally use the Append all features or define a subset filter by clicking the Query button
. Only features that match the condition will be appended. For example, if you have a field named temperature, you could append all features with temperature values greater than 10 with the condition temperature is greater than 10.
The Choose how to append fields field mapping table appears after the input layer and append layer are selected. It will automatically populate with input layer fields and their matching values from the append layer.
By default, input layer fields will be appended with null values when they do not have a matching field in the append layer. Optionally, you can use the Choose how to append fields field mapping table to append custom values of the following two types:
- Append Field—Match an input layer field with an append layer field of a different name but the same type.
- Expression—Calculate input layer field values for append features. To learn more about using Arcade expressions and Append Data see, Arcade expressions with Append Data.
For example, human migration researchers want to append datasets with the input layer and append layer schemas shown below. Both layers have a field in common named Country of type text and both have two additional fields with the same data type, but unique names. The input layer has Pop_ and Pop_Density fields, and the append layer has Population and area_km2 fields. The researchers want to match the Country field to the Country field, append the Population field to the Pop_ field, and calculate the population density for the Pop_Density field using a calculation.
By default, the field mapping table will match Country fields based on the field name and field type. The Pop_ and Pop_Density fields have no match in the append layer and will be appended with null values.
Use the Choose how to append fields field mapping table to match the input layer Pop_ field with the append layer Population field by selecting Population from the drop down list next to Pop_ under Append Value. Use the Expression option from the drop down list next to Pop_Density to calculate values for the appending features by using the append layer fields Population and area_km2 to build the Arcade expression $feature["Population"]/$feature["area_km2"].
The fields of the input layer are never modified. Any append layer fields that are not matched in the Choose how to append fields field mapping table will be excluded from the appended results.
Limitations
- The features you append must have the same geometry type as the feature layer you append to.
- The layer you append to must be an existing layer on your hosting server. If you want to append to a different layer, you must create a hosted layer of that dataset first. To do this, use the Copy To Data Store tool, or share a layer to your portal. To learn more about sharing layers, see Introduction to sharing web layers. Once your dataset is a hosted layer, you can run the Append Data tool to append features to it.
- The Choose how to append fields field mapping table does not allow fields to be added, removed, or renamed in the input layer.
ArcGIS API for Python example
The Append Data tool is available through ArcGIS API for Python.
This example appends a big data file share of earthquakes in the Atlantic ocean to a feature layer of earthquakes.
# Import the required ArcGIS API for Python modules
import arcgis
from arcgis.gis import GIS
from arcgis.geoanalytics import manage_data
# Connect to your ArcGIS Enterprise portal and check that GeoAnalytics is supported
portal = GIS("", "gis_publisher", "my_password", verify_cert=False)
if not portal.geoanalytics.is_supported():
print("Quitting, GeoAnalytics is not supported")
exit(1)
# Find the big data file share dataset you're interested in using for analysis
search_result = portal.content.search("", "Big Data File Share")
# Look through search results for a big data file share with the matching name
bd_file = next(x for x in search_result if x.title == "bigDataFileShares_NaturalDisaters")
# Look through the big data file share for Earthquakes_atlantic
eq_atlantic = next(x for x in bd_file.layers if x.properties.name == "Earthquakes_atlantic")
# Find a feature layer named "Earthquakes" in your ArcGIS Enterprise portal
earthquakes = portal.content.search("Earthquakes", "Feature Layer")
earthquakes_layer = layer_result[0].layers[0]
# Set the tool environment settings
arcgis.env.process_spatial_reference = 32618
arcgis.env.verbose = False
append_data_result = manage_data.append_data(earthquakes_layer, eq_atlantic)
# Visualize the tool results if you are running Python in a Jupyter Notebook
processed_map = portal.map('World', 1)
processed_map.add_layer(append_data_result)
This example appends a big data file share of earthquakes in the Atlantic ocean to a feature layer of earthquakes.
Similar tools
Use the ArcGIS GeoAnalytics Server Append Data tool to append features to layers on your hosting server. Other tools may be useful in solving similar but slightly different problems.
Map Viewer analysis tools
Select and copy data to a new feature layer in your portal using the ArcGIS GeoAnalytics Server Copy To Data Store tool.
Calculate values for features in a new or existing field using the ArcGIS GeoAnalytics Server Calculate Field tool.
ArcGIS Desktop analysis tools
To run this tool from ArcGIS Pro, your active portal must be Enterprise 10.6.1 or later. You must sign in using an account that has privileges to perform GeoAnalytics Feature Analysis.
Perform similar append operations in ArcGIS Pro with the Append geoprocessing tool. | https://enterprise.arcgis.com/en/portal/10.7/use/geoanalytics-append-data.htm | CC-MAIN-2022-05 | refinedweb | 1,183 | 54.12 |
More quiet command-calling: adding an inspection dimension inside AutoCAD using .NET
I'm still a little frazzled after transcribing the 18,000 word interview with John Walker (and largely with two fingers - at such times the fact that I've never learned to touch-type is a significant cause of frustration, as you might imagine). I'm also attending meetings all this coming week, so I've gone for the cheap option, once again, of dipping into my magic folder of code generated and provided by my team.
The technique for this one came from a response sent out by Philippe Leefsma, from DevTech EMEA, but he did mention a colleague helped him by suggesting the technique. So thanks to Philippe and our anonymous colleague for the initial idea of calling -DIMINSPECT.
The problem solved by Philippe's code is that no API is exposed via .NET to enable the "inspection" capability of dimensions inside AutoCAD. The protocol is there in ObjectARX, in the AcDbDimension class, but this has - at the time of writing - not been exposed via the managed API. One option would be to create a wrapper or mixed-mode module to call through to unmanaged C++, but the following approach simply calls through to the -DIMINSPECT command (the command-line version of DIMINSPECT) to set the inspection parameters.
I've integrated - and extended - the technique shown in this previous post to send the command as quietly as possible. One problem I realised with the previous implementation is that UNDO might leave the user in a problematic state - with the NOMUTT variable set to 1. This modified approach does a couple of things differently:
- Rather than using the NOMUTT command to set the system variable, it launches another custom command, FINISH_COMMAND
- This command has been registered with the NoUndoMarker command-flag, to make it invisible to the undo mechanism (well, at least in terms of automatic undo)
- It sets the NOMUTT variable to 0
- It should be possible to share this command across others that have the need to call commands quietly
- It uses the COM API to create an undo group around the operations we want to consider one "command"
- The implementation related to the "quiet command calling" technique is wrapped up in a code region to make it easy to hide
One remaining - and, so far, unavoidable - piece of noise on the command-line is due to our use of the (handent) function: it echoes entity name of the selected dimension.
Here's the C# code:
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.Interop;
namespace InspectDimension
{
public class Commands
{
[CommandMethod("INSP")]
static public void InspectDim()
{
Document doc =
Application.DocumentManager.MdiActiveDocument;
Database db = doc.Database;
Editor ed = doc.Editor;
PromptEntityOptions peo =
new PromptEntityOptions("\nSelect a dimension: ");
peo.SetRejectMessage(
"\nEntity must be a dimension."
);
peo.AddAllowedClass(typeof(Dimension), false);
PromptEntityResult per = ed.GetEntity(peo);
if (per.Status != PromptStatus.OK)
return;
Transaction tr =
db.TransactionManager.StartTransaction();
using (tr)
{
Dimension dim =
tr.GetObject(per.ObjectId, OpenMode.ForRead)
as Dimension;
if (dim != null)
{
string shape = "Round";
string label = "myLabel";
string rate = "100%";
string cmd =
"-DIMINSPECT Add (handent \"" +
dim.Handle + "\"" + ") \n" +
shape + "\n" + label + "\n" +
rate + "\n";
SendQuietCommand(doc, cmd);
};
tr.Commit();
}
}
#region QuietCommandCalling
const string kFinishCmd = "FINISH_COMMAND";
private static void SendQuietCommand(
Document doc,
string cmd
)
{
// Get the old value of NOMUTT
object nomutt =
Application.GetSystemVariable("NOMUTT");
// Add the string to reset NOMUTT afterwards
AcadDocument oDoc =
(AcadDocument)doc.AcadDocument;
oDoc.StartUndoMark();
cmd += "_" + kFinishCmd + " ";
// Set NOMUTT to 1, reducing cmd-line noise
Application.SetSystemVariable("NOMUTT", 1);
doc.SendStringToExecute(cmd, true, false, false);
}
[CommandMethod(kFinishCmd, CommandFlags.NoUndoMarker)]
static public void FinishCommand()
{
Document doc =
Application.DocumentManager.MdiActiveDocument;
AcadDocument oDoc =
(AcadDocument)doc.AcadDocument;
oDoc.EndUndoMark();
Application.SetSystemVariable("NOMUTT", 0);
}
#endregion
}
}
Here are the results of running the INSP command and selecting a dimension object. First the command-line output:
Command: INSP
Select a dimension: <Entity name: -401f99f0>
Command:
And now the graphics, before and after calling INSP and selecting the dimension:
In the end the post didn't turn out to be quite as quick to write as expected, but anyway - so it goes. I'm now halfway from Zürich to Washington, D.C., and had the time to kill. I probably won't have the luxury when I'm preparing my post for the middle of the week, unless I end up suffering from jet-lag. :-)
Thanks for another great example.
For this type of problem, I generally prefer a non-command based solution, like the one at this URL:
Posted by: Tony Tanzillo | September 16, 2008 at 12:46 AM
Interesting - thanks, Tony.
Kean
Posted by: Kean | September 16, 2008 at 04:10 AM
Thank you for your example!
Speaking of quiet command execution, shouldn't the echo parameter in SendStringToExecute() just replace the NOMUTT variable use?
I have also a more general question: I'am trying to keep my code free of Interop references (this is maybe another point to discuss) and I don't find a way to replace the StartUndoMark() and EndUndoMark() methods. Is there another way to do that without COM referncing?
Posted by: Massimo Cicognani | September 16, 2008 at 11:10 AM
You mean the quiet flag should prevent all information being displayed on the command-line by the command? I can see a case where you would want the command itself to be hidden, but information to be presented.
There's no way I know to avoid this particular use of the COM API. I personally don't see a problem with this - there's no significant performance penalty and I see no great risk of the COM API being deprecated.
Regards,
Kean
Posted by: Kean | September 16, 2008 at 12:58 PM
Thank you Kean,
my personal concern is to make a reference to 'version dependant' type-libs... i.e. What happens if my application is executed with a different version of AutoCAD, maybe 2008 or 2010, anf I got a reference to 'AutoCAD 2009' typelibrary?
Maybe I'm a little O.T. here... sorry about that
Posted by: Massimo Cicognani | September 16, 2008 at 05:05 PM
It's easy to invoke COM methods without having to reference the Interop assemblies, using late binding:
Document doc = Autodesk.AutoCAD.ApplicationServices.Application.DocumentManager.MdiActiveDocument;
object oAcadDoc = doc.AcadDocument;
oAcadDoc.GetType().InvokeMember("StartUndoGroup", BindingFlags.InvokeMethod, null, oAcadDoc, null);
Posted by: Tony Tanzillo | September 16, 2008 at 06:13 PM
Oops...
Sorry, in that example, the method name should have been "StartUndoMark", not "StartUndoGroup" :o
Posted by: Tony Tanzillo | September 16, 2008 at 06:17 PM
Hi Kean...
Regarding the need to reference the AutoCAD COM interop assemblies, Massimo makes a very legitimate point.
Becuase the COM interop assemblies are version-specific/dependent, it's wise to avoid referencing them if possible.
Regards,
Posted by: Tony Tanzillo | September 16, 2008 at 06:22 PM
Fair enough - I hadn't been thinking about that aspect, admittedly.
Kean
P.S. We probably should have called in StartUndoGroup - that's much more consistent. :-)
Posted by: Kean | September 16, 2008 at 11:52 PM
Thank you all,
Tony, late binding is certainly a solution but isn't my favourite... in the first place for performance, and it's a nightmare to code 'on the blind'... (well 20 years ago was the only way to code, but I got used to Intellisense... I'm getting lazy...)
I'm targeting multiple platforms from 2007 and above, both 32 and 64 bit, .NET was a natural choice and I hoped COM objects and methods were all exposed.
It's reasonable to think that future versions will expose more objects and methods?
Posted by: Massimo Cicognani | September 17, 2008 at 10:44 AM
If you're an ADN member you can submit the enhancement request via the ADN website.
Regards,
Kean
Posted by: Kean | September 17, 2008 at 12:49 PM
"late binding is certainly a solution but isn't my favourite... in the first place for performance, and it's a nightmare to code 'on the blind'..."
If you're calling COM methods that often then you might have a point, but for occasional calls to COM methods when there is no managed equivalent (which is the case here), the performance hit of late binding is of little consequence.
The COM api's are themselves implemented in ObjectARX which means that you already have managed APIs to do most of what the COM apis do, without having to depend on COM interop.
In this case, since you are issuing commands to get something done, you can just as easily use the UNDO/Begin and Undo/End subcommands to achieve the same thing.
Posted by: Tony Tanzillo | September 17, 2008 at 05:50 PM
Hi Kean,
First I would like to agree with Massimo regarding the use of version dependant libraries....I am personally unfamiliar with the process of late binding, so some things are proving difficult to avoid. I also loathe the practice of sending strings to the command line, since there are a number of ways user settings can impact the command line operation (for instance EXPERT).
That said I was going to suggest that in your post when you mention you could not suppress the 'handent' return value from the command line, if you call (princ) at the end of the lisp it should be quiet.
Posted by: David Osborne | September 17, 2008 at 08:18 PM
Thanks - I originally did try (princ) (I had the same instinct as you), but that stopped the results of (handent) being picked up by the command.
Kean
Posted by: Kean | September 17, 2008 at 10:51 PM
I certainly don't want to give the impression that I think you don't know your stuff, but I thought about it again (and looked closer at the command string) and noticed that only the handent function is actually a lisp command that you are calling to use the return val as an argument to the -diminspect command. Therefore obviously you can't just stick a (princ) in the middle of the command, so my next idea is, did you try using (vl-cmdf ...) to execute the entire command string as a lisp command, and put the (princ) statement after the vl-cmdf statement?
Posted by: David Osborne | September 19, 2008 at 01:12 AM
That's certainly an option - even using the standard (command) function would probably do it.
It just depends on how important that last piece of echoing is to you - I'd tend not to introduce the additional overhead of sending a string containing (command) unless it was really worth it.
And that will depend on your users, of course.
Kean
Posted by: Kean | September 19, 2008 at 05:51 PM
The reason I would use (vl-cmdf) is that it evaluates all of the arguments prior to executing the command, and the (command) method always returns 'nil' but (vl-cmdf) doesn't.
Posted by: David Osborne | September 22, 2008 at 09:14 PM | http://through-the-interface.typepad.com/through_the_interface/2008/09/more-quiet-comm.html | crawl-002 | refinedweb | 1,842 | 51.89 |
Opened 7 years ago
Closed 4 years ago
Last modified 4 years ago
#14043 closed Bug (fixed)
Incorrect and/or confusing behaviour with nullable OneToOneField
Description
Attempting to "null" out a nullable OneToOneField before deleting the related object fails to prevent a cascading delete (i.e., both objects are still deleted whereas it seems only the related object ought to be deleted).
Example code:
# Note: using Django trunk ###### MODELS ###### class Person(models.Model): age = models.PositiveIntegerField() def die(self): self.soul.become_ghost() self.delete() class Soul(models.Model): person = models.OneToOneField(Person, null=True) is_alive = models.BooleanField(default=True) def become_ghost(self): self.person = None self.is_alive = False self.save() ###### TESTCASE (INTERACTIVE) ###### # Type a few commands in "python manage.py shell" >>> from app.models import Person, Soul >>> >>> bob = Person.objects.create(age=34) >>> bobs_soul = Soul.objects.create(person=bob) # Let's see what's happening in MySQL (switching programs...) mysql> select * from app_person; +----+-----+ | id | age | +----+-----+ | 2 | 34 | +----+-----+ 1 row in set (0.00 sec) mysql> select * from app_soul; +----+-----------+----------+ | id | person_id | is_alive | +----+-----------+----------+ | 2 | 2 | 1 | +----+-----------+----------+ 1 row in set (0.00 sec) # Okay, that looks good; let's kill him (switching programs again...) >>> bob.die() # Back to MySQL mysql> select * from app_person; Empty set (0.00 sec) mysql> select * from app_soul; Empty set (0.00 sec) ### Huh!??! Why is app_soul being deleted?
Attachments (1)
Change History (17)
comment:1 follow-up: 5 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
Ugh, selecting "accept ticket" makes me the ticket owner. What's the way to say "I confirm the ticket is legit" without marking me as responsible for it ?
comment:4 Changed 6 years ago by
@gsakkis - You use the triage stage, not the action. Confusing, I know... :-)
comment:5 Changed 6 years ago by
Why do an update if it is going to be deleted right after ?
Ugh, scratch that, I missed the
self.save() in
become_ghost(), that's where the UPDATE comes from; nothing to do with
self.delete().
Back to the topic, it comes down to Django's indiscriminate ON DELETE CASCADE behavior, regardless of whether the ForeignKey/OneToOneField is nullable. IMO it would make more sense to treat nullable keys as ON DELETE SET NULL but that would most likely be backwards incompatible at this point. Changing to design decision needed, just in case.
comment:6 Changed 6 years ago by
Changed 6 years ago by
Moved tests from select_related_onetoone to one_to_one_regress
comment:7 Changed 6 years ago by
comment:8 Changed 6 years ago by
Looks like this is solved here. I'm adding a little more information for anyone who is trying to work around this in a pre-1.3 version of Django. The following modification fixed a similar problem on my model.
class Person(models.Model): age = models.PositiveIntegerField() def die(self): self.soul.become_ghost() # Update self here, so that self.soul is unavailalbe in the object passed to delete() # If self is left alone, then self.soul points to a python object representing a relationship # that was deleted from the database in the call to become_ghost(). self = Person.objects.get(pk=self.pk) # OR, the following to work more generally: # self = self.__class__.objects.get(pk=self.pk) self.delete() class Soul(models.Model): person = models.OneToOneField(Person, null=True) is_alive = models.BooleanField(default=True) def become_ghost(self): self.person = None self.is_alive = False self.save()
I'd suggest it as a documentation change for earlier versions, but it's so hack-y.
comment:9 Changed 6 years ago by
The provided test case doesn't validate the problem described. If you revert the change to related.py, the test case doesn't fail.
comment:10 Changed 6 years ago by
comment:11 Changed 5 years ago by
Milestone 1.3 deleted
comment:11 Changed 5 years ago by
Change UI/UX from NULL to False.
comment:12 Changed 5 years ago by
Change Easy pickings from NULL to False.
comment:13 Changed 4 years ago by
I can confirm that this does still happen in master. I tested with this in one_to_one_regress:
def test_nullable_o2o_delete(self): u = UndergroundBar.objects.create(place=self.p1) self.p1.delete() self.assertTrue(UndergroundBar.objects.filter(pk=u.pk).exists()) self.assertIsNone(UndergroundBar.objects.get(pk=u.pk).place)
The attached patch isn't correct at all, it is explicitly breaking the link before delete and then confirming that delete doesn't cascade after that. That isn't the problem, the problem is that OneToOneField that is nullable should be set to None on delete, not cascaded.
Confirmed here too, it's definitely non obvious, if not outright a bug. Also the SQL being issued on
bob.delete()hints that something is fishy:
Why do an update if it is going to be deleted right after ? | https://code.djangoproject.com/ticket/14043 | CC-MAIN-2017-13 | refinedweb | 809 | 60.61 |
WEAtHER FORECAST.
Fair to-day ' and to-morrow; gentle
south winds.
Highest temperature yesterday, 40; lowest, 30,
Details of the weather report will be found on the editorial page.
A HAPPY BLENDING.
The amalgamated SUN AND HERALD
preserves the best traditions of each.
In combination they cover a wide field
and make a greater newspaper than
either has ever been on its own.
AND THE NEW YORK HERALD
VOL. LXXXVII. NO. 165-DAILY.
NEW YORK, THURSDAY, FEBRUARY 12, 1920.-
Copyright, 1920, by The Sun-Herald Corporation.
Entered as second class matter, Post Office, New York, N. Y.
PRICE TWO CENTS
IN NEW YORK CITY AND SUBURBS.
THREE CENTS
ON TRAINS AND ELSEWHERE.
CURB DEMANDS
PREMIER TELLS
I BRITISH LABOR
Lloyd George Gives Warn-
ing Nation Will Fight
for Its Liberty.
HITS NATIONALIZATION
Plan Failed in Russia, He
Says, Answering Attack
in Commons.
MINERS SEEK CONTROL
Plan Is Menace to Country, He
Declares; Prohibition Up
in Both Houses.
London. Feb. 11. Lloyd George in
the Home of Commona to-day, in tho
course of a debate on the labor amend
ment criticising the Government's re-
fusal to accept tho miners' proposal
for the nationalization of the coal
mines of England, declared that if any
ittempt was made by labor to put
pressure on tho country by violence,
wch action would bo a challenge to
the whole fabric of freo government.
"The nation lias ever fought for lib
erty and will fight for it again," the
Premier exclaimed.
William Brace, president of the
South Wales Miners Federation, be-
tin the debate by moving an amend
ment expressing regret for "the ab
sence in the King's speech of any
proposal to nationalize the coal mines
of the country along lines recom
mended by the majority of the mem
bers of the royal commission on tho
coal Industry, which was appointed to
advise the Government as tojtho best
methods of reorganizing tho Indus
fry" lllnrrs Disappointed. He Says.
Mr. Brace contended that the min
ers had been led to suppose that the
Government would accept the recom
mendations of the majority of the coal
commission. He declared that n,Uon
alliation would not mean bureaucratic
control. Tho Government might delay
nationalization, but, he predicted, it
could not prevent nationalization
coming.
In outlining his scheme for national
ization, Mr. Brace said there would be
a committee to manage each pit, and
a committee for each of the fourteen
districts Into which Great Britain would
be divided. Finally there would be a
body with a president of mines as
chairman, to supervise all the coal fields
of the country. The miners, the offi
cials and the general public would be
represented, and each would be in the
minority.
Mr. Brace declared that his plan was
not one of confiscation but of fair pur
chase, The Government would give the
shareholders bonds for their present
shares. He asked that a tribunal be
established to fix a fair price for such
shares, and he would favor even a gen-
erous price.
Premier Lloyd George, In replying,
argued that it would be impossible to
have nationalization without bureau-
cracy.
Premier Quotes Trotzky.
It would be baseless, he said, to es
tablish another system unless Mr. Brace
was able to prove that it would be
better than the existing system. He
declared there was no guarantee that
under the plan proposed by the mem
ber the present output would be In
creased. The Premier ridiculed Mr.
Brace's idea that the miners would work
harder for the State than for private
Interests.
Mr. Lloyd George created something
of a scene by quoting from Leon Trotzky
to show that the Bolshevik experiment
of nationalization in Russia had failed
and that the Bolshevlkl had been obliged
to resort to conscription of labor. This
brought forth excited shouts of "Thanks
t your fighting r
Mr. Brace's scheme, the Premier con
tended, would discourage the develop
ment of the mining Industry, while it
was impossible to eliminate the specu-
lative Incentive except by confiscation,
and that was a dangerous game, to be
sure. The Premier argued that what
the Miners Federation really wanted
was full control of the coal industries,
and that to hand it over thus would
be disastrous to the community and a
misfortune to the miners themselves.
Referring to the address of William
Lunn, a Labor member, demanding the
nationalization of all industries, the
Premier declared that if any attempt
were made to convince the country by
violence it would be a challenge to the
whole fabric of free government. On
such an issue, declared the Premier, "we
will fight him to the death."
Will Fight Soviet Doctrine.
Such action, declared Mr. Lloyd
George, would not be a strike for wages
or betterment of conditions of labor,
but for the establishment of a Soviet,
and that would mean the end of con-
stitutional government.
"This nation has ever fought for lib-
erty and will fight for it again," Mr.
Lloyd George declared.
The drink question was briefly dis-
cussed in both Houses of Parliament.
Earl Curzon told the Lords that the
bill on this subject to be introduced
would contain provision for shorter
hours of sale. The experiment of State
management certainly would not be
stopped, he said.
In the House of Commons Sir Donald
Maclean (Liberal) said: "The fact that
America has gone dry is an economic
fact of the gravest importance to Great
Britain." He declared the British ex-
penditure for drink absolutely staggered
him. The country spent more than
£1,000,000 for drink in 1914, he said.
Continued on Fourth Page.
KOLCHAK KILLED BY
HIS OWN TROOPS TO
PREVENT HIS RESCUE
Soviet Appeal That,' His Life
Be Spared Is Received
Too Late.
"HOISTED ON BAYONETS"
Anti-Bolshevik Leader in Si-
beria Had Picturesque Ca-
reer in Russian Navy.
London, Feb. 11. Admiral Kolchak
was put to death by his own troops to
prevent his rescue by whlto troops
.moving in the direction of Irkutsk for
that purpose, according to a Copen
hagen despatch to the Herald, a labor
newspaper. The Moscow Soviet sent
a wireless message asking his captors
to spare his life, but the appeal was
too late.
The Moscow wireless service on Jan
uary 31 transmitted an extract from
an article from the official Bolshevik
organ Pravda, which said: "Only a few
days ago Supreme Ruler Kolchak was
hoisted on his soldiers' bayonets."
For a year Admiral Kolchak, as head
of the All Russian Government, had
loomed larger In Russian affairs than
any other Individual. As the principal
foe of the Bolshevlkl In the east, his
campaigns were watched with great In
terest. There are many who doubted
the sincerity of Admiral Kolchak's dem
ocratic protestations. The failure of his
government last month marked the end
of an Ineffectual struggle for a year
by the Siberian army against the forces
of the Soviet Government.
For many months during their retreat
the Kolchak army offered practically no
resistance. At Omsk 40,000 troops sur
rendered without firing a shot and vast
quantities of war material supplied by
the British were lost. Without adequate
organization from the beginning, with
incompetent and Ignorant, sometimes
traitorous staffs, Kolchak's military
regime has been regarded as impotent
since the tide turned strongly against
him in the late spring of 1919.
Kolchak was referred to repeatedly as
having reactionary tendencies. Certain
It Is that among his followers were
TO REORGANIZE
P. R. R. SYSTEM
Will Be Divided Into Four Re-
gions When Returned
to Owners.
VICE-PRESIDENT IN EACH
Eastern Division Will Extend
From New York to Altoona
and Washington.
Philadelphia, Feb. 11. Radical
changes in the operation of the Penn
sylvania Railroad system with a re
organization affecting many of the
higher officers, was announced to
night by Samuel Rea, president, to be
come effective when tho railroads are
turned back to their private owners.
The system will be divided into four
regions eastern, central, northwest
ern and southwestern with each in
charge of a vice-president. The re
spective headquarters will be at Phil-
adelphia, Pittsburg, Chicago and St.
Louis.
The separation In organization that
has existed since 1870 between the lines
east and west of Pittsburg Is to be aban
doned, the announcement said, and the
system will become a unit in all that
concerns its service to the public. In-
stead of having a dividing line as at pres-
ent at Pittsburg, one of the busiest rail-
road centres In the country, the whole
territory between Altoona; Pa., on the
east, Buffalo on the north and Columbus
and Crestline. Ohio, on the west, will
comprise the central region.
The eastern region will extend from
New York to Altoona and to Washing
ton on the south. The northwestern
region will extend from Columbus and
Crestline to Chicago, and the southwest-
ern will be bounded roughly by Colum
bus, Cincinnati and St. Louis.
BLOCKS MELTING OF
BRITISH SILVER COIN
Chamberlain Bill Reduces
Standard of Fineness.
Special Cable, Copyright, 1920, by The Sun
and New York Herald.
London, Feb. 11. Austen Chamber-
lain, Chancellor of the Exchequer, an-
nounced to-day that he would intro
duce a bill in the House of Commons
reducing the standard of fineness of the
silver coins of the United Kingdom.
By this means he proposed to prevent
the melting down of silver coins to ob-
tain silver, the present high price of
which Is responsible for a great disap
pearance of coins.
WORLD BARTER TO
SAVE DUTCH GOLD
Direct Exchange of
Goods
Plan of Banks.
The Hague, Feb. 11. According to
the Nieuwe Courant, the Netherlands
Bank and other great Dutch financial
interests are planning an international
exchange of goods in Amsterdam, with the
object of relieving the necessity for the
use of gold.
Direct exchange of goods will be
made, and it is hoped in this way to aid
in the resuscitation of European financial
and commercial ability.
ADMIRAL KOLCHAK.
many who would have welcomed the
return of the old monarchy. He failed
of tho support of many of the conser
vatives because It was asserted con
stantly that representative government
would be impossible under his leader-
ship.
Kolchak was born In 1874. He first
gained his reputation for courage in the
defence of Port Arthur during the
Russo-Japanese war. For his bravery
then the Russian Government bestowed
upon him a sword of honor. When
at the end of the war his forces sur
rendered the Japanese out of esteem
for his bravery did not take away
his sword. In 1917, while in command
of the Black Sea fleet, to which post
ho had been promoted because of his
defence of the Gulf of Riga when the
German fleet tried to force entrance,
the sailors mutinied and demanded that
all officers surrender their swords. Kol-
chak refused and flung his sword into
the sea. When the sailors learned the
history of the sword of honor they sent
divers after it and it was found, as
the water was not deep. The sailors
returned it to the Admiral the next day
with apologies.
NAVY TO BUILD
GIANT AIRSHIP
Dirigible to Be Largest in the
World and to Use Helium,
Non-inflammable Gas.
NEW GUN DEVELOPED
Capt. Thomas T. Craven Urges
$2,500,000 Appropriation
for Craft.
Special to The Sun and New York Herald.
Washington, Feb. 11. The largest
dirigible in the world will be built by
the United States Navy if Congress
grants an appropriation of $2,500,000
asked to-day of the House Naval
Committee by Capt. Thomas T.
Craven, Director of Naval Aviation.
The proposed dreadnought of the
air will be 694 feet long, 50 feet longer
than the airship being built for the
United States Navy in Great Britain.
The one being built overseas is the
same size as the largest in the British
navy.
In urging the appropriation, Capt.
Craven discussed the future of aerial
warfare as a complement of fleet opera
tions. The ship will carry more arma
ment than any similar craft now In
contemplation by any country. It will
use helium, the non-inflammable gas.
A new aircraft gun being developed by
the navy, a small cannon, will bo the
main weapon of the craft, which also
will mount a number of machine guns.
"The big ship' now being built will
be completed late this summer," said
Capt. Craven. "Crews are being
trained now to fly this ship across the
Atlantic next fall. The larger ship
that we have planned will be built in
this country after the other ahlp has
arrived from England, and Its construc
tion will require at least a year. The
proposed dirigible will require about
2,700,000 cubic feet of gas, and it is
estimated that about $800,000 will be re-
quired for its annual maintenance. The
outer cloth covering must be renewed
each year."
Capt. Craven also told the committee
the Department plans a large dirigible
base at Pensacola, Fla., where hangars
will be built to house these ships. Army
hangars probably will have to be used
until new facilities to care for the big
airships can be built.
"The Department hopes to continue
nine naval air stations, including a new
one at Hawaii," Capt. Craven said.
"These will be at Chatham, Mass.; Rock-
away Beach, L. I.; Cape May, N. J.;
Anacostia, D. C.; Charleston, S. C.;
Pensacola, Fla.; San Diego, Cal.; Pan-
ama and Hawaii. An air station is
also planned on the southern tip of the
Florida peninsula to replace the sta-
tions at Miami and Key West, which
are to be abandoned."
Beat Strikebreaker, Gets 3 Years.
For attacking a man who went to
work in a Bronx millinery establish-
ment while a strike was in progress
Herman Uffer, 32, was sentenced yes-
terday to serve three years and seven
months In Jail. He was sentenced by
Judge Louis D. Glbbs, In the Bronx
County Court, where he was convicted
last November.
SESSION
HOLES
IN THE 14 POINTS
Paris Paper Reveals Facts
of Meeting in Pichon's Of-
fice Nov. 3, 1918.
COL. HOUSE IN A HOLE
"Open Covenants Openly
Arrived at" Didn't Mean
Public Negotiations.
THREAT BY CLEMENCEAU
Lloyd Gcorgo Made Reserva
tions in Regard to Free
dom of tho Seas.
Special Cable, Copyright, 1920, by The Sun
and New York Herald.
Paris, Feb. 11. Several leaves from
tho official records of secret sessions
of the allied Premiers held In the
rooms of Stephen Pichon, then
Foreign Minister, in the early days of
the conference, have Just come to
light. Incidentally, they show how
the European Powers accepted Presi
dent Wilson's fourteen points.
The Echo de Paris prints these
notes, which undoubtedly were taken
from the archives of the Quai d'Orsay
and published at the inspiration of the
French Foreign Office as a rebuttal
of President Wilson's remarks to Sen
ator Hitchcock (Neb.), Administra
tion spokesman, regarding Article X.
The publication of the document
stresses particularly tho freedom of
action reserved by tho allied Premiers
In accepting the fourteen points.
The scene of the discussion was Mr.
Pichon's study in the Quai d'Orsay.
The date, November 3, 1918, soon after
the armistice terms had been fixed by
the allied Premiers, members of the
Versailles Council and Colonel E. M.
House. Lloyd George in addressing
Colonel House said:
"If we understand the thought of
President Wilson in the armistice nego-
tiations which the American Government
is ready to engage In with Germany In
concert with tho allied Powers, they are
subordinated to acceptance by the said
Powers of the principles and conditions
laid down by the President, of the
United States on January 8, 1918, and'
later in his speeches. Briefly, we
must give our assent to his fourteen
articles."
House Agrees with Lloyd George.
Col. House replied that it was ex-
actly as the British Premier had stated.
Whereupon Premier Clemenceau inter-
vened:
"As to the fourteen points, I have not
yet read them. What are they? Let
them be made known to us."
The reading of the fourteen points be
gan. "Open covenants, openly arrived
at"
Premier Clemenceau arose, exclaim-
ing: "Look here, this is not acceptable. We
cannot negotiate in a public square."
Arthur J. Balfour, British Foreign
Secretary, Intervened, explaining that It
was not a question of revealing day by
day the details of the negotiations, but
only that the alms and results of the
negotiations ought to be published. The
rest, he said, was a question for di
plomacy and should be left to the chan-
celleries. "Then all my objections are with
drawn," said the "Tiger," again taking
his seat
Reading of the points was resumed.
Article II., relative to the freedom of
the seas, was barely finished when
Premier Lloyd George was on his feet
protesting :
"The Germans have abused this ques-
tion of the freedom of the seas to such
an extent that before making any en-
gagements whatever we demand to
know exactly what is meant by 'freedom
of the seas.'"
Article III. Disappears.
Article III. disappeared like a mist.
It meant that the signatories would be
deprived of the faculty of concluding
treaties of commerce, customs union,
&c. The future status of colonies and
of disarmament was passed over. Points
touching on reparation seven, eight
and eleven elicited new reservations
by the Allies. Germany was given to
understand that she must not only re-
store the territory she had invaded and
destroyed, but must indemnify the pop-
ulation of those territories for their
losses.
Suddenly Premier Clemenceau turned
to Col. House, saying:
"In case we reject these fourteen
points, what would happen?"
The President's spokesman answered:
"The President will consider as ter
minated the conversations which he has
been engaged in with the Allies on the
subject of tho armistice."
"Will he consider as terminated also
the conversations begun with Germany
at the end of October?" asked the
"Tiger."
Col. House replied:
"I cannot give you any assurance on
this point."
The climax was reached. Premier
Clemenceau interrupted:
"Adopted."
Hardly had he spoken, however, before
Premier Lloyd George was on his feet
"We reserve for ourselves the liberty
of formulating reservations regarding
the freedom of the seas and repara-
tions," he said.
Thus the fourteen points passed Into
history and the meeting adjourned.
HALIFAX BOYCOTTS U. S. FOOD.
War Veterans Take Action Be-
cause of Exchange Rate.
Halifax, N. S., Feb. 11. Members of
the Halifax Army and Navy Veterans
Association to-day unanimously passed
a resolution pledging themselves "in-
dividually and as a unit, to purchase as
little as possible of goods manufactured
in the United States, nor of food pro-
duced in the United States," because of
the price of the Canadian dollar in that
country.
LODGE OFFERS
REVISED DRAFT
ON ARTICLE X.
Submits Proposal on Which
There Is Possibility of
Two-thirds Vote.
HITCHCOCK IS IN DOUBT
Asserts New Reservation Is
Not a Compromise but a
Surrender.
OTHERS MORE HOPEFUL
Changes Agreed To in Bipar
tisan Conference Formally
Offered in Senate.
Special to The Sun and New York Herald.
Washington, Feb. 11. Real prog
ress toward ratification of the peace
treaty with Americanizing reserva
tions was made to-day, when Senator
Lodge (Mass.), Republican leader,
gave his approval tentatively to a new
compromise reservation to Artlclo X.
of the covenant of the League of Na
tions. The new reservation was drafted by
Senator Lenroot (Wis.), representing
the Republican mild reservationists,
and later was changed slightly to con
form to suggestions from several Dem
ocratic Senators. It has not received
yet the approval of Senator Hitch-
cock (Neb.). Presumably Mr. Hitch-
cock is waiting for some further word
from the President, upon whom all
eyes now are turned.
Senator Lodge said he regarded the
reservation as virtually the same in
substance and principle as his original
reservation to Article X., which de-
nied the obligation of America to
protect the territorial integrity and
political independence of the other
league members. For that reason he
expressed a willingness to show It to
other Republican Senators with a view
of learning how many votes can bo ob
tained for It.
If it appears this new reservation
can command 64 votes, the two-thirds
necessary for ratification of a treaty,
Mr. Lodge probably will offer it him
self on tho floor of the Senate next
week. Such action on tho part of the
Republican leader would mean that
ratification was all but certain, because
ho would not take that step without
assurances that ho would bo sup
ported not only by tho bulk of his own
party but by enough Democrats as
well to force through the treaty with
all of tho other Lodge reservations
virtually Intact.
Text of Reservations.
The text of the new reservation was
in circulation among Senators of all fac
tions all day. It read :
The United States assumes no ob
ligations to preserve by the use of
Its military or naval force or by
economic boycott or by any other
means tho territorial integrity or po
litical Independence of any country
or to interfere In controversies be
tween nations whether members of
tho league or not under the pro
visions of Article X., or to employ the
military or naval force of the United
States under any article ft the treaty
for any purpose unless In any par
ticular case the Congress, which un
der the Constitution has the sole
power to declare war, shall by act
or joint resolution so provide.
The Lodge reservation on Article X as
voted upon November 19 read:
3. The United States assumes no ob-
ligation to preserve the territorial in-
tegrity or political independence of any
other country or to interfere in con
troversies between nations whether
members of the league or not under
the provisions of Article X., or to em-
ploy the military or naval forces of the
United States under any article of the
treaty for any purpose unless in any
particular case the Congress which,
Continued on Second Page.
HINES REFUSES RAIL PLEA;
UNIONS APPEAL TO WILSON;
STRIKE UNLIKELY AT ONCE
SEE R. R. BILL IN
SENATE IN WEEK
G. O. P. Leaders Expect House
to Approve Conferees' Report
by Next Wednesday.
OPPOSITION NOT FEARED
Barkley and Sims Will Fight
Measure; Many Democrats
for Speedy Passage.
Special to The Sun and New York Herald.
Washington, Feb. 11. Republican
House leaders were certain to-day
that Democratic opposition to the con
ference report on the railroad bill can
not block or even delay appreciably
final agreement on tho bill, which is
considered so necessary before the
roads are returned to tho owners on
March 1.
Republican Leader Mondell (Wyo.)
and Representative Esch (Wis.),
chairman of tho House Interstate
Commerce Committee, said the report
will be ready for House action by Mon-
day at the latest, possibly by Satur-
day. The prediction was made that the
House will approve tho conference re
port by Wednesday night, thus leav
ing eleven days within which the Sen
ate can act
Many Democrats will not support Rep
resentative Sims (Tenn.) and Repre-
sentative Barkley (Ky.), the Democratic
conferees; Representative Kitchin (N.
C.) and other minority leaders In their
opposition to the measure. This was
stated openly by Representative Dewalt
(Pa.) and Representative Rayburn
(Tex.), both Democratic members of the
Interstate Commerce Committee.
"I believe the conference report will
be agreed upon, although there are
several features of It which I cannot ap
prove," said Mr. Dewalt. "I intend to
vote for it because I believe the country
demands that nothing shall be put in the
way of the return of the roads to their
owners. I believe many of the Demo
crats will take the position of a hungry
man. If he can't get the whole loaf he
doesn't refuse part of It."
The break of this faction from the
Sims-Kitchin leadership seems sufficient,
in combination with Republicans, to pre-
vent long delays. Representative Mon-
dell said to-day he was certain virtually
all the Republicans favored immediate
action.
Mr. Sims in explaining to-day his ob-
jections to the conference report de-
clared he believed Section 6, providing
for the guarantee of a return of 6
per cent., will be adjudged unconstitu-
tional.
"What I expect to see is that the
stronger roads will refuse to turn over
any of their excess earnings above 6
per cent. and enjoin the operation of the
section on the ground that it is uncon-
stitutional," he said. "This litigation
no doubt will cover the entire two years
during which the guarantee is to be in
effect. Therefore the Government will
be prevented from loaning the excess
earnings to the weaker roads during the
period when they will need credit the
most.
Mr. Barkley said to-day that he will
join Mr. Sims in refusing to sign the
conference report, because of the guar-
antee provisions, and will oppose final
agreement on the floor.
WILSON DEFINITELY
OUT OF 1920 RACE
That Is Prevailing Opinion in
Official Circles.
Special to The Sun and New York Herald.
Washington, Feb. 11. That Presi-
dent Wilson definitely is not a possibility
for a third term seemed to be the pre-
vailing opinion in Administration circles
to-day following the publication of the
interview with Dr. Hugh H. Young of
Baltimore regarding Mr. Wilson's physi-
cal condition. Even if there was nothing
else to prevent it, Mr. Wilson's health
would make his candidacy impossible.
Up to the present Mr. Wilson always
has been In the consideration as a pos
sible nominee. No one knew precisely
what condition he was In and whether
his ailment, persistently described as a
"complete nervous breakdown," might
at the last minute permit him to enter
the contest to make the fight for the
peace treaty as It stands.
Unquestionably Mr. Wilson would be
in a stronger position in the political
sense If he refrained from making any
expression, for the present at least re
garding the San Francisco nominee.
More than any other man he has the
power in the Democratic party to make
or break any candidate who la not ac
ceptable to him at this time. Silence on
the part of the White House will be a
restraining factor working to Mr. Wil
son's advantage for the present
DRASTIC LEVY MADE
IN NEW GERMAN TAX
Fortunes and Boosted Capital
Heavily Assessed.
Special Cable, Copyright, 1920, by The Sun
and New York Herald.
London, Feb. 11. Details of the new
tax by means of which Germany hopes
to balance her budget and to cut down
the issue of paper money were published
here to-day by the Economic Review.
The most drastic levy Imposes a tax
of from 10 to 0 per qent on fortunes
of from 5.000 marks to 2,000,000 marks.
Fortunes exceeding 2,000,000 marks will
be taxed 03 per cent A tax also Is
Imposed on capital Increased during the
war. beelnnlnit with 10 per cent on
capital amounting to more than 10,000
marks and rising to 60 per cent, on
capital amounting to 200,000 marks.
A supplementary tax levies irom o to
7 per cent tax on Increased dividends,
interest, shores, profits and real estate.
There la a tax on sales and an addltlonl
levy on luxuries. A 6 per cent tax is
levied on exports, ana a levy raaae on
import ad tokmoso, clra aaa evamte.
Prices of Food Take
Big Tumble in Chicago

Special Despatch to The Sun and New York Herald.

Chicago, Feb. 11. Food prices dropped with a bang to-day. Eggs, for instance, fresh from the country, candled and sorted, sold to-day to the retailer for 56 cents. The Fair Price Commission allows the retailer 7 cents profit, although a majority of retailers are satisfied with 3 to 6 cents. That makes strictly fresh eggs to-day 59 to 62 cents a dozen. Recently eggs were wholesaling at 92 cents and retailing at $1 or more.

Butter, which sold to-day at 61 cents for 93 score product, 66 to 70 cents at retail, was selling to the retailer in December at 75 cents.

Potatoes, wholesaling at $4.65 to $4.85 for 100 pounds, were wholesaling two weeks ago for $5.25 to $5.75. The retailer is allowed no more than one cent a pound profit.

Bakers' flour dropped another 25 cents a barrel to-day, making a total decline of 50 cents in a week.
CLOTHING PRICE
CUT FAR AWAY

Dealers See No Chance of Re-
duction for Another Year
at Least.

HIGH COST STOCKS A BAR

Wholesaler Tells Convention
of Campaign to Show Pub-
lic Where Blame Lies.

Any chance that the price of clothing might be reduced to the consumer within the next year was dispelled yesterday by the announcement by the largest retail clothing manufacturers and dealers in the State of New York to the effect that the present level of prices would continue to be charged until the late fall, at least, and probably until this time next year.

Four hundred men and women, representing the clothing industry throughout the State, attended the fourth annual convention of the Retail Clothiers' Association in an all day session yesterday at the McAlpin Hotel, and chief among the subjects up for discussion was that of the possibility of a lowering of prices for materials and garments of all kinds in the near future.

It was the opinion of all dealers who spoke and of many who were interviewed that because of the fact that dealers had purchased large stocks of goods at the prevailing high wholesale prices they would be forced to dispose of these goods at the prevailing high retail prices in order to save themselves from heavy losses.

Ludwig Stein, president of the National Wholesale Clothiers Association, speaking before the convention, said that the prices could not by any chance be lowered within a year, and added that in a few weeks his organization expected to start a campaign, spending more than $60,000 in newspaper publicity, in an attempt to teach the public that it is not the wholesale or retail men who are responsible for the prices.

"More production," he declared, "and harder work and a desire to wear more moderately priced materials are the only things that can cause reductions of prices."

Among the speakers at the convention yesterday were Nathan Lemlein, formerly president of the Retail Clothiers Association; Gordon L. Stephens; Francis M. Hugo, Secretary of State; Mark Eisner, formerly Collector of Internal Revenue; Larry Schiff and Francis J. Best, advertising director of Franklin, Simon & Co.
LONGER WORK HOURS
DEMANDED IN BERLIN

Employers in Metal Trades
Want Bigger Output.

Berlin, Feb. 11. The Arbitration Board, to which the employers and employees of the Greater Berlin metal trades referred the issue of working hours, has decided upon a weekly schedule of 48 hours' actual working time. Both parties are bound to the board's verdict.

After the employers had agreed to the wage increase and special allowances they demanded that the workers contribute an increased output by extending their working hours.
FRENCH POLICY IN
SYRIA UNCHANGED

Millerand Promises to Follow
Clemenceau's Views.

Paris, Feb. 11. The policy of the Clemenceau Government with regard to Syria and the Near East will be followed by the present Ministry. This was made plain by Premier Millerand when he addressed the Foreign Affairs Committee of the Chamber of Deputies before leaving for London.

"If France intervenes in Syria it will be because she will be called there by the Syrians or to defend secular rights," he said.
President Is Expected to
Back Decision of Administrator.

BOTH SIDES HOPEFUL

Workers Will Continue to
Fight After Lines Are
Returned.

ALL UP TO WILSON, LEE SAYS

Promises Have Not Been
Kept; Must Be Made
Good Now.

Special to The Sun and New York Herald.

Washington, Feb. 11. Director-General Hines has definitely turned down the wage and other demands presented to the Railroad Administration by the 2,000,000 organized railroad workers of the country.

Representatives of the workers have, in effect, taken an appeal from this decision to the President, who, under Federal control, is the directing officer and the court of last resort. The action of the President cannot be forecast positively, as the papers in the case and the Director-General's recommendations did not go to him to-night. It is considered probable, however, that he will support and approve the position taken by Director-General Hines.

Pending a decision by the President there is no likelihood of a strike. In fact, representatives of railroad labor here stated emphatically to-night that strike talk should be discounted. It was declared that the railroad men and their organizations were patriotic now as well as during the war, and would not jeopardize the existing situation. It is evident that if the President fails to meet the demands of the railroad men or to give them the relief they claim is imperative, the fight will be carried to private control under the provisions of the pending railroad bill for the settlement of such matters.

Responsibility on Wilson.

Responsibility for the present situation was placed squarely up to the President by W. G. Lee, head of the railway trainmen. The trainmen's organization is the only one which has served notice of abrogation of agreement with the railroad administration. The notice was given January 23, before the termination of government control.

"We know we have been discriminated against," Lee said. "Relief was promised to us in August, and we have had no relief. The cost of living has not been brought down, though we waited patiently. We feel that the President ought to make good, and the responsibility is on the President. Director-General Hines's statement is accurate and complete. Strike talk should be cut out. We are Americans and patriotic, and have always supported the country."

The only official statement issued was that of Director-General Hines. It follows:

Since February 3 the Director-General has had frequent conferences with the chief executives of the railroad labor organizations for the purpose of devising means for disposing of the pending claims for wage increases. During these conferences the executives of the labor organizations have expressed their views with great ability and frankness. The Director-General has not been able to agree with them as to how the problem should be disposed of in view of the early termination of Federal control, and is now laying before the President the representations of the executives of the organizations and also his own report for the purpose of obtaining the President's decision in the premises. In any event the conferences have been decidedly helpful in bringing out a clearer development as to the real issues involved and as to the character of evidence pertinent to those issues, and the discussion throughout has been characterized by courtesy as well as candor, and with a sincere purpose on the part of all to try to find a solution.

All Demands Rejected.

Mr. Hines, it is understood, rejected the men's demands in their entirety, predicated on the fact that Federal control ends in a few weeks and the questions involved should be settled by the owners of the roads, who must operate them under any decisions reached. The executives of the various lines involved were not represented and had no voice in the conference.

Another important consideration was lack of time in which to make effective any agreements reached, and it was brought out that the pending railroad bill provides for meeting the situation as it stands. The bill provides for regional boards of adjustment for the railroads, to be set up by the Interstate Commerce Commission and to hear and settle all claims and other matters relating to wages, hours and working conditions on the railroads, with an appeal from the regional boards to a general board in Washington, the decision of which shall be final.

In this connection the statement of the Director-General that the conferences have been "decidedly helpful in bringing out a clearer development as to the real issues involved and as to the character of the evidence pertinent to those issues" is regarded as significant. All of the union executive officers are acquainted with the Director-General's findings and recommendations to the President, and they read and approved the statement made public by him before it was issued. This leads
I posted this article at reddit, and would like to hear constructive feedback here at LTU.
Thanks!
Managing concurrency effects is enormously challenging. Monotonicity and quasi-concurrency can help.
When talking about concurrency, the main property of effects that I care about is commutativity. With concurrency, ordering is of course arbitrary, so you want to know whether different orderings will have different outcomes, right? If you can say definitively "these effects can be reordered arbitrarily and nobody will be the wiser" then that frees you up to do some really spectacular stuff.
"Oh yeah, reads of the same heap are commutative, whereas writes aren't; but writes to different heaps are commutative..." And then you naturally arrive at principled descriptions of synchronisation, memory barriers, and so on. Effect systems and concurrency go really smashingly well together, and I've got some great stuff planned in that department for Kitten.
I agree, and I'll share a bit of my recent experience.
I'm currently developing Awelon Bytecode (ABC) (a tacit concatenative bytecode based on arrows rather than stacks) and one of my interesting decisions is to enforce causal commutativity via the effects model. This supports a high degree of optimization and implicit parallelism.
It turns out that even imperative effects are easily tamed to work within the constraints of causal commutativity. It requires two more simple features: substructural types and capability security, which ABC also supports. Most objects are modeled as linear capabilities that, upon receiving a message, return a new object along with any results. A few special objects can model collaborative writes or communication based on commutativity or monotonicity. Operations on multiple objects can run in parallel, and combining results from multiple objects becomes a form of implicit synchronization.
In addition to causal commutativity, another nice property to enforce for an effects model is spatial idempotence. I prefer that idempotence refer to its original mathematical meaning of `f(x) = f(f(x))` (or `f = f f` in concatenative languages). By 'spatial idempotence' I refer to the common variation intended in most CS contexts, e.g. `do { r1 <- f(x); r2 <- f(x); return (r1,r2) } = do { r <- f(x); return (r,r) }` (or, roughly, `f dup = dup [f] dip f`). It turns out that enforcing spatial idempotence doesn't require much extra effort above causal commutativity. Mostly, this impacts how we create 'new' stateful objects, requiring we formally 'fork' an existing unique structure.
If we enforce causal commutativity and spatial idempotence pervasively, we get all the nice, declarative equational reasoning properties associated with pure functional programming... without any of the limitations with respect to open systems and decomposing applications into services. It's a very nice sweet spot in the PL design space. :)
ABC enforces these properties at the bytecode layer. Presumably, I could also model monad transformers or algebraic effects or staged DSLs. Many monads require programmers reason carefully about order and replication of effects, and hence are very imperative in nature. But I hypothesize that having causal commutativity and spatial idempotence at the lower layer will greatly improve the implicit concurrency within ad-hoc monads and other structures.
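These properties can be illustrated outside of ABC with a toy state-passing model (this is purely my own sketch in Python, not ABC's actual semantics): reads duplicate harmlessly, and writes to independent keys commute.

```python
# Toy sketch (not ABC): effects modeled as explicit state-passing,
# so the equational properties can be checked directly.

def read(state, key):
    # Leaves state unchanged, so duplicating the call is harmless
    # (spatial idempotence).
    return state, state[key]

def write(state, key, value):
    # Produces a new state; duplication is NOT harmless in general,
    # which is why a real system pairs this with substructural types.
    new_state = dict(state)
    new_state[key] = value
    return new_state, None

s0 = {"x": 1, "y": 2}

# Spatial idempotence of read: two reads == one read, result duplicated.
s1, r1 = read(s0, "x")
s2, r2 = read(s1, "x")
assert (s2, (r1, r2)) == (s0, (1, 1))

# Causal commutativity: writes to *different* keys commute.
sa, _ = write(s0, "x", 10)
sa, _ = write(sa, "y", 20)
sb, _ = write(s0, "y", 20)
sb, _ = write(sb, "x", 10)
assert sa == sb == {"x": 10, "y": 20}
```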
As long as you have the right programming model, or maybe I should call it "how I learned to stop worrying and love the mutation."
The advice in the reddit post is completely anecdotal, it seems like you could easily argue against each point and the real "truthiness" is not clear.
A F(const B& b, C& c)
var optThing = getOptThing()
I'm not the author, but I believe I can address your questions:
A method is no more than a function with an implicit "self" parameter, and it's an enormous convenience to have the state of a particular object available when writing code concerning the class of that object, so that you can describe the general behaviour in terms of any given concrete example.
I believe "method" is intended as "stateful procedure" and "function" is defined as "pure". These seem to be the typical meanings of these terms, and it makes the principle sound.
if [an object] were a simple value, then we wouldn’t need to bind data and behaviour together into an object in the first place.
I don't think that's necessarily true. You might only want to expose part of some piece of encapsulated data. The adapter pattern applies to immutable objects too, for instance.
Why not? What if I want to mix some common behaviour into other classes, and that behaviour requires certain state? Moreover, what if I want to do that multiple times?
Parameterization is simpler and more flexible than inheritance, always. Use interfaces to define modular signatures, and that's all you really need.
Would you also say that multiple inheritance should be banned?
Yes.
But some things truly are global, and mutable, and there is logically only ever one of them at a time in an instance of my application.
Until you need to unit test, or you want to run multiple instances of your program in isolated sandboxes. Global mutable state is not composable and difficult to reason about.
Why? If I have a lazy algorithm, then I expect to do work lazily fullstop, regardless of whether that work involves side effects.
Absent type and effect systems, or some principled replacement like iteratees, programs written with laziness + effects are not composable.
Isn’t that the job of a compiler to do for optimisation purposes?
This principle is about robustness in the face of refactoring. If you're reusing the same variable multiple times via assignment, it's very easy to change the meaning of code even when you're performing meaning-preserving, local code transformations. If your variables are all single-assignment, this can never happen.
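A small illustration of the hazard (names invented for the example): when a variable is reused for a new meaning, a later edit can quietly bind to the wrong one; with single assignment each name means one thing for its whole lifetime.

```python
# Variable reuse: 'total' changes meaning partway through, so an
# inserted line that reads 'total' can silently pick the wrong value.
def charge_mutable(prices, tax_rate):
    total = sum(prices)
    total = total * (1 + tax_rate)  # 'total' is now tax-inclusive
    return total

# Single-assignment style: each name is assigned exactly once.
def charge_ssa(prices, tax_rate):
    subtotal = sum(prices)
    total_with_tax = subtotal * (1 + tax_rate)
    return total_with_tax

assert charge_mutable([10, 20], 0.5) == charge_ssa([10, 20], 0.5) == 45.0
```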
I don't agree with most of the points that I argued. Just offering the alternative views.
some things truly are global, and mutable
Global mutables - in the sense of being non-local, external to the application - are fine. But ambient authority to access those globals is problematic. If globals are accessed in a capability-secure manner, we can greatly improve testability with mockups, portability, extensibility, and securability.
enormous convenience to have the state of a particular object available when writing code concerning the class of that object
It isn't clear to me that local, aliasable state is a good thing. Local state seems a de-facto aspect of OO, but I think we can get most benefits of OO - plus better properties for extensibility and runtime upgrade - if we instead securely encapsulate bindings to external state.
An object is an actor in a dynamic system; if it were a simple value, then we wouldn't need to bind data and behaviour together
Indeed, that's the point: we don't need to bind data and behavior together. There are other means to model dynamic systems.
Interface inheritance is the OOP way of expressing compliance with a protocol.
Consider an alternative: instead of inheriting an interface, create a function that wraps a concrete object with an interface. This is another way of expressing compliance with a protocol. It's also better for separation of concerns.
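That wrapping approach can be sketched roughly as follows (all names invented for illustration): the concrete object knows nothing about the protocol; a small adapter presents it.

```python
# Compliance-by-wrapping instead of interface inheritance.

class Logfile:
    """Concrete object; knows nothing about any 'sink' protocol."""
    def __init__(self):
        self.lines = []
    def append_line(self, text):
        self.lines.append(text)

class SinkAdapter:
    """Presents the 'sink' protocol: a single send() method."""
    def __init__(self, logfile):
        self._logfile = logfile
    def send(self, msg):
        self._logfile.append_line(str(msg))

def as_sink(logfile):
    # The wrapping function is the statement of protocol compliance.
    return SinkAdapter(logfile)

log = Logfile()
sink = as_sink(log)
sink.send("hello")
```

The protocol concern lives entirely in the adapter, so Logfile can satisfy many protocols without inheriting any of them.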
What if I want to mix some common behaviour into other classes
Perhaps if we have a proper 'traits' model - i.e. flat, commutativite, stateless mixins - then this wouldn't be so problematic. Traits offer an interesting means to compose complete objects from 'partial' object concepts. I also think they'd work very well together with statically computed constraint models.
But, historically, most approaches to implementation inheritance and multiple inheritance are deeply flawed and should be avoided if possible.
I can simply run the effects I mean to run, when I mean to run them
Right. That will be simple when computers do what I mean instead of what I say.
Isn't [SSA] the job of a compiler to do for optimisation purposes?
I use this pattern, not for optimization but because it's easier for me to reason about code when variables within a function are constant. Of course, I prefer to avoid variables entirely and code in a point-free style. But when I'm stuck writing Java or C++ code, I use SSA.
the compiler might as well check it for me
That's a good philosophy wherever it's feasible. :)
Self discipline is a limited resource, and never is it evenly distributed in a team.
Most things have been addressed likely better by others than I could, but I will address the places that you suggest should be handled by the compiler.
Remember that these are notes on how to survive in a typical imperative / OOP environment where a language like C++, C#, or Java is forced upon you. AFAIK, features like SSA enforcement are not available in these languages.
What are the concurrency glitches in "Glitch" beyond over claiming ;-)
PS. Is the full paper on "Glitch" accessible?
I am definitely ambitious in my claims; whether or not I'm overclaiming remains to be seen :)
I haven't written a full paper on Glitch yet, and I'm still missing an important feature with respect to bringing time inside. It's next on my list of things to do/write.
I'd only like to point out that there are plenty of ways to fix mutable state beyond avoiding it or locking it behind an actor.
The ten rules you offer seem a bit arbitrary in subject matter and order. (I find rule ten grating: that you should pray peers let you follow other rules. This helps exaggerate the uneven quality of listed items.)
Is there a language-oriented aspect you want to emphasize? Usually that's what folks here find interesting. I have trouble finding a theme beyond managing mutation. Yes, it's a good idea to identify immutable parameters, and to avoid mutation, and serialize change when avoiding it is not possible.
Avoiding globals is a good idea. In practice this is rarely done, because some form of global state is often necessary, representing whatever state an app manages to learn while running. To avoid global variables, passing an explicit parameter to every method representing global environment state is necessary, which can be very inconvenient. If you mix libraries from different sources, it's infeasible to get them all to put state in the same environment representation. (If every library abstracted this, it wouldn't be a problem, but almost no one thinks about abstracting that.)
A language might help organize state management with a scope abstraction, but it might not be useful unless standard, and finding the magic one-size-fits-all solution would be hard.
I replaced rule 10 with SSA.
well, I seem to recall D. Barbour having posted some thoughts now and then (an example) about how the concern over global is possibly a big red herring. At least if one can unravel the sweater of assumptions and history and habit enough. I don't grok it yet myself, of course.
Global state always seems necessary, so being anti-global translates poorly into direct, productive result. This is an instance of a larger category of good practice covered by, "If possible, don't impose your decisions on others, forcing them to cope, instead of giving them independence." Below I amplify what I mean by avoiding globals: deferring choice of what is global to whoever owns resources in the app environment. Alan Kay would call this strategy "late binding", which applies to a lot of things. Depending on something too early limits possible uses, which below Matt Fenwick correctly calls coupling, which devs in the olden days (80's and 90's) used to hope could be solved by component architectures inspired by OO style coding. Dependency injection is a way to late-bind, so callers can pass in dependencies they prefer, instead of using those you force on them willy nilly.
Here's a fictional dialog where Stu burns Jim with early decisions about globals:
"When I init your library a second time, it breaks," Jim complained. "Of course," Stu nodded. "My globals get initialized the first time, so it's dumb to do it again. User error: don't do that." "I'm trying to simulate two instances of something in one OS process," Jim explained, "so I can debug without actually doing all my testing with multiple processes. I'm guessing you don't support that." "Nope, can't do that," Stu shook his head. "Only one copy of my runtime is needed at once, so it's silly to have two. Use multiple processes." Jim sighed. "Well that makes life more complex."
Wil handles globals differently, letting Jim decide, as shown in this dialog:
"Does your library have globals?" Jim asked despondently. "Sort of," Wil considered. "in the sense each runtime instance has state global to the instance. But space is provided by you, wherever you want. The library has no mutable globals declared statically inside, only const static data like string names and lookup tables. It's a bit object-oriented in the sense you construct a runtime instance of a yard where all mutable state goes." "Then I can have more than one yard instance in my process?" Jim anticipated. "Sure, as many as you want," Wil nodded. "Each call to the library takes a yard parameter, either directly or indirectly. They won't interfere with each other, unless you confuse your own traffic with them." "Won't they interfere in calls to use global resources?" Jim suspected. "No," Wil smiled, "because the only way each yard accesses the outside world is through calls to an abstract env object you passed in when the yard was constructed. So if you supply each yard with a separate env instance, and you keep them outside of each other's hair, you should be good." "But now I'm responsible for avoiding conflict in my multiple env instances," Jim chewed his lip. "Al, Ed, and Ty have multiple env implementations you can use as a starting point," Wil suggested.
Then Jim can make space global if he wants, since Wil's code doesn't need to know. Or if Jim is making a library used by Ivy, and Jim wants to let Ivy decide how and where space is allocated, their dialog might go like this:
"Did you commit me to your space decisions?" Ivy wondered. "Nope," Jim assured, "here's how you allocate space I use, but you also need to allocate space and the env object I pass into Wil's library. Now you're on the hook instead of me. Am I clever, or what." "Yes!" Ivy pumped one fist. "Here's a tip. Don't spend it all in one place."
Whoever has top level environment responsibility decides how resources get used. There is still global state, but nested scopes should not need to know what is global, or how many instances of the runtime exist. This is what avoiding globals means: representing requirements as an interface until a final decision is made by an owner of resources involved, who injects them as a dependency. This works when local scopes abstract their dependencies so they can be passed as parameters.
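The Wil-style arrangement in the dialogue can be sketched like this (all names invented for illustration): the library declares no static globals; each runtime instance carries its state, and reaches the outside world only through a caller-supplied env.

```python
class Env:
    """Caller-supplied access to the outside world (here: just a log)."""
    def __init__(self):
        self.log = []
    def emit(self, msg):
        self.log.append(msg)

class Yard:
    """Library runtime instance; all mutable state lives here."""
    def __init__(self, env):
        self.env = env
        self.counter = 0
    def step(self):
        self.counter += 1
        self.env.emit("step %d" % self.counter)
        return self.counter

# Two isolated instances in one process -- impossible with static globals.
yard_a = Yard(Env())
yard_b = Yard(Env())
yard_a.step(); yard_a.step()
yard_b.step()
assert yard_a.counter == 2 and yard_b.counter == 1
```

If Jim wants one global Yard, he makes that choice at the top level; the library never forced it.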
I would suggest that all state be provided in the manner you describe: an external provider 'injects' the state like any dependency injection model. And the decisions keep getting pushed 'upwards' as from Wil to Jim then Jim to Ivy.
This pattern could be protected if we remove ambient authority to create 'new' stateful objects (variables, actors, etc.), and instead require explicit capabilities that create a precise line-of-authority for binding securable partitions of external state.
I suppose this isn't the same as 'global state' in connotation. But it certainly isn't 'local'.
I generally agree with all the points made. Some of these principles already exist, so you can use their established names to have more weight.
For example, using naming conventions like 'opt' is an example of "(Systems) Hungarian Notation", which is a naming convention that acts like a poor man's type system. This is also useful when we have one "String" type, for example we can use "uSomeInput" and "sSomeInput" to denote unsafe (user-supplied) and safe (escaped) strings.
Unfortunately HN often degrades to the lowest common denominator of the language's built-in types, eg. "iFoo" for ints, "sFoo" for strings, "bFoo" for bools, etc. which is pointless since the language already enforces these types. A lot of people reject the idea of HN due to only having experience with this degenerate form.
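Where the language allows it, the same unsafe/safe distinction can be pushed into actual types instead of name prefixes. A rough Python sketch (names invented, not from any particular framework):

```python
import html

class UnsafeStr:
    """User-supplied text; must be escaped before rendering."""
    def __init__(self, raw):
        self.raw = raw

class SafeStr(str):
    """Marker type: produced only by escape()."""
    pass

def escape(u):
    # Only UnsafeStr can be escaped, and only escaping yields SafeStr.
    assert isinstance(u, UnsafeStr)
    return SafeStr(html.escape(u.raw, quote=True))

def render(s):
    # Refuses plain strings, so unescaped input can't sneak through.
    if not isinstance(s, SafeStr):
        raise TypeError("render() requires an escaped SafeStr")
    return "<p>%s</p>" % s
```

Unlike "uFoo"/"sFoo" prefixes, this version is checked at runtime rather than by code review.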
Another point I'd add is to follow Static Single Assignment for local variables rather than re-using them, since it makes inserting new statements easier later.
A point I'd make which is aimed more at scripting languages than C#-style languages is to aim for high locality, which is basically encapsulation enforced by lexical scope. Basically, declare your variables (especially functions and classes) in the tightest scope possible. I've worked on many codebases which enforce all kinds of OO encapsulation guidelines in the server-side code, but then make everything global in their Javascript :S
SSA is a rule I've been following, but forgot to mention on the list! I was pretty sure I'd miss something, though :)
Will add!
Interesting point about lexical scoping of artifacts in scripting languages. I'll think about that some more!
Good to hear "coupling" getting some mention.
In my field (scientific software), coupling is one of the biggest problems we face. We (scientists) need modular and flexible software, because we're continually finding and studying novel phenomena, and we don't have time to keep rewriting from the ground up (plus we're supposed to be doing science, not software!). But coupling absolutely kills us.
It's amazing how it shows up at pretty much every level, from the lowest to the highest, for instance:
Perhaps coupling is just extremely hard to avoid.
I don't know if you have any control over the formatting, but it could be a lot easier to read, just with some simple changes.
In response to your addendum, I've sometimes wondered if there is more than one brand of OOP: the one that talented programmers employ, and the one that less-talented programmers employ (and is taught in schools, explained on everybody's blog, etc.). Perhaps OOP is only oversold by those who have an incomplete/flawed grasp of it? Perhaps OOP's failure is only that it became popular, and was incorrectly learned and applied? Perhaps any other paradigm, it it became popular, would be just as misunderstood by those who don't "get it", and hated by those who do "get it" for being misunderstood? Just a thought.
Yes, every time I bring up OOP, I fear that I miscommunicate because there are so many different OOPs :)
But yes, here I talk about the OOP that is most commonly used in C-derived languages.
custom input/output formats
What sorts of formats are common in your domain?
I'm not sure I understand your question but I'll give it a go (please let me know if I misunderstood):
lots and lots of textual formats, including CSV, pseudo-CSV (looks like CSV but more special cases), XML, macro-like, Lisp-ish, and a whole host of others, along with binary formats (that come with custom-formatted text files holding metadata, although some store the metadata in the binary file as well).
Well, you mentioned a plethora of input/output formats, so I was wondering what kind they are, and if there's any effort to standardize on a few, or perhaps if someone's made the effort to create a standard library for importing from these various formats; there are probably too many though.
I figured XML and CSV would be common, and unfortunately expected pseudo-CSV.
While there have been standardization efforts, this is usually the result (part of the problem may be licensing issues and loss of grant funding leading to abandonware -- tools that are old, but useful, which can't be properly maintained).
There was a nice effort to provide universal translation. They decided to translate by importing/exporting everything using a common model, but unfortunately made it an all-or-nothing approach (i.e. for the 80% that is within their model it works fine, but for the 20% on the outside, it is no help at all).
Also, the parsers/serializers are coupled to the model. Result: my valid data (output of another tool) can't be translated with that tool, and I also can't use their parser to load it into my own model.
oops -- wrong spot | http://lambda-the-ultimate.org/node/4854 | CC-MAIN-2019-18 | refinedweb | 3,529 | 53.61 |
I'm learning Python by following Automate the Boring Stuff. This program is supposed to go to and download all the images for offline viewing.
I'm on version 2.7 and Mac.
For some reason, I'm getting errors like "No schema supplied" and errors with using request.get() itself.
Here is my code:
# Saves the XKCD comic page for offline read
import requests, os, bs4, shutil

url = 'http://xkcd.com'

if os.path.isdir('xkcd'):  # If the xkcd folder already exists
    shutil.rmtree('xkcd')  # delete it
os.makedirs('xkcd')        # Create a fresh xkcd folder.

while not url.endswith('#'):  # When there are no more posts, the url ends with '#'; exit the loop
    # Download the page
    print 'Downloading %s page...' % url
    res = requests.get(url)   # Get the page
    res.raise_for_status()    # Check for errors

    soup = bs4.BeautifulSoup(res.text)  # Parse the page

    # Find the URL of the comic image
    comicElem = soup.select('#comic img')  # Any '#comic img' it finds is saved as a list in comicElem
    if comicElem == []:  # if the list is empty
        print 'Couldn\'t find the image!'
    else:
        comicUrl = comicElem[0].get('src')  # Get the first element in comicElem (the image)
                                            # and save it to comicUrl
        # Download the image
        print 'Downloading the %s image...' % (comicUrl)
        res = requests.get(comicUrl)  # Get the image. Getting something will always use requests.get()
        res.raise_for_status()        # Check for errors

        # Save image to ./xkcd
        imageFile = open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb')
        for chunk in res.iter_content(10000):
            imageFile.write(chunk)
        imageFile.close()

    # Get the Prev button's URL
    prevLink = soup.select('a[rel="prev"]')[0]
    # The Previous button is first: <a rel="prev" href="/1535/" accesskey="p">&lt; Prev</a>
    url = 'http://xkcd.com' + prevLink.get('href')
    # adds e.g. /1535/ to http://xkcd.com

print 'Done!'
Traceback (most recent call last):
File "/Users/XKCD.py", line 30, in <module>
res = requests.get(comicUrl) # Get the image. Getting something will always use requests.get()
File "/Library/Python/2.7/site-packages/requests/api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "/Library/Python/2.7/site-packages/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/models.py", line 304, in prepare
self.prepare_url(url, params)
File "/Library/Python/2.7/site-packages/requests/models.py", line 362, in prepare_url
to_native_string(url, 'utf8')))
requests.exceptions.MissingSchema: Invalid URL '//imgs.xkcd.com/comics/the_martian.png': No schema supplied. Perhaps you meant http:////imgs.xkcd.com/comics/the_martian.png?
The traceback shows the cause: the image's src attribute is a protocol-relative URL ('//imgs.xkcd.com/comics/the_martian.png'), so requests has no scheme to work with. Change your comicUrl handling to add the scheme explicitly:

comicUrl = comicElem[0].get('src')
if comicUrl.startswith('//'):
    comicUrl = 'http:' + comicUrl
print "comic url", comicUrl
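More generally (a sketch of an alternative approach, not from the original answer), urljoin resolves relative and protocol-relative src values against the page URL. It is shown here with Python 3's urllib; the question itself uses Python 2, where the same function lives in the urlparse module:

```python
from urllib.parse import urljoin

page_url = 'http://xkcd.com'
src = '//imgs.xkcd.com/comics/the_martian.png'  # protocol-relative, as in the traceback

# urljoin borrows the scheme (and, for plain relative paths, the host) from page_url
comic_url = urljoin(page_url, src)
print(comic_url)  # http://imgs.xkcd.com/comics/the_martian.png
```

This also handles ordinary relative paths such as '/comics/foo.png' without any special-casing.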
I’m having a hard time cleaning up the output for part three of the Censor Dispenser project (proprietary_terms). This is the code I’m using:
def censor_propietary_terms(email):
    for term in proprietary_terms:
        email = email.replace(term, 'X')
    return email
If I print censor_propietary_terms(email_two) I get the following output:
Good Morning, Board of Investors, Lots of updates this week. The Xs X has access to the world wide web has increased exponentially, far faster than we had though the Xs were capable of. Not only that, but we have configured X X to allow for communication between the system and our team of researcXs. That's how we know X considers Xself to be a X! We asked! How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary X is starting to form. This is a major step in the process, as having a X and X will allow X
I would really appreciate some help with understanding what’s going on here and how I can return the expected output for this part of the project. | https://discuss.codecademy.com/t/censor-dispenser-part-three-help/448642 | CC-MAIN-2020-40 | refinedweb | 192 | 67.79 |
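The scattered X's inside words (researcXs, Xself) happen because str.replace matches substrings anywhere, not just whole words. Below is a sketch of a whole-word version using a regex word boundary; the proprietary_terms list here is a hypothetical example, not the course's official data:

```python
import re

proprietary_terms = ["she", "her", "personality matrix"]  # hypothetical example list

def censor_whole_words(email, terms):
    # Longest terms first so multi-word phrases win over their sub-words;
    # \b anchors keep 'she' from matching inside 'finished'.
    alternatives = "|".join(re.escape(t) for t in sorted(terms, key=len, reverse=True))
    pattern = re.compile(r"\b(" + alternatives + r")\b", re.IGNORECASE)
    return pattern.sub("X", email)

print(censor_whole_words("She finished her research.", proprietary_terms))  # X finished X research.
```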
You can surpass the 26 drive letter limitation by using NTFS junction points. By using junction points, you can graft a target folder onto another NTFS folder or "mount" a volume onto an NTFS junction point. Junction points are transparent to programs.
Tools for NTFS Junction Points
Microsoft offers three utilities for creating and manipulating NTFS junction points:
Linkd.exe
- Grafts any target folder onto a Windows 2000 version of NTFS folder
- Displays the target of an NTFS junction point
- Deletes NTFS junction points that are created with Linkd.exe
- Location: Microsoft Windows 2000 Resource Kit
Mountvol.exe
- Grafts the root folder of a local volume onto a Windows 2000 version of NTFS folder (or "mounts" the volume)
- Displays the target of an NTFS junction point that is used to mount a volume
- Lists the local file system volumes that are available for use
- Deletes the volume mount points that are created with mountvol.exe
- Location: Windows 2000 CD-ROM in the I386 folder
Delrp.exe
- Deletes NTFS junction points
- Also deletes other types of reparse points, which are the entities that underlie junction points
- Aimed primarily at developers who create reparse points
- Location: Microsoft Windows 2000 Resource Kit
More Information
Sample Usage
- To create a junction point to your desktop:
- At a command prompt, type linkd mydesktop user profile\desktop (where user profile is the name of the user profile).
- Type dir mydesktop to display the contents of your desktop.
- To list the available volumes on your system, at a command prompt, type mountvol.
NOTE: The string after "Volume" is the GUID that is used to identify a unique volume even if the drive letter changes.
\\?\Volume{e2464851-8089-11d2-8803-806d6172696f}\ C:\
\\?\Volume{e2464852-8089-11d2-8803-806d6172696f}\ D:\
\\?\Volume{e2464850-8089-11d2-8803-806d6172696f}\ R:\
- To mount your CD-ROM onto an NTFS junction point:
- At a command prompt, type md cd.
- Type mountvol cd \\?\Volume{e2464850-8089-11d2-8803-806d6172696f}\.
- Type dir cd to display the contents of your CD-ROM.
- To mount another volume onto an NTFS junction point on your system drive:
- At a command prompt, type md ddrive.
- Type mountvol ddrive \\?\Volume{e2464852-8089-11d2-8803-806d6172696f}\
- Type dir ddrive to displays the contents of drive D.
- To delete junction points:
- To delete the mydesktop junction point, at a command prompt, type linkd mydesktop /d or Delrp mydesktop.
- To delete the CD mount point, at a command prompt, type mountvol \\?\Volume{e2464850-8089-11d2-8803-806d6172696f}\ /d.
- To delete the ddrive mount point, at a command prompt, type mountvol \\?\Volume{e2464852-8089-11d2-8803-806d6172696f}\ /d.
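The mountvol listing shown in the sample usage pairs each volume GUID path with its mount point; a small parser sketch (illustrative only, not part of the KB article) that maps drive letters to volume GUIDs:

```python
import re

# Sample output in the format mountvol prints (volume GUID path, then mount point)
mountvol_output = r"""
\\?\Volume{e2464851-8089-11d2-8803-806d6172696f}\ C:\
\\?\Volume{e2464852-8089-11d2-8803-806d6172696f}\ D:\
\\?\Volume{e2464850-8089-11d2-8803-806d6172696f}\ R:\
"""

def parse_mountvol(text):
    """Map each drive letter to the GUID of the volume mounted there."""
    mapping = {}
    for guid, letter in re.findall(r"Volume\{([0-9a-fA-F-]+)\}\\+\s+([A-Za-z]):", text):
        mapping[letter.upper()] = guid
    return mapping

print(parse_mountvol(mountvol_output))
```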
Usage Recommendations
NOTE: Microsoft recommends that you follow these recommendations closely when you use junction points:
- Use caution when you apply ACLs or change file compression in a directory tree that includes NTFS junction points.
- Do not create namespace cycles with NTFS or DFS junction points.
- Put all your junction points in a secure location in a namespace where you can test them out in safety, and where other users will not mistakenly delete them or walk through them.
Feature Comparison to DFS
NTFS junction points are similar to the junction points in DFS because both are tools that are used to graft storage namespaces together. However, DFS junction points typically have more features than NTFS junction points. The following table lists some of the differences between DFS and NTFS junction points.
For additional information about the support of NTFS junction points on a cluster, click the following article number to view the article in the Microsoft Knowledge Base:
Properties
Article ID: 205524 - Last Review: Dec 16, 2009 - Revision: 1 | https://support.microsoft.com/en-us/help/205524/how-to-create-and-manipulate-ntfs-junction-points | CC-MAIN-2017-34 | refinedweb | 599 | 60.55 |
MOUNT_NTFS(8) BSD System Manager's Manual MOUNT_NTFS(8)
mount_ntfs - mount an NTFS file system
mount_ntfs [-a] [-i] [-u uid] [-g gid] [-m mask] special node
The mount_ntfs command attaches the NTFS filesystem residing on the device special to the global filesystem namespace at the location indicated by node. The special device must correspond to a partition registered in the disklabel(5).

NTFS file attributes can be accessed in the following way:

     foo[[:ATTRTYPE]:ATTRNAME]

'ATTRTYPE' is one of the identifiers listed in the $AttrDef file of the volume. Default is $DATA. 'ATTRNAME' is an attribute name. Default is none.

Examples:

To get the volume name (in Unicode):
     # cat /mnt/\$Volume:\$VOLUME_NAME

To read directory raw data:
     # cat /mnt/foodir:\$INDEX_ROOT:\$I30
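The foo[[:ATTRTYPE]:ATTRNAME] convention (an optional attribute type, then an attribute name) can be sketched as a tiny parser; this is an illustration only, not part of the man page:

```python
def parse_ntfs_name(name):
    """Split 'file[[:ATTRTYPE]:ATTRNAME]' into (file, attrtype, attrname).

    Per the man page, ATTRTYPE defaults to $DATA and ATTRNAME to none.
    """
    parts = name.split(":")
    if len(parts) == 1:                      # plain file name
        return parts[0], "$DATA", None
    if len(parts) == 2:                      # foo:ATTRNAME
        return parts[0], "$DATA", parts[1]
    return parts[0], parts[1], parts[2]      # foo:ATTRTYPE:ATTRNAME

print(parse_ntfs_name("foodir:$INDEX_ROOT:$I30"))  # ('foodir', '$INDEX_ROOT', '$I30')
```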
There is limited writing ability for files. Limitations:

     • the file must be non-resident
     • the file must not contain any holes (uninitialized areas)

Make sure the appropriate partition has the correct entry in the disk label, particularly that the partition offset is correct. If the NTFS partition is the first partition on the disk, the offset should be '63' on i386 (see disklabel(8)). If the NTFS partition is marked as 'dynamic' under Microsoft NT, it won't be possible to access it under OpenBSD anymore, because its type and layout change.

MirOS BSD #10-current                         October 31,
long int strtol ( const char * str, char ** endptr, int base );
<cstdlib>
Convert string to long integer
Parses the C string str interpreting its content as an integral number of the specified base, which is returned as a long int value. Finally, a pointer to the first character following the integer representation in str is stored in the object pointed by endptr. If the value of base is zero, the syntax expected is similar to that of integer constants: an optional plus or minus sign, an optional prefix indicating octal or hexadecimal base ("0" or "0x"/"0X" respectively), and a sequence of digits.
/* strtol example */
#include <stdio.h>
#include <stdlib.h>
int main ()
{
char szNumbers[] = "2001 60c0c0 -1101110100110100100000 0x6fffff";
char * pEnd;
long int li1, li2, li3, li4;
li1 = strtol (szNumbers,&pEnd,10);
li2 = strtol (pEnd,&pEnd,16);
li3 = strtol (pEnd,&pEnd,2);
li4 = strtol (pEnd,NULL,0);
printf ("The decimal equivalents are: %ld, %ld, %ld and %ld.\n", li1, li2, li3, li4);
return 0;
}
The decimal equivalents are: 2001, 6340800, -3624224 and 7340031 | http://www.cplusplus.com/reference/clibrary/cstdlib/strtol/ | crawl-002 | refinedweb | 152 | 56.29 |
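The same conversions can be reproduced in Python with int(s, base), where base 0 mirrors strtol's prefix auto-detection; this is an illustration, not part of the C reference:

```python
tokens = "2001 60c0c0 -1101110100110100100000 0x6fffff".split()
bases = [10, 16, 2, 0]  # base 0 auto-detects the 0x prefix, like strtol with base 0

values = [int(tok, base) for tok, base in zip(tokens, bases)]
print(values)  # [2001, 6340800, -3624224, 7340031]
```

Unlike strtol, int() raises ValueError on trailing garbage instead of reporting where parsing stopped, which is why the C function takes the endptr argument.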
If you are using a Windows computer, you can use Docker Desktop's built-in Kubernetes support to run a local cluster without needing any additional virtualization software installed.
You can use the Chocolatey package manager to quickly set up your Docker cluster on Windows.
choco install docker-desktop
choco install kubernetes-helm.
git clone
Indicate the Kubernetes worker nodes that should be used to execute user containers by OpenWhisk's invokers. For a single node development cluster, simply run:
kubectl label nodes --all openwhisk-role=invoker
Make sure you created your mycluster.yaml file as described above, and run:

cd openwhisk-deploy-kube
helm install owdev ./helm/openwhisk -n openwhisk -f mycluster.yaml
You can use the command helm status owdev -n openwhisk to get a summary of the various Kubernetes artifacts that make up your OpenWhisk deployment. Once the install-packages Pod is in the Completed state, your OpenWhisk deployment is ready to be used.
Tip: If you notice errors or pods stuck in the Pending state (init-couchdb, for example), try running kubectl get pvc --all-namespaces. If you notice that claims are stuck in the Pending state, you may need to follow the workaround mentioned in this Docker for Windows GitHub issue.
You are now ready to set up the wsk cli. Further instructions can be found here.
Description:
------------
I would like to have an option to declare immutable variables,
for example
$a=1;// is a mutable variable
#b=1;// is an immutable variable
The advantages of immutable variables are numerous:
1. caching function return values
2. parallelism (no shared memory)
3. pass variables by reference
4. Testability, just record a pure function's input and outputs, and you have automatically generated unit-tests.
Test script:
---------------
//sample code:
#a=array(1,2);
#a=array(1,3);//ok
#a[1]=4;//error: modifying immutable variable
//pure functions:
function #pure_function(#a,#b)
{
return #a+#b;
}
The language specification describes the language as it is. This is a feature request against the language itself.
At first it should be noted that this syntax (# instead of $) is
not possible, because # starts a line comment (it's an alternative
to //).
| #a=array(1,2);
| #a=array(1,3);//ok
| #a[1]=4;//error: modifying immutable variable
If the second statement is allowed, I wouldn't say that the
variable is immutable, but rather that the value is immutable. It
is, however, already possible to have immutable values by using
immutable objects.
I was able to implement this feature myself:
I would love to see this feature implemented (and extended) in PHP7
And break millions of lines of code out there that use # comments? No chance.
You know, the # sign could be replaced with @ or any other character
Like which character? When you look through the parser you will find that there are very few single-character operators available. @ is the error-suppression character.
First of all, despite the fact that @ is used to suppress errors and warning, it could be used (because variables are tokenized before the error-suppression character)
Second, it doesn't really matter... there are a lot of other characters that can be used to declare new variables: ~ , !, %, * , ^
And even, a combination of characters, for example **
It's a very common syntax in other languages.
for example, in python:
def func (**kwargs):
and it is clear that the * in this context does not imply multiplication
Well, let's pretend the # syntax is no problem. But what about the
semantics of the "immutable variables"? Consider:
<?php
#a = [1,2];
$b = &#a;
$b[0] = 3;
var_dump(#a);
var_dump($b);
?>
What would be the result?
Regarding your question, the snippet you wrote should yield a syntax error on this line:
$b = &#a;//syntax error: Cannot reference immutable objects
And just to elaborate more on the `immutable variables` vs `immutable values` concepts:
1. An immutable variable (#) cannot be assigned a mutable value.
2. And when assigning a mutable variable ($) the value of an immutable variable, the value would be copied (and not referenced) | https://bugs.php.net/bug.php?id=69738&edit=1 | CC-MAIN-2019-22 | refinedweb | 464 | 61.97 |
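The distinction the commenters draw between an immutable variable and an immutable value can be illustrated in Python (an analogy only; this is not PHP semantics) with a read-only mapping view:

```python
from types import MappingProxyType

data = {"a": 1}
view = MappingProxyType(data)   # an immutable *view* of a mutable value

try:
    view["a"] = 2               # writing through the view is rejected...
except TypeError:
    print("view is read-only")

data["a"] = 2                   # ...but the underlying value is still mutable,
print(view["a"])                # and the view observes the change: prints 2
```

The view gives value-immutability through one name, while the variable (binding) itself was never protected — exactly the ambiguity raised against the # proposal.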
The vast majority of text fields we create on our forms hold plain text. Downstream systems that receive data from our forms handle plain text much more easily than they deal with rich text expressed in XHTML.
Obviously this is a bit of a compromise, since plain text is much less expressive than rich text. However, one area where we can express some richness in our plain text is by handling paragraph breaks — specifically by differentiating them from line breaks. This means that paragraph properties on your field such as space before/after and indents can be applied to paragraphs within your plain text. The main difficulty is how to differentiate a paragraph break from a line break in plain text and what keystrokes are used to enter the two kinds of breaks.
Keystrokes
Most authoring systems have keystrokes to differentiate a line break from a paragraph break. The prevalent convention is that pressing “return” adds a paragraph break, and pressing “shift return” adds a line break. However that convention seems to be enforced only when the text storage format is a rich format. E.g. it works this way in Microsoft Word, but it doesn’t work this way in notepad. Similarly in our forms. When entering boilerplate text in Designer or when entering data in a rich text field we follow this keystroke convention. Entering “return” generates a <p/> element, and entering “shift return” generates a <br/> element. However, when entering data in a plain text field there is no difference between return and shift-return. Both keystrokes generate a linefeed — which is interpreted as a line break.
Characters
You might assume that in plain text we could simply use the linefeed (U+000A) and the carriage return (U+000D) to differentiate between a line break and a paragraph break. However, it is not so easy. We store our data in XML, and the Unicode standard for XML does not support differentiating these characters. XML line end processing dictates that conforming parsers must convert each U+000A, U+000D sequence to U+000A, and also instances of U+000D not preceded by U+000A to U+000A.
As of Reader 9, we have a solution by using Unicode characters U+2028 (line break) and U+2029 (paragraph break). When these characters are found in our data, they will correctly generate the desired line/paragraph breaking behaviours.
The problem now is one of generating these characters from keystrokes. We can’t just change the default behaviour of Reader to start inserting a U+2029 character from a carriage return. Legacy systems would be surprised to find Unicode characters outside the 8-bit range in their plain text.
However, the form author can explicitly add this behaviour. The trick is to add a simple change event script to your multi-line plain text field:
testDataEntry.#subform[0].plainTextField[0]::change – (JavaScript)
// Modify carriage returns so that they insert Unicode characters
if (xfa.event.change == '\u000A')
{
    if (xfa.event.shift)
        xfa.event.change = '\u2028'; // line break
    else
        xfa.event.change = '\u2029'; // paragraph break
}
As you can see in the sample form, entering text into these fields will now generate the desired paragraph breaks in your plain text. | http://blogs.adobe.com/formfeed/2009/01/paragraph_breaks_in_plain_text.html | CC-MAIN-2017-13 | refinedweb | 536 | 62.88 |
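Downstream code can recover the structure from such plain text by splitting on the two separators; a Python sketch (illustrative only, unrelated to the XFA script above):

```python
text = "First paragraph, line one\u2028line two\u2029Second paragraph"

for i, para in enumerate(text.split("\u2029"), start=1):   # U+2029 = paragraph break
    lines = para.split("\u2028")                            # U+2028 = line break
    print("paragraph %d: %s" % (i, lines))
```

Note that str.splitlines() treats both U+2028 and U+2029 as line terminators, so splitting explicitly as above is what preserves the paragraph/line distinction.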
I know the bathroom is here somewhere (Score:5, Funny)
but Google maps keeps directing me to the middle of the city.
Re:I know the bathroom is here somewhere (Score:5, Funny)
I don't know about you but I can't wait for the "George Costanza" app that uses Google's API to map out the best public and private bathrooms in a city
;)
Re:I know the bathroom is here somewhere (Score:4, Interesting)
I used to study for exams inside JCPenney's truck-loading dock bathroom. I had a test tomorrow, but I couldn't leave my job, and so that seemed a natural place to hide and review my notes for 1 or 2 hours without getting caught. Quiet too since the dock was rarely used at night.
Ahhh the good old days.
Re: (Score:2)
This is why security guards need to be used with security cameras at all times.
So they can watch each other.
Re: (Score:3, Informative)
For those who want to check it out: [sitorsquat.com] [apple.com]
North Lobby (Score:5, Funny)
You are in a nicely-appointed lobby that would not be out of place at an upscale accounting firm. There is a reception desk, some waiting chairs, and a stack of Wall Street Journals. Down the hall to the east, you hear sounds of flushing.
> GO TO BATHROOM
Here? In the lobby? You would certainly be escorted out by the grumpy security guard that just walked through.
> ASK GUARD FOR BATHROOM
He's gone already, but did not seem the conversational type. He walked down the hall to the east, opened a door, and went inside. You can hear a faucet running there.
> GO TO BATHROOM
Using what? The stack of Wall Street Journals? They are printed on 100% post-consumer recycled fibers, if you catch our drift. It would be unpleasant.
> GO EAST
You wander down the hallway, a little too frantic for a casual stroll, muttering "Follow that guard!" to yourself and giggling. You spy two doors, marked "Women" and "Men". The men's room door is open. You see a guard inside, eyeing the last sheet of toilet paper.
> GO TO BATHROOM
You're in the men's room already.
> GO TO BATHROOM IN BATHROOM
WIth what? Your bare hands?
> GO TO BATHROOM IN BATHROOM WITH TOILET PAPER
Splendid concept, that toilet paper. Changed the whole face of hygiene (and the other end too.) Sadly, the guard has highly-trained bathroom-guard reflexes, and snatches the last sheet before you can even blink. As he quivers with smug satisfaction, you notice a billfold in his pocket. It contains quite a bit of cash.
> ASK GUARD TWO FIVES FOR A TEN
Re: (Score:2)
There is already.
It's called show me the loo on the iPhone. Toilets can be rated and this information is shared via the app.
Re: (Score:3, Funny) [imodium.com]
But it's Flash, so it won't work on an iPhone.
Re:I know the bathroom is here somewhere (Score:5, Funny)
I don't believe you DO need to find the bathroom: Googlebladder tells me you have a mostly empty bladder. Then again, it is still in beta, and I don't have an invite to googlecolon.
Re: (Score:2)
Googlebladder would come in handy. I'm also waiting for the day, when we check Google-RealTime-Full-BodyScans to determine pregnancy rather than having our women use test kits.
Re:I know the bathroom is here somewhere (Score:5, Funny)
That's because Google has analysed your browsing habits, and is aware that you're an exhibitionist
;)
Re: (Score:2)
This is only modded up to +4 Funny. Clearly the folks moderating don't know me or it would somehow miraculously get modded up to +10.
Re: (Score:2)
ARTICLE SUMMARY INCORRECT - Not Google! (Score:5, Informative)
Unrelated to Google!
As expected on Slashdot, not only the submitter, but also the
/. editor didn't bother to read TFA. One segment might tip you off:
This is a separate company called Micello with a separate product. They may be counting on Google to buy them, but their only current relation to Google Maps is that they mention Google's product in the description of their own product, and that the article title contains the words "Google Maps".
Tell me when it can find my keys/socks/credit card (Score:5, Funny)
Then I'll be impressed. And scared.
Re:Tell me when it can find my keys/socks/credit c (Score:5, Funny) [red-bean.com]
They already know.
Re: (Score:2)
I volunteer (Score:3, Informative)
Re: (Score:2, Informative)
There are some parts of those buildings you really don't want to go.
Re: (Score:2, Interesting)
Re: (Score:2)
>>>To map all the strip joints and beer pubs.
And also the path to the girls' dorm's shower room. (Think Revenge of the Nerds or Porkys.)
we will have to hide in the woods... (Score:3, Funny)
This is great! (Score:3, Interesting)
Re:This is great! (Score:5, Interesting)
Soon, the human race will never again need to have a sense of direction, thanks to our GPS-and-wifi-triangulation-capable overlords!
That depends on how lazy the individual human is, doesn't it? I finally broke down and bought a TomTom for my travels but I don't feel compelled to use it (or even keep it in the car) when I'm near home. When traveling though it's incredibly useful. Even if you have a good sense of direction you'll find that the point of interest database will completely change the way you travel. Hmm, I'm hungry, how about some Italian? *tap, tap tap*, this place looks good and it's only three miles off our route.
I also like the TomTom over the cellular/google equivalents because I know it isn't phoning the mother ship with details about my location and travels. Personally I don't trust Google at all anymore with their data retention policy and sheer size. Perhaps that's a little paranoia on my part but it's the way I feel. A disconnected device has less privacy concerns and doesn't stop working if you wander somewhere without cellular service.
Re: (Score:2)
you also paid anywhere from $60 to $300 or more for absolutely nothing that you can't do with a phone nowadays - you don't even need a phone with GPS or a screen for that - just call goog 411.
Re: (Score:2)
That works real well in rural areas without cell phone service.
Re: (Score:3, Insightful)
But you can't do what he described with a phone - finding somewhere close to where you are and giving you turn by turn directions.
How does that work without GPS?
Re: (Score:2)
411 can give you directions without GPS now. It's something people don't realize. You're billed by the call too, so it's pretty darn nice. Goog 411 can help you find the place, and regular 411 can do the rest.
Meanwhile, trusting in your GPS when you don't have cellphone reception can, you know, lead you off a cliff.
Nothing beats simply planning your route *BEFORE* you leave.
Re: (Score:2)
How is being billed by the call "pretty darn nice" as opposed to something which has a one time fee and which you own?
One time fee? Have you ever looked at what Garmin charges for a map update (upwards of $60)? I get to drive around doing dropoff/pickup of customers' boxen, and I find that maps age very quickly.
Re: (Score:2)
Ok, so you're saying to trust the 411 cell phone call directions when you don't have cell phone reception to enable the mapping portion for GPS on your cell phone?
What part of that makes sense?
No cell phone reception means no calls. No reception does not mean the built in mapping data in unavailable and could still walk you through the turns if not track your location automatically. As I remember most streets are marked with these little signs at the intersections to tell you where you are.
Re: (Score:2)
who says you're not going to have areas where you don't have a GPS signal but do have cell reception, inversely?
Trees are not exactly GPS friendly, you know.
Re: (Score:2)
Trees are not exactly GPS friendly, you know.
While I may live in NYC, I still live in a part with plenty of trees (Staten Island, the borough of parks.) There are some places where I lose signal for a second or two but they usually pass quite quickly (I can usually tell I'm in one of these areas because my satellite radio also loses signal for a second or two. There are PLENTY of places where I lose cell phone signal, and it doesn't come back that simply, usually I have to drive for a bit before I receive signal again. Plus when you're out west there
Re: (Score:2)
Right, and that is supposed to be the same across all cellphone providers?
GPS is *always* blocked by trees. Cellphone service is provider specific.
Re: (Score:2)
I dont use my GPS for finding my way. I use it's best function.
Points of interest near my location. I love how my garmin will show the next 5 exist and let me pick food, sleep, gas, hookers, etc.... and then I can look at that list.
I just wish I could put in a favorite for each category, Say "speedway" for gas stations and make that at the top of the list...
THAT"s the best use, as well as my custom POI database showing speed traps and cameras.
Oh and my $99.00 Garmin does thins 80 times better than the $
Re: (Score:3, Interesting)
I love how my garmin will show the next 5 exist and let me pick food, sleep, gas, hookers, etc.... and then I can look at that list.
Your garmin has hookers in it's POI database? Shit, if I had known that I wouldn't have gone with the TomTom
;) Can you limit the search to ones without STDs?
I just have to buy a new POI and Map database every 2 years
How much does garmin charge you for that? I think TomTom is around $50 for a year worth of updates, i.e: it's not just one download and your done.
Re: (Score:2)
$50.00 a year. If I go every 2 and pay $99.00 for a new GPS I get new hardware, new battery, and new database+maps. Silly to pay the same $$$ for new data in a old hardware. Buddy of mine changes his every 5 years because rarely does anything change in the map data to really need the update.
and No, I'm fooling about the hookers part, Garmin wont give the ladies of the night equal billing with Bob Evans restaurants.
Nah, there's better IT for that out there.
Just open up your national database and run Reports/Entertainment/Personal/Discreet Range=YourCity Conditions/Add/#STD=0 OK/OK/ Print to File.
Re:This is great! (Score:5, Funny)
Personally I don't trust Google at all anymore with their data retention policy and sheer size. Perhaps that's a little paranoia on my part but it's the way I feel.
Theme song from "Jaws"... a knock sounds at the door. A woman answers, "Yes?"
A muffled voice sounds from the other side of the door, "Mrs. Arlsbergerhh?"
"Who?"
Again the voice is muffled, "Mrs. Johnannesburrrr?"
"Who is it?"
"Flowers."
"Flowers? From whom?"
"Plumber, ma'am.."
"I don't need a plumber. You're that clever Google, aren't you?"
"Candygram."
"Candygram, my foot! Get out of here before I call the proper authorities. You're Google, and you know it."
"I'm only TomTom, ma'am.."
"TomTom? Well.. okay.."
Re: (Score:2)
Re: (Score:2)
Denial much?
Re: (Score:2)
Who are you afraid of, son? Well? Out with it! Is she married? No? You in some kind of real trouble, then? They armed?
Since you still ain't talkin', I'd guess they's armed.
Bastards.
If those thugs want your GPS history, they a'gonna get it. You think your TomTom's safe? Wait 'til they've got a .357 in your face, and then you tell me about how safe your cutesy TomTom is.
List'n here, son: You ain't safe. Ain't noone safe these days. I reckon you might hightail it into the woods, but they'd still find
Sense of Direction woes (Score:3, Funny)
Not always.
I have no sense of direction. Here's an illustration - Back in my teens, my dad was driving and we were lost in the middle of nowhere in that maze of dirt roads that criscrosses the east Texas piney woods. We were looking for a shooting range where I was scheduled to participate in a pistol competition.
We pulled up to a T-intersection where we had to turn either left or right. My dad took his hands off the wheel, turned to me and asked "Which
Not google! (Score:5, Informative)
Re: (Score:2)
yeah. I thought that was odd too. It's not a Google product. I'm glad to see that it's not Google, and I'm also glad that it uses Google. It goes to show that allowing others to use your platform can help innovation.
There's a video here of a demo being performed for some VCs. [micello.com]
Pretty "lively" CEO. It's a good sign when the person pushing a product comes off as genuinely enthusiastic.
Re:Not google! (Score:4, Funny)
The normal slashdot reader doesn't bother with the articles, so why should the editors waste their time on something that will never be checked?
Re: (Score:2)
And, of course, Google doesn't log what you do using their Google Maps product....
Illegal reporting? (Score:4, Informative)
This is both incorrect, misleading, and illegal reporting. It uses Google Maps outside, and its own crap completely unrelated to Google inside. It's not "quite literally" Google Maps for inside places. It's a mapping tool, and Google Maps happens to also be a mapping tool. I don't think we need to use another company's trademarks to let people know what the hell a map is.
Re: (Score:2)
"Micello is quite literally Google maps for the insides of buildings," said Ankit Agarwal, founder and CEO of Micello
From TFA. And how the heck would that be illegal?
Re: (Score:2)
Misusing the word "literally" like that SHOULD be illegal...maybe that's what the OP had in mind.
You're right, I'm so angry right now I'm literally on fire.
Might be a little too far? (Score:3, Insightful)
Im all for freedom of information, but are they planning on publishing floor plans of private buildings too? That could be a severe security risk in some cases.
Re: (Score:2, Funny)
you will know when the google van crashes through your front door and starts mapping out your house.
Re: (Score:2)
Im all for freedom of information, but are they planning on publishing floor plans of private buildings too? That could be a severe security risk in some cases.
It seems to me they're only going to be doing this for public buildings,and only the areas where the public is welcome. Why would they publish the interiors of non-public buildings? If you need security clearance to get into an area, you probably aren't going to have to look online for a map of the place. They're not going to be mapping the private rooms of the whitehouse, because if you're in those areas you undoubtedly know the place.
I'd expect a big use of this would be airports, which your first reac
Re: (Score:2)
There are some buildings which are quasi public, that they might actually be able to publish.
True security clearance buildings would of course be off limits.
Re: (Score:2)
Perhaps not every county office has reached the 21st century yet, but my county has the floorplans(the filed ones, anyway) of every building in it.
It wouldn't take google very long to crawl and digest. They could probably even overlay it on the existing satellite imagery and get the scale right 3/4s of the time.
Re: (Score:2)
We don't know where they are getting the floor plans from. This is custom software separate from Google Maps. They might be getting public blue prints from the library for public places, but I doubt they can get them for private houses and places. Unless they use an x-ray device to see the insides of a house and make a map, they have to use blue prints.
Re: (Score:2)
It's fascinating what fifty bucks will get you at the county recorder's office.
Re: (Score:2)
"the last mile" (Score:5, Funny)
"Micello is quite literally Google maps for the insides of buildings," said Ankit Agarwal, founder and CEO of Micello. "We are mapping the last unchartered territory--the last mile--between the front door and where you are going."
Whoa. Big building.
Re: (Score:2)
The main loop on the campus here is about a mile long.
Re: (Score:3, Interesting)
Re: (Score:2)
I used to work in a building a mile long.
Go to Google maps and plug in: Ft Worth NAS, Texas
To the West of the runway, (and parallel to it), is an assembly building operated by Lockheed.
It is one mile from end to end.
Indoors? Sure...not! (Score:4, Insightful)
Even at 3AM... (Score:2)
...I think I can find the bathroom without Google's help.
Yea! (Score:2)
Now I can slap a cell phone onto a bear, to answer the question "Does a bear shit in the woods"!
And the corollary, "Does the pope shit in the woods".
I'm sure that I can borrow a couple of phones from other interested parties.
Jeez, where next? (Score:2)
They'll be wanting to map our insides next.
Look out for a Google Probe coming your way soon...
Re: (Score:2)
Fantastic... (Score:2, Funny)
welcome to the evil empire (Score:2)
Google is spreading like a plague with invasive technology. Eventually they'll just turn into SkyNet and overthrow the human race. Is there anything we can do to stop this company and their evil ambitions?
At least Microsoft isn't taking pictures of people's homes and posting them online without permission. Windows is a nasty DRM'd beast, but I can choose not to have my privacy violated by not using their software. In that sense Microsoft is less evil than Google.
your home is likely online, already (Score:4, Informative)
Re: (Score:2)
Oh really? And could you provide us with some examples? And this is for which country(ies)?
Asking for Trouble... (Score:2, Interesting)
Re: (Score:2)
We better pray that they hard-coded "Don't be Evil" into its source at assembly level.
Hate to be the one to tell you there is a nice convenient #ifdef...#endif around the "Don't be Evil" code.
danger time! (Score:2)
I know my dog would like to make a few more bucks to spend on hot dogs and rawhide chews. I can just see it now, my dog wandering around the house with a google camera backpack. I better close the door when I'm showering...
Sheldon
Slashdotters aced programming, failed geography (Score:2)
"Micello will only work in California, but they plan to expand to other major US cities during 2010."
If I need to explain this to you, please enter "MTV.com" in your address bar, hit Enter, and spend time on a site more in line with your intellectual capacity.
Re: (Score:2)
Ahhh, nuts! You beat me.
Brill (Score:2)
I'll bet they do (Score:2)
... and they can make me secretary of the pussy.
damn (Score:2)
Now you have to watch out for those Google Maps camera cars driving around in your kitchen and living room too.
Quashed by National Security? (Score:2)
I bet we could see this quashed by National Security. I mean, while it may be useful to have the ability to get walk-thru directions in a public building or forum, imagine how quickly security folks will want to see that feature disabled when someone 'important' will be in a public venue: political leader, rock star, etc.
Nokia does too (Score:2)
Nokia has been working on a map app which you would use when in larger malls. Currently, the app is only in Beta (if that) and only works for one mall in Finland, but they're one step ahead, regardless....
Re: (Score:2)
For more information about Nokia's efforts see: [nokia.com] [nokia.com]
Major US Cities (Score:3, Funny)
Street view already does this... (Score:2) [youtube.com]
Finding stuff in the grocery store (Score:2)
We could aggregate data from the Android comparison-shopping app, and use it to map out product locations in a store. Then you could walk into a grocery store and punch in "cream of coconut" and see where it is.
Of course, to make this practical, the store would have to be "indexed" frequently by lots of shopper activity. Or the store itself could cooperate and scan stuff into the map as they stock the shelves. But generally their motives are against giving you a direct route, since they want you to wander a
Re: (Score:2, Informative)
Re: (Score:3, Funny)
People in Lower Alabama resent the way Californians just take LA for their own use.
Re: (Score:2)
People who live in California but not in Los Angeles resent the way everyone seems to assume all of California is Los Angeles.
People FROM Los Angeles on the other hand resent the implication that LA is not its own continent.
Re: (Score:2)
Louisiana? Isn't that somewhere near Texarkana? I've heard of it.....
Re: (Score:2)
Re: (Score:2)
I thought Micello might push Google to try and claim that turf. Let's see who's spread thinner.
Target acquired (Score:3, Funny)
Works for me.
Re: (Score:2)
I wish there were a map for laptops or computers for non-high-tech people. That would be awesome
:)
Your Map:
Move the arrow thing to the place at the top of your internet. Backspace the letters and stuff with slashdot.org in it. Type in anything else, and click enter. | http://news.slashdot.org/story/09/09/30/2052258/Google-Wants-to-Map-Indoors-Too | CC-MAIN-2014-15 | refinedweb | 3,868 | 82.95 |
I'm having trouble getting a class method to assign an array parameter to a private array variable. I have the following array of 24 char elements declared:
char pl_Name[24] = "";
And I am trying to pass that same array to a function in my 'Character' class. However, I receive the following error from Dev-C++:
incompatible types in assignment of `char*' to `char[24]'
I get the error when I try to assign an array to another array. Here is the line of code with my error; it is the constructor for my class:
Character(char name[]) { m_name = name; }
The method receives the array just fine, but when I try to assign the name array to the private class variable m_name I receive the incompatible type error. I understand that arrays are always passed by reference, but I am unclear about how that should change my code.
Try char *name.
BAF.zone | SantaHack!
Tip: Use c++ strings if you can.
------------- Bah weep granah weep nini bong!
Don't use C++ Strings.
I am guessing you have a char m_name[24]; in the class. Correct? (if not, note so, as this assumes it to be the case. not enough info in your post to be wholly sure)
You do not copy C-style arrays by assignment. If that's what you're trying to do (copy), use one of the string copying functions (str*cpy). (and for other kinds of arrays, there is the memcpy function. a loop can also be used)
If you know the lengths of the arrays match or the destination is always at least as large as that copied from, you can use
strcpy(m_name, name);
otherwise (or bad things might happen), use
strncpy(m_name, name, whatever_the_length_of_m_name_is);
m_name[whatever_the_length_of_m_name_is - 1] = '\0';
where the latter of the two lines ensures the string is null-terminated if the whole string is overwritten. I'd recommend doing this here, as this character array has not previously been initialized. In cases where the elements have been initialized to all 0s (as in the number, not the character '0') or '\0' (amounts to the same value - 0) - or at least the very last character remains so - you can copy a maximum length one less than the size, knowing that a proper 0 will remain, and omit the latter line.
Using these C-string operation functions will (unless you include something else that includes the file or declares them) require a #include <cstring> somewhere before the code in question. And, this being C++, you'll need to either put using namespace std; afterwards or prefix those function names with std::
Oh yeah, I forgot. C++ strings would GREATLY simplify and make this much safer.
Don't use C++ Strings.
Care to explain why?
It loads a ton of shit.
"Code is like shit - it only smells if it is not yours"Allegro Wiki, full of examples and articles !!
Joel Pettersson: Thank you for your reply, using the strcpy() function compiled without error.
About C++ strings, I have been trying to get strings working with DIALOG arrays, but I cannot seem do so. I can't get anything other than an array of chars to be read from my d_edit_proc and actually change without causing my program to crash. Can I use other data types with my DIALOGS? Also, I want my strings to have a set size on them; I'm not sure if I can do that with strings or not.
string.c_str() returns 'const char*', i.e.:

```cpp
std::string c("World!");
printf("Hello %s", c.c_str());  // note: the member is c_str(), there is no str()
```
Ah, thank you for that. I can't wait to implement c_str()
I have another question, I'm hoping someone might be able to help me with. I have a d_edit_proc that could read my array of chars; the code below:
{ d_edit_proc, 30, 50, 270, 16, 0, 0, 0, 0, 24, 0, pl_Name, NULL, NULL }
I have 3 more d_edit_proc's declared, each using a different variable of the same data type to edit.
pl_Name used to be declared like this:
char pl_Name[24]
But I have changed pl_Name to be of the string data type. I can use c_str() and get my d_edit_proc to read the string, but I encounter some issues. The editable text-box I am now using looks like this:
{ d_edit_proc, 30, 50, 270, 16, 0, 0, 0, 0, 24, 0, (void*)pl_Name.c_str(), NULL, NULL },
My problems:

1. When I edit one text-box and then mouse over another, it copies the information from the first one, and puts that into the second one.
2. After I am finished with the dialog and I have entered all the data, all 4 variables hold the same data as my last variable.
Are you sure that you're keeping the name stored somewhere (that it's not just a local variable in a small method)?
OpenLayer has reached a random SVN version number ;) | Online manual | Installation video!| MSVC projects now possible with cmake | Now alvailable as a Dev-C++ Devpack! (Thanks to Kotori)
Ah, thank you for that. I can't wait to implement c_str()
But... huh? ... c_str() is already implemented as std::string::c_str()
How is my posting? - iPad POS
I think he just meant "start using it".
--- <-- Read it, newbies.
c_str() is a const char*, meaning you can't modify it. You will have to use a char array with d_edit_proc, or write your own version that works with std::string.
In this chapter, we start creating amazing GUIs using Python 3.6 and above. We will cover the following topics:
- Creating our first Python GUI
- Preventing the GUI from being resized
- Adding a label to the GUI form
- Creating buttons and changing their text property
- Text box widgets
- Setting the focus to a widget and disabling widgets
- Combo box widgets
- Creating a check button with different initial states
- Using radio button widgets
- Using scrolled text widgets
- Adding several widgets in a loop
In this chapter, we will develop our first GUI in Python. We will start with the minimum code required to build a running GUI application. Each recipe then adds different widgets to the GUI form.
In the first two recipes, we will show the entire code, consisting of only a few lines of code. In the following recipes, we will only show the code to be added to the previous recipes.
By the end of this chapter, we will have created a working GUI application that consists of labels, buttons, text boxes, combo boxes, check buttons in various states, as well as radio buttons that change the background color of the GUI.
At the beginning of each chapter, I will show the Python modules that belong to each chapter. I will then reference the different modules that belong to the code shown, studied and run.
Here is the overview of Python modules (ending in a .py extension) for this chapter:
Python is a very powerful programming language. It ships with the built-in
tkinter module. In only a few lines of code (four, to be precise) we can build our first Python GUI.
To follow this recipe, a working Python development environment is a prerequisite. The IDLE GUI, which ships with Python, is enough to start. IDLE was built using tkinter!
Note
- All the recipes in this book were developed using Python 3.6 on a Windows 10 64-bit OS. They have not been tested on any other configuration. As Python is a cross-platform language, the code from each recipe is expected to run everywhere.
- If you are using a Mac, it does come with built-in Python, yet it might be missing some modules such as tkinter, which we will use throughout this book.
- We are using Python 3.6, and the creator of Python intentionally chose not to make it backwards compatible with Python 2. If you are using a Mac or Python 2, you might have to install Python 3.6 from in order to successfully run the recipes in this book.
- If you really wish to run the code in this book on Python 2.7, you will have to make some adjustments. For example, tkinter in Python 2.x has an uppercase T. The Python 2.7 print statement is a function in Python 3.6 and requires parentheses.
- While the EOL (End Of Life) for the Python 2.x branch has been extended to the year 2020, I would strongly recommend that you start using Python 3.6 and above.
- Why hold on to the past, unless you really have to? The details are in Python Enhancement Proposal (PEP) 373, which documents the EOL schedule for Python 2.
Here are the four lines of
First_GUI.py required to create the resulting GUI:
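The listing of First_GUI.py is not reproduced in this extract; based on the description that follows, it is essentially this sketch. The line-number comments refer to the full source file (which also contains blank and comment lines), and the construction is wrapped in a `build_gui()` helper here, while the book's file uses plain module-level statements:

```python
import tkinter as tk               # line 9: import tkinter, aliased as tk

def build_gui():
    win = tk.Tk()                  # line 12: create an instance of the Tk class
    win.title("Python GUI")        # line 15: set the title property of the window
    return win

# line 20 in the full listing starts the endless event loop:
# build_gui().mainloop()
```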
Execute this code and admire the result:
In line nine, we import the built-in
tkinter module and alias it as
tk to simplify our Python code. In line 12, we create an instance of the
Tk class by calling its constructor (the parentheses appended to
Tk turns the class into an instance). We are using the alias
tk, so we don't have to use the longer word
tkinter. We are assigning the class instance to a variable named
win (short for a window). As Python is a dynamically typed language, we did not have to declare this variable before assigning to it, and we did not have to give it a specific type. Python infers the type from the assignment of this statement. Python is a strongly typed language, so every variable always has a type. We just don't have to specify its type beforehand like in other languages. This makes Python a very powerful and productive language to program in.
Note
A little note about classes and types:
- In Python, every variable always has a type. We cannot create a variable that does not have a type. Yet, in Python, we do not have to declare the type beforehand, as we have to do in the C programming language.
- Python is smart enough to infer the type. C#, at the time of writing this book, also has this capability. Using Python, we can create our own classes using the class keyword instead of the def keyword.
- In order to assign the class to a variable, we first have to create an instance of our class. We create the instance and assign this instance to our variable, for example:

  ```python
  class AClass(object):
      print('Hello from AClass')

  class_instance = AClass()
  ```

  Now, the variable class_instance is of the AClass type. If this sounds confusing, do not worry. We will cover OOP in the coming chapters.
In line 15, we use the instance variable (
win) of the class to give our window a title via the
title property. In line 20, we start the window's event loop by calling the
mainloop method on the class instance,
win. Up to this point in our code, we created an instance and set one property, but the GUI will not be displayed until we start the main event loop.
Note
- An event loop is a mechanism that makes our GUI work. We can think of it as an endless loop where our GUI is waiting for events to be sent to it. A button click creates an event within our GUI, or our GUI being resized also creates an event.
- We can write all of our GUI code in advance and nothing will be displayed on the user's screen until we call this endless loop (win.mainloop() in the preceding code). The event loop ends when the user clicks the red X button or a widget that we have programmed to end our GUI. When the event loop ends, our GUI also ends.
By default, a GUI created using tkinter can be resized. This is not always ideal. The widgets we place onto our GUI forms might end up being resized in an improper way, so in this recipe, we will learn how to prevent our GUI from being resized by the user of our GUI application.
This recipe extends the previous one, Creating our first Python GUI, so one requirement is to have typed the first recipe yourself into a project of your own, or to download the code from the book's code repository.
We are preventing the GUI from being resized, look at:
GUI_not_resizable.py
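The file itself is not shown in this extract; the change described below amounts to one added call, sketched here (again wrapped in a helper function, with the line number referring to the full source):

```python
import tkinter as tk

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")
    win.resizable(False, False)    # line 18: disable resizing in both dimensions
    return win

# build_gui().mainloop()           # start the event loop when running the script
```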
Running the code creates this GUI:
Line 18 prevents the Python GUI from being resized.
Running this code will result in a GUI similar to the one we created in the first recipe. However, the user can no longer resize it. Also, note how the maximize button in the toolbar of the window is grayed out.
Why is this important? Because once we add widgets to our form, resizing can make our GUI look not as good as we want it to be. We will add widgets to our GUI in the next recipes.
The
resizable() method is of the
Tk() class, and by passing in
(False, False), we prevent the GUI from being resized. We can disable both the x and y dimensions of the GUI from being resized, or we can enable one or both dimensions by passing in
True or any number other than zero.
(True, False) would enable the x-dimension but prevent the y-dimension from being resized.
We also added comments to our code in preparation for the recipes contained in this book.
A label is a very simple widget that adds value to our GUI. It explains the purpose of the other widgets, providing additional information. This can guide the user to the meaning of an Entry widget, and it can also explain the data displayed by widgets without the user having to enter data into it.
We are extending the first recipe, Creating our first Python GUI. We will leave the GUI resizable, so don't use the code from the second recipe (or comment the
win.resizable line out).
In order to add a
Label widget to our GUI, we will import the
ttk module from
tkinter. Please note the two import statements. Add the following code just above
win.mainloop(), which is located at the bottom of the first and second recipes:
GUI_add_label.py
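The listing is omitted in this extract; a sketch consistent with the description below (line-number comments refer to the full source file) is:

```python
import tkinter as tk
from tkinter import ttk            # line 10: themed widgets that extend tkinter

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")
    a_label = ttk.Label(win, text="A Label")   # line 19: create the label
    a_label.grid(column=0, row=0)              # place it with the grid manager
    return win

# build_gui().mainloop()
```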
Running the code adds a label to our GUI:
In line 10 of the preceding code, we import a separate module from the
tkinter package. The
ttk module has some advanced widgets that make our GUI look great. In a sense,
ttk is an extension within the
tkinter package.
We still need to import the
tkinter package itself, but we have to specify that we now want to also use
ttk from the
tkinter package.
Line 19 adds the label to the GUI, just before we call
mainloop .
We pass our window instance into the
ttk.Label constructor and set the text property. This becomes the text our
Label will display.
We also make use of the grid layout manager, which we'll explore in much more depth in Chapter 2, Layout Management.
Note how our GUI suddenly got much smaller than in the previous recipes.
The reason why it became so small is that we added a widget to our form. Without a widget, the
tkinter package uses a default size. Adding a widget causes optimization, which generally means using as little space as necessary to display the widget(s).
If we make the text of the label longer, the GUI will expand automatically. We will cover this automatic form size adjustment in a later recipe in Chapter 2, Layout Management.
In this recipe, we will add a button widget, and we will use this button to change a property of another widget that is a part of our GUI. This introduces us to callback functions and event handling in a Python GUI environment.
This recipe extends the previous one, Adding a label to the GUI form. You can download the entire code from the book's code repository.
We add a button that, when clicked, performs an action. In this recipe, we will update the label we added in the previous recipe as well as the text property of the button:
GUI_create_button_change_property.py
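The listing is not reproduced here; a condensed sketch matching the description below (the book's file uses module-level variables, while a closure is used here so the example is self-contained; line-number comments refer to the full source) is:

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")

    a_label = ttk.Label(win, text="A Label")     # line 19: keep a reference
    a_label.grid(column=0, row=0)                # line 20: position the label

    def click_me():                              # line 23: button event handler
        action.configure(text="** I have been Clicked! **")
        a_label.configure(foreground='red', text='A Red Label')

    # line 29: note that click_me is passed without parentheses
    action = ttk.Button(win, text="Click Me!", command=click_me)
    action.grid(column=1, row=0)                 # line 30
    return win

# build_gui().mainloop()
```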
The following screenshot shows how our GUI looks before clicking the button:
After clicking the button, the color of the label changed and so did the text of the button, which can be seen as follows:
In line 19, we assign the label to a variable, and in line 20, we use this variable to position the label within the form. We need this variable in order to change its properties in the
click_me() function. By default, this is a module-level variable, so we can access it inside the function, as long as we declare the variable above the function that calls it.
Line 23 is the event handler that is invoked once the button gets clicked.
In line 29, we create the button and bind the command to the
click_me() function.
Note
GUIs are event-driven. Clicking the button creates an event. We bind what happens when this event occurs in the callback function using the command property of the
ttk.Button widget. Notice how we do not use parentheses, only the name
click_me.
We also change the text of the label to include
red as, in the printed book, this might otherwise not be obvious. When you run the code, you can see that the color does indeed change.
Lines 20 and 30 both use the grid layout manager, which will be discussed in the following chapter. This aligns both the label and the button.
In tkinter, the typical one-line textbox widget is called Entry. In this recipe, we will add such an Entry widget to our GUI. We will make our label more useful by describing what the Entry widget is doing for the user.
GUI_textbox_widget.py
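The listing is omitted in this extract; a sketch consistent with the explanation below (line-number comments refer to the full source file) is:

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")

    a_label = ttk.Label(win, text="Enter a name:")   # line 27: describes the box
    a_label.grid(column=0, row=0)

    name = tk.StringVar()                            # line 30: tkinter variable
    name_entered = ttk.Entry(win, width=12, textvariable=name)  # line 31
    name_entered.grid(column=0, row=1)

    def click_me():
        action.configure(text='Hello ' + name.get())  # line 24: read the Entry

    action = ttk.Button(win, text="Click Me!", command=click_me)
    action.grid(column=1, row=1)                     # button next to the Entry
    return win

# build_gui().mainloop()
```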
Now, our GUI looks like this:
After entering some text and clicking the button, there is the following change in the GUI:
In line 24, we get the value of the Entry widget. We have not used OOP yet, so how come we can access the value of a variable that was not even declared yet?
Without using OOP classes, in Python procedural coding, we have to physically place a name above a statement that tries to use that name. So how come this works (it does)?
The answer is that the button click event is a callback function, and by the time the button is clicked by a user, the variables referenced in this function are known and do exist.
Life is good.
Line 27 gives our label a more meaningful name; for now, it describes the text box below it. We moved the button down next to the label to visually associate the two. We are still using the grid layout manager, which will be explained in more detail in Chapter 2, Layout Management.
Line 30 creates a variable,
name. This variable is bound to the Entry widget and, in our
click_me() function, we are able to retrieve the value of the Entry widget by calling
get() on this variable. This works like a charm.
Now we see that while the button displays the entire text we entered (and more), the textbox Entry widget did not expand. The reason for this is that we hardcoded it to a width of 12 in line 31.
Note
- Python is a dynamically typed language and infers the type from the assignment. What this means is that if we assign a string to the name variable, it will be of the string type, and if we assign an integer to name, its type will be integer.
- Using tkinter, we have to declare the name variable as the type tk.StringVar() before we can use it successfully. The reason is that tkinter is not Python. We can use it from Python, but it is not the same language.
While our GUI is nicely improving, it would be more convenient and useful to have the cursor appear in the Entry widget as soon as the GUI appears. Here we learn how to do this.
Python is truly great. All we have to do to set the focus to a specific control when the GUI appears is call the
focus() method on an instance of a
tkinter widget we previously created. In our current GUI example, we assigned the
ttk.Entry class instance to a variable named name_entered. Now, we can give it the focus.
Place the following code just above the code which is located at the bottom of the module and which starts the main window's event loop, like we did in the previous recipes:
GUI_set_focus.py
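The added code amounts to a single call; shown here in a reduced, self-contained sketch (the line-number comment refers to the full source file):

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    name = tk.StringVar()
    name_entered = ttk.Entry(win, width=12, textvariable=name)
    name_entered.grid(column=0, row=1)
    name_entered.focus()       # line 38: cursor starts in the Entry widget
    return win

# build_gui().mainloop()
```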
If you get some errors, make sure you are placing calls to variables below the code where they are declared. We are not using OOP as of yet, so this is still necessary. Later, it will no longer be necessary to do this.
Note
On a Mac, you might have to set the focus to the GUI window first before being able to set the focus to the Entry widget in this window.
Adding this one line (38) of Python code places the cursor in our text Entry widget, giving the text Entry widget the focus. As soon as the GUI appears, we can type into this text box without having to click it first.
We can also disable widgets. To do that, we will set a property on the widget. We can make the button disabled by adding this one line (37 below) of Python code to create the button:
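The line itself is not spelled out in this extract; given the action variable used for the button in the previous recipes, it is presumably a state change like the one in this sketch:

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    action = ttk.Button(win, text="Click Me!")
    action.grid(column=1, row=0)
    action.configure(state='disabled')   # the button now ignores all clicks
    return win

# build_gui().mainloop()
```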
After adding the preceding line of Python code, clicking the button no longer creates any action:
This code is self-explanatory. We set the focus to one control and disable another widget. Good naming in programming languages helps to eliminate lengthy explanations. Later in this book, there will be some advanced tips on how to do this while programming at work or practicing our programming skills at home.
In this recipe, we will improve our GUI by adding drop-down combo boxes which can have initial default values. While we can restrict the user to only certain choices, we can also allow the user to type in whatever they wish.
This recipe extends the previous recipe, Setting the focus to a widget and disabling widgets.
We insert another column between the Entry widget and the
Button widget using the grid layout manager. Here is the Python code:
GUI_combobox_widget.py
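The listing is omitted in this extract; a sketch consistent with the explanation below (line-number comments refer to the full source file) is:

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")

    ttk.Label(win, text="Choose a number:").grid(column=1, row=0)  # line 40

    number = tk.StringVar()                                  # line 41
    number_chosen = ttk.Combobox(win, width=12,
                                 textvariable=number)        # line 42
    number_chosen['values'] = (1, 2, 4, 42, 100)             # line 43: defaults
    number_chosen.grid(column=1, row=1)                      # line 44
    number_chosen.current(0)                                 # line 45: show "1"
    return win

# build_gui().mainloop()
```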
This code, when added to the previous recipes, creates the following GUI. Note how, in line 43 in the preceding code, we assigned a tuple with default values to the combo box. These values then appear in the drop-down box. We can also change them if we like (by typing in different values when the application is running):
Line 40 adds a second label to match the newly created combo box (created in line 42). Line 41 assigns the value of the box to a variable of a special tkinter type
StringVar, as we did in a previous recipe.
Line 44 aligns the two new controls (label and combobox) within our previous GUI layout, and line 45 assigns a default value to be displayed when the GUI first becomes visible. This is the first value of the
number_chosen['values'] tuple, the string
"1". We did not place quotes around our tuple of integers in line 43, but they got casted into strings because, in line 41, we declared the values to be of the
tk.StringVar type.
The preceding screenshot shows the selection made by the user as
42. This value gets assigned to the
number variable.
If we want to restrict the user to only be able to select the values we have programmed into the
Combobox, we can do that by passing the
state property into the constructor. Modify line 42 as follows:
GUI_combobox_widget_readonly_plus_display_number.py
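The modified line is not shown in this extract; given the constructor from line 42, it presumably becomes the call in this sketch:

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()
    number = tk.StringVar()
    # state='readonly' limits the user to the values we program in
    number_chosen = ttk.Combobox(win, width=12, textvariable=number,
                                 state='readonly')
    number_chosen['values'] = (1, 2, 4, 42, 100)
    number_chosen.grid(column=1, row=1)
    number_chosen.current(0)
    return win

# build_gui().mainloop()
```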
Now, users can no longer type values into the
Combobox. We can display the value chosen by the user by adding the following line of code to our Button Click Event Callback function:
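The added callback line is not shown in this extract; it presumably concatenates the two variable values, as in this sketch (names follow the earlier recipes):

```python
import tkinter as tk
from tkinter import ttk

def build_gui():
    win = tk.Tk()

    name = tk.StringVar()
    ttk.Entry(win, width=12, textvariable=name).grid(column=0, row=1)

    number = tk.StringVar()
    number_chosen = ttk.Combobox(win, width=12, textvariable=number,
                                 state='readonly')
    number_chosen['values'] = (1, 2, 4, 42, 100)
    number_chosen.grid(column=1, row=1)
    number_chosen.current(0)

    def click_me():
        # show both the entered name and the chosen number on the button
        action.configure(text='Hello ' + name.get() + ' ' + number.get())

    action = ttk.Button(win, text="Click Me!", command=click_me)
    action.grid(column=2, row=1)
    return win

# build_gui().mainloop()
```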
After choosing a number, entering a name, and then clicking the button, we get the following GUI result, which now also displays the number selected:
In this recipe, we will add three check button widgets, each with a different initial state.
We are creating three check button widgets that differ in their states. The first is disabled and has a check mark in it. The user cannot remove this check mark as the widget is disabled.
The second check button is enabled, and by default, has no check mark in it, but the user can click it to add a check mark.
The third check button is both enabled and checked by default. The users can uncheck and recheck the widget as often as they like. Look at the following code:
GUI_checkbutton_widget.py
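The listing is omitted in this extract; a sketch consistent with the explanation below (line-number comments refer to the full source file; tk.Checkbutton is used rather than ttk because select() and deselect() are classic-widget methods) is:

```python
import tkinter as tk

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")

    chk1 = tk.IntVar()                                     # line 47
    check1 = tk.Checkbutton(win, text="Disabled", variable=chk1,
                            state='disabled')
    check1.select()                                        # line 49: check it
    check1.grid(column=0, row=4, sticky=tk.W)              # align west (left)

    chk2 = tk.IntVar()                                     # line 52
    check2 = tk.Checkbutton(win, text="UnChecked", variable=chk2)
    check2.deselect()                                      # enabled, unchecked
    check2.grid(column=1, row=4, sticky=tk.W)

    chk3 = tk.IntVar()                                     # line 57
    check3 = tk.Checkbutton(win, text="Enabled", variable=chk3)
    check3.select()                                        # line 59: check it
    check3.grid(column=2, row=4, sticky=tk.W)
    return win

# build_gui().mainloop()
```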
Running the new code results in the following GUI:
In lines 47, 52, and 57 we create three variables of the
IntVar type. In the line following each of these variables, we create a
Checkbutton, passing in these variables. They will hold the state of the
Checkbutton (unchecked or checked). By default, that is either 0 (unchecked) or 1 (checked), so the type of the variable is a
tkinter integer.
We place these
Checkbutton widgets in our main window, so the first argument passed into the constructor is the parent of the widget, in our case,
win. We give each
Checkbutton widget a different label via its
text property.
Setting the sticky property of the grid to
tk.W means that the widget will be aligned to the west of the grid. This is very similar to Java syntax and it means that it will be aligned to the left. When we resize our GUI, the widget will remain on the left side and not be moved towards the center of the GUI.
Lines 49 and 59 place a checkmark into the
Checkbutton widget by calling the
select() method on these two
Checkbutton class instances.
We continue to arrange our widgets using the grid layout manager, which will be explained in more detail in Chapter 2, Layout Management.
In this recipe, we will create three tkinter
Radiobutton widgets. We will also add some code that changes the color of the main form, depending upon which
Radiobutton is selected.
This recipe extends the previous recipe, Creating a check button with different initial states.
We add the following code to the previous recipe:
GUI_radiobutton_widget.py
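The listing is omitted in this extract; a sketch consistent with the explanation below (line-number comments refer to the full source file) is:

```python
import tkinter as tk

COLOR1 = "Blue"    # lines 75-77: module-level tkinter color names
COLOR2 = "Gold"
COLOR3 = "Red"

def build_gui():
    win = tk.Tk()
    win.title("Python GUI")

    radio_var = tk.IntVar()            # line 87: one variable for all buttons

    def radio_call():                  # line 80: change the form's background
        selection = radio_var.get()
        if selection == 1:
            win.configure(background=COLOR1)
        elif selection == 2:
            win.configure(background=COLOR2)
        elif selection == 3:
            win.configure(background=COLOR3)

    # lines 89-96: three radio buttons sharing radio_var
    rad1 = tk.Radiobutton(win, text=COLOR1, variable=radio_var, value=1,
                          command=radio_call)
    rad1.grid(column=0, row=5, sticky=tk.W)
    rad2 = tk.Radiobutton(win, text=COLOR2, variable=radio_var, value=2,
                          command=radio_call)
    rad2.grid(column=1, row=5, sticky=tk.W)
    rad3 = tk.Radiobutton(win, text=COLOR3, variable=radio_var, value=3,
                          command=radio_call)
    rad3.grid(column=2, row=5, sticky=tk.W)
    return win

# build_gui().mainloop()
```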
Running this code and selecting the
Radiobutton named
Gold creates the following window:
In lines 75-77, we create some module-level global variables which we will use in the creation of each radio button as well as in the callback function that creates the action of changing the background color of the main form (using the instance variable
win).
We are using global variables to make it easier to change the code. By assigning the name of the color to a variable and using this variable in several places, we can easily experiment with different colors. Instead of doing a global search-and-replace of a hardcoded string (which is prone to errors), we just need to change one line of code and everything else will work. This is known as the DRY principle, which stands for Don't Repeat Yourself. This is an OOP concept which we will use in the later recipes of the book.
Note
The names of the colors we are assigning to the variables (
COLOR1,
COLOR2 ...) are
tkinter keywords (technically, they are symbolic names). If we use names that are not
tkinter color keywords, then the code will not work.
Line 80 is the callback function that changes the background of our main form (
win) depending upon the user's selection.
In line 87 we create a
tk.IntVar variable. What is important about this is that we create only one variable to be used by all three radio buttons. As can be seen from the screenshot, no matter which
Radiobutton we select, all the others will automatically be unselected for us.
Lines 89 to 96 create the three radio buttons, assigning them to the main form, passing in the variable to be used in the callback function that creates the action of changing the background of our main window.
Here is a small sample of the available symbolic color names, which you can look up in the official Tcl manual.
Some of the names create the same color, so
alice blue creates the same color as
AliceBlue. In this recipe, we used the symbolic names
Blue,
Gold, and
Red.
ScrolledText widgets are much larger than simple
Entry widgets and span multiple lines. They are widgets like Notepad and wrap lines, automatically enabling vertical scrollbars when the text gets larger than the height of the
ScrolledText widget.
This recipe extends the previous recipe, Using radio button widgets. You can download the code for each chapter from the book's code repository.
By adding the following lines of code, we create a
ScrolledText widget:
GUI_scrolledtext_widget.py
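The listing itself (GUI_scrolledtext_widget.py) is not reproduced here; based on the line-by-line description that follows, the relevant additions look roughly like this sketch (variable names such as scrol_w are assumptions, not necessarily the book's exact code):

```python
import tkinter as tk
from tkinter import scrolledtext      # line 11: the module containing ScrolledText

win = tk.Tk()

scrol_w = 30                          # line 100: width, in characters (assumed name)
scrol_h = 3                           # line 101: height, in lines (assumed name)
scr = scrolledtext.ScrolledText(win, width=scrol_w, height=scrol_h,
                                wrap=tk.WORD)            # line 102
scr.grid(column=0, columnspan=3)      # span all three columns of the grid

win.mainloop()
```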
We can actually type into our widget, and if we type enough words, the lines will automatically wrap around:
Once we type in more words than the height the widget can display, the vertical scrollbar becomes enabled. This all works out-of-the-box without us needing to write any more code to achieve this:
In line 11, we import the module that contains the ScrolledText widget class. Add this to the top of the module, just below the other two import statements.
Lines 100 and 101 define the width and height of the ScrolledText widget we are about to create. These are hardcoded values we are passing into the ScrolledText widget constructor in line 102.
These values are magic numbers found by experimentation to work well. You might experiment by changing scol_w from 30 to 50 and observe the effect!
In line 102, we are also setting a property on the widget by passing in wrap=tk.WORD. By setting the wrap property to tk.WORD we are telling the ScrolledText widget to break lines by words so that we do not wrap around within a word. The default option is tk.CHAR, which wraps any character regardless of whether we are in the middle of a word.
The second screenshot shows that the vertical scrollbar moved down because we are reading a longer text that does not entirely fit into the x, y dimensions of the ScrolledText control we created.
Setting the columnspan property of the grid widget to 3 for the ScrolledText widget makes this widget span all three columns. If we do not set this property, our ScrolledText widget would only reside in column one, which is not what we want.
So far, we have created several widgets of the same type (for example, Radiobutton) by basically copying and pasting the same code and then modifying the variations (for example, the column number). In this recipe, we start refactoring our code to make it less redundant.
We are refactoring some parts of the previous recipe's code, Using scrolled text widgets, so you will need that code to apply this recipe to it.
Here's how we refactor our code:
GUI_adding_widgets_in_loop.py
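As with the previous listing, GUI_adding_widgets_in_loop.py is not reproduced here; a rough sketch matching the description that follows (names like colors, radVar, and radCall are assumptions reconstructed from the text):

```python
import tkinter as tk

win = tk.Tk()

colors = ["Blue", "Gold", "Red"]       # line 77: the globals turned into a list

def radCall():                         # line 82: the modified callback
    radSel = radVar.get()
    win.configure(background=colors[radSel])

radVar = tk.IntVar()
radVar.set(99)                         # line 89: default outside the 0..2 range,
                                       # so no button starts out selected

for col in range(3):                   # line 95: one loop instead of three copies
    curRad = tk.Radiobutton(win, text=colors[col], variable=radVar,
                            value=col, command=radCall)
    curRad.grid(column=col, row=5)

win.mainloop()
```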
Running this code will create the same window as before, but our code is much cleaner and easier to maintain. This will help us when we expand our GUI in the coming recipes.
In line 77, we have turned our global variables into a list.
In line 89, we set a default value to the tk.IntVar variable that we named radVar. This is important because, while in the previous recipe we had set the value for Radiobutton widgets starting at 1, in our new loop it is much more convenient to use Python's zero-based indexing. If we did not set the default value to a value outside the range of our Radiobutton widgets, one of the radio buttons would be selected when the GUI appears. While this in itself might not be so bad, it would not trigger the callback and we would end up with a radio button selected that does not do its job (that is, change the color of the main win form).
In line 95, we replace the three previously hardcoded creations of the Radiobutton widgets with a loop that does the same. It is just more concise (fewer lines of code) and much more maintainable. For example, if we want to create 100 instead of just three Radiobutton widgets, all we have to change is the number inside Python's range operator. We would not have to type or copy and paste 97 sections of duplicate code, just one number.
Line 82 shows the modified callback function. | https://www.packtpub.com/product/python-gui-programming-cookbook-second-edition/9781787129450 | CC-MAIN-2022-27 | refinedweb | 4,565 | 71.24 |
simple_cache 0.35
A simple caching utility in Python 3
simple_cache uses the pickle module to write any key : value pairs to a file on disk.
It was written as an easy way to cache http requests for local use. It can possibly be used for caching any data, as long as the keys are hashable and the values are pickleable.
It also provides a decorator to cache function calls directly.
Requirements
Only standard libraries are used, so there are no dependencies.
Installing
pip install simple_cache
Or if you like, you can just download the simple_cache.py file and import it locally.
Usage
Each cache file contains a single dictionary, acting as the namespace for that cache. Within the file, you can set and retrieve any key : value pairs as needed.
When setting a key, you must give a ttl value, or time to live, in seconds. This value determines the amount of time that value will be considered valid. After that, the value is considered expired, and will not be returned.
Calls to a non-existent cache file, a non-existent key, or an expired key all return None.
You can set a key with a new value before or after it expires.
Whenever you ask the cache for a value, and it happens to be expired, the item is deleted from the file. You can also manually ask the cache file at any time, to prune all currently expired items.
API
import simple_cache
Using the decorator format:
Using the same cache file for multiple functions with a decorator might cause problems. The decorator uses the *args, **kwargs of the function as a key, so calling different functions with the same arguments will cause a clash.
You can specify a custom filename (and ttl) with the decorator format, overriding the default values.
Please note that the decorator format only supports args and kwargs with immutable types. If one of your arguments is mutable (e.g. a list, or a dictionary), the decorator won’t work.
@simple_cache.cache_it() # uses defaults: filename = "simple.cache", ttl = 3600 def some_function(*args, **kwargs): # body return value
@simple_cache.cache_it(filename="some_function.cache", ttl=120) def some_function(*args, **kwargs): # body return value
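The clash warned about above can be demonstrated with a toy in-memory version of the same keying scheme. This is not the library's code, just an illustration of what happens when the cache key is built from the call arguments alone:

```python
shared = {}

def cache_it(func):
    """Toy decorator: keys the shared cache on (args, kwargs) only."""
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in shared:
            shared[key] = func(*args, **kwargs)
        return shared[key]
    return wrapper

@cache_it
def double(n):
    return n * 2

@cache_it
def square(n):
    return n * n

print(double(3))  # 6, computed and cached under the key ((3,), ())
print(square(3))  # 6 (!) clashes with double's entry, since the key ignores the function
```

This is why the README recommends a separate cache file per decorated function.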
Using the module functions:
Setting a key and value:
simple_cache.save_key(filename, key, value, ttl)
Retrieving a value:
simple_cache.load_key(filename, key)
Pruning all expired items in a file:
simple_cache.prune_cache(filename)
Loading the whole cache dictionary from a file (possibly for debugging or introspection):
simple_cache.read_cache(filename)
Writing a whole dictionary to a file, overwriting any previous data in the file (possibly for initializing a cache by batch writing multiple items):
simple_cache.write_cache(filename, cache)
filename is a string containing a valid filename
key is any hashable type, and must be unique within each cache file (otherwise will overwrite)
value is any Python type supported by the pickle module
ttl is an integer or float, denoting the number of seconds that the item will remain valid before it expires
cache is a dictionary containing the key:value pairs
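Since the point is the semantics rather than the exact implementation, here is a minimal self-contained sketch of save_key/load_key/read_cache built on pickle and time, matching the behavior described above (expired or missing keys return None). This is an illustration, not the library's actual source:

```python
import os, pickle, tempfile, time

def read_cache(filename):
    # A missing or empty cache file behaves like an empty dictionary
    try:
        with open(filename, "rb") as f:
            return pickle.load(f)
    except (OSError, EOFError):
        return {}

def save_key(filename, key, value, ttl):
    cache = read_cache(filename)
    cache[key] = (value, time.time() + ttl)   # store the expiry timestamp with the value
    with open(filename, "wb") as f:
        pickle.dump(cache, f)

def load_key(filename, key):
    item = read_cache(filename).get(key)
    if item is None:
        return None
    value, expires = item
    return value if time.time() < expires else None   # expired -> None

path = os.path.join(tempfile.gettempdir(), "demo.cache")
save_key(path, "answer", 42, ttl=60)
print(load_key(path, "answer"))        # 42
save_key(path, "stale", "x", ttl=-1)   # already expired when written
print(load_key(path, "stale"))         # None
os.remove(path)
```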
License
simple_cache is open sourced under GPLv3.
- Author: barisumog
I am designing/creating an application in React, but I need some help structuring the whole thing.
The app I want to create is built as follows:
Now, I have a couple questions about how to structure this, and what react plugins or frameworks come in handy:
I guess you can have a layout like this:
<App> <Navbar /> { /* Here you show the username also, this view will depend on props */ } <Layout /> { /* Here you may have the content and the sidenav */ } </App>
App is the top component, the main container that passes props down. For example, in the render() you will have:
// You should treat this as a Page component, the one that grabs
// data and passes it as props to other components
export default class AppPage extends React.Component {
  render() {
    return (<AppLayout { ...this.props } />);
  }
}

// In a separate file (each file can have only one default export);
// a render() must return a single root element, hence the wrapping div
export default class AppLayout extends React.Component {
  render() {
    return (
      <div>
        <Navbar someProp={ this.props.someProp } />
        { this.props.children }
      </div>
    );
  }
}
Router can be for example:
<Router history={ browserHistory }> <Route path="/" component={ App }> <IndexRoute component={ Landing } /> <Route path="/index" component={ Layout } /> </Route> </Router>
The IndexRoute is what renders at '/', and you can say which components should be used on which route. For example, if you route to /index, { this.props.children } in App will render the Layout component.
I suggest you read this article: to better understand how React works... Hope it helped
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series.
I've combined two of the planned parts this time, what was in the list from last time as parts 7 and 8.
We can implement loops using recursion "manually", but adding lambdas now should make it possible to create a slightly cleaner version by actually defining a "while" function:
(defun while (cond body) (if (call cond ()) (do (call body ()) (while cond body) ())) )
In a way, this isn't so far from Ruby blocks, except that we don't provide the syntactic sugar that allows the blocks to be defined without any extra vocabulary, so in use our "while" function would look like this:
(while (lambda () (cond)) (lambda () (body)))
Not terrible, and far better than earlier, but still not very clean. We'll do better shortly, but for now lets start with a pre-requisite that prevents the code above from actually working in any meaningful sense of the word:
We finally have to add support for using the arguments passed.
In our Ruby based syntax, the while function using function arguments looks like this:
[:defun, :while, [:cond, :body], [:if, [:apply, :cond, []], [:do, [:apply, :body, []], [:while, :cond, :body] ], [] ] ]
As a sidebar, indirectly this also means we're providing a hackish way of providing local variables. "Proper" local variables is largely syntactic sugar again. Most stuff in programming is syntactic sugar:
(call (lambda (i) (code here)) (0))
The code above defines the local variable (i), visible to the code inside, and initializes it to 0. Not pretty, but as usual we defer pretty until later. As usual it's also highly inefficient, since it depends on creating a brand new function, but we'll deal with that later.
But let's move on and actually figure out what changes to make to access the function arguments. These preparations also pave the way for proper local variables and more down the line.
First we'll add a "scope" object, and do some minor refactoring. The "scope" defines which sets of variables are actually visible to use at any time.
class Function attr_reader :args,:body def initialize args,body @args = args @body = body end end class Scope def initialize compiler,func @c = compiler @func = func end def get_arg a a = a.to_sym @func.args.each_with_index {|arg,i| return [:arg,i] if arg == a } return [:atom,a] end end
The reason we separate Function and Scope here is that I'll later introduce scopes that match other things than functions, such as classes etc.
Part of the refactoring mentioned is to thread the scope objects through the compiler, so that scope changes are transparent (by just passing the new scope down).
We hook Scope#get_arg into the compiler's #get_arg by changing:
return [:atom, a] if (a.is_a?(Symbol))
into this:
return scope.get_arg(a) if (a.is_a?(Symbol))
#output_functions changes into this, to start a new scope for each function:
def output_functions
  @global_functions.each do |name,func|
    puts ".globl #{name}"
    puts ".type #{name}, @function"
    puts "#{name}:"
    puts "\tpushl %ebp"
    puts "\tmovl %esp, %ebp"
    compile_exp(Scope.new(self,func),func.body)
    puts "\tleave"
    puts "\tret"
    puts "\t.size #{name}, .-#{name}"
    puts
  end
end
And #compile_defun changes to create a proper Function object instead of just an array of arguments and the body:
def compile_defun scope,name, args, body @global_functions[name] = Function.new(args,body) return [:subexpr] end
Then we need to actually support accessing the arguments. Again we resort to "gcc -S" to find out how. This:
void bar(const char * str, unsigned long arg, unsigned long arg2) { printf(str,arg,arg2); }
turns into:
bar: pushl %ebp movl %esp, %ebp subl $24, %esp movl 16(%ebp), %eax movl %eax, 8(%esp) movl 12(%ebp), %eax movl %eax, 4(%esp) movl 8(%ebp), %eax movl %eax, (%esp) call printf leave ret
As you can see, the arguments are accessed relative to %ebp, which has been loaded with a copy of %esp at the beginning of the function. Why the offset of 8 for the first argument? Well, the return address gets pushed onto the stack, creating an offset of 4, and then the old value of %ebp is pushed on, giving us an offset of 8 to access the arguments. Remember that %esp grows down in memory, which is why we're adding offsets to get past the last entries pushed onto the stack.
That leads to this addition to #compile_eval_arg as the last "if" check:
if atype == :arg puts "\tmovl\t#{PTR_SIZE*(aparam+2)}(%ebp),%eax" end
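The offset arithmetic in that snippet is worth spelling out. With PTR_SIZE assumed to be 4 on this 32-bit target, argument i lives PTR_SIZE*(i+2) bytes above %ebp:

```ruby
PTR_SIZE = 4

# Stack layout just after the prologue (pushl %ebp; movl %esp, %ebp):
#   0(%ebp)  saved %ebp
#   4(%ebp)  return address
#   8(%ebp)  first argument, then one word per argument
def arg_offset(aparam)
  PTR_SIZE * (aparam + 2)
end

puts arg_offset(0)   # 8,  matching 8(%ebp) in the gcc output above
puts arg_offset(2)   # 16, matching 16(%ebp)
```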
Last but not least we create a Function object for "main" by modifying the call to #compile_exp in #compile_main:
@main = Function.new([],[]) compile_exp(Scope.new(self,@main),exp)
Time to test how it works:
prog = [:do, [:defun,:myputs,[:foo],[:puts,:foo]], [:myputs,"Demonstrating argument passing"], ]
Then:
$ ruby step7.rb >step7.s $ make step7 cc step7.s -o step7 $ ./step7 Demonstrating argument passing
The latest version is here | https://hokstad.com/writing-a-compiler-in-ruby-bottom-up-step-7 | CC-MAIN-2021-21 | refinedweb | 838 | 58.32 |
Create a password protected zip file using zlib and ZipEngine
First you need to download zlib package from here and ZipEngine from here.
Once you have the sources, put all the files in your project folder. There are no duplicate file names, so you can keep all the files in the same path. As said before, ZipEngine is a wrapper over zlib. This means using this wrapper you don't need to use zlib directly, but its sources (or a precompiled library) are needed.
The first step is to create a new zip file or open an existing one with the following code:
#include "zip.h" zipFile hZipFile; m_ZipFile = zipOpen("myfile.zip", APPEND_STATUS_CREATE); if(m_hZipFile == NULL) { // Error here }
The flags available for open a zip file are:
- APPEND_STATUS_CREATE
Create a new zip file from scratch
- APPEND_STATUS_CREATEAFTER
If the file exists, the zip will be created at the end of the file (useful if the file contains self-extractor code)
- APPEND_STATUS_ADDINZIP
Open an existing zip file and add the new files inside it
Now that we have our zip handle, we can proceed to add files inside the zip just created. Basically the procedure used by ZipEngine is to create a new item inside the zip with the name of the file we want to insert. Once you have successfully created this new item, you can write the content of the file connected to the item name. However, since in our example we want to protect our zip file with a password, we also need the CRC value of the file content we are going to insert. This means we have to precalculate the CRC by reading all the file content with the following algorithm:
unsigned long CRC = crc32(0L, Z_NULL, 0); const unsigned int BufferSize = 10000; unsigned char Buffer[BufferSize]; size_t nRead; FILE* pFile; pFile = fopen("myfile.bin", "rb"); while((nRead = fread(Buffer, BufferSize, 1, pFile)) > 0) { CRC = crc32(CRC, Buffer, nRead); } fclose(pFile);
Now that we have our CRC number, we can proceed to create a new item inside the zip with the name of the file we want to store inside:
zip_fileinfo zipInfo; int Result; zipInfo.dosDate = 0; zipInfo.tmz_date.tm_sec = 0; zipInfo.tmz_date.tm_min = 10; zipInfo.tmz_date.tm_hour = 12; zipInfo.tmz_date.tm_mday = 01; zipInfo.tmz_date.tm_mon = 05; zipInfo.tmz_date.tm_year = 2012; zipInfo.internal_fa = 0; zipInfo.external_fa = 0; Result = zipOpenNewFileInZip3(hZipFile, "myfile.bin", &zipInfo, NULL, 0, NULL, 0, NULL, Z_DEFLATED, Z_DEFAULT_COMPRESSION, 0, 15, 8, Z_DEFAULT_STRATEGY, "mypassword", CRC ); if(Result != ZIP_OK) { // Error }
The function for creating a new item takes a lot of params. If you want more info regarding the meaning of these params you can check the documentation provided with the ZipEngine package. Anyway, if you are not interested in reading boring info regarding how many ways you have to "customize" a zip file and want to simply create a normal zip file, you can take the params above and live happy. The zipInfo structure must be filled with the date and time of the item you want saved as file info (in the example we used a random date and time). Once you have successfully created the item, you can proceed to fill it with the content of the file:
const unsigned int BufferSize = 10000; unsigned char Buffer[BufferSize]; size_t nRead; FILE* pFile; pFile = fopen("myfile.bin", "rb"); while(Result == ZIP_OK && (nRead = fread(Buffer, BufferSize, 1, pFile)) > 0) { Result = zipWriteInFileInZip(hZipFile, Buffer, nRead); } fclose(pFile);
Operation done. Now close the item:
zipCloseFileInZip(hZipFile);
and, if you have no other file to add, close the zip file itself:
zipClose(hZipFile, NULL);
This is a really simple tutorial. As already said, if you are interested in more info regarding the possible params to change in zip file management, you can check the documentation provided with both products. If you are not interested at all, simply take these pieces of code and use them in your software. Enjoy!
Should I compile this code on C or C++?
MinGW also gave me a bunch of errors. I guess that something is missing.
The code is in C style so you can compile it as either C or C++. The snippets are very simple; which kind of errors do you have?
Great thanks for the code! You saved my time.
But I tried to zip non-empty file - without success.
The decision was to replace
fread(Buffer, BufferSize, 1, pFile) to:
fread(Buffer, 1, BufferSize, pFile) or better fread(Buffer, sizeof(unsigned char), BufferSize, pFile)
2 Vinícius:
#include <stdio.h> needed for fopen(), fread(), fwrite()
Thanks, this helped me enormously.
A couple of points:
1. I got a convenient combined package from including zlib and zipengine and a Visual Studio 2010 solution.
2. It looks as though some renaming has gone on and in fact hZipFile, m_hZipFile and m_ZipFile are all supposed to be the same identifier.
Thanks, this code really saved my time! :-)
Hello guys, I am trying to use the password protection feature but this doesn't seem to function. The zip file is successfully created with a bin file inside it. I thought that the zip file itself should be Password protected and not the bin file inside it. When I am trying to open the bin file it ask me for a Password, but whatever I give as Input, the file is opened freely.
1) Is there a way to create a Password protected zip file (and not the files inside it)?
2) If not, why the Password protected bin file inside the zip files opens with any Password I give as Input? Any clues? Thanks
It seems really strange that you can open your zip file using any password; it's the first time I have heard something like this. About a "global" password: the "engine" of zip file protection allows you to set a password for only some files inside a single zip and leave the others free to open. This is the reason the password doesn't cover the entire zip. If you want something like this you should move to .rar compression, which allows what you need.
At first, thank you for your fast response. Actually, the password protection of the file-files inside the zip is ok for me. The problem is that it opens with any Password given. Can you please guide me to fix this issue? Thanks in advance.
I'm sorry but I really don't know how to fix your problem since is the first time I hear about a zip password protected file that can be opened using any password. The only suggestion I can give is to verify again if you followed all the steps explained in this post...
Hello, I followed the tutorial. I could create a zip file protected by a password. However I can not open this file with popular zip open program such as WinRar, 7Zip, AlZip. Although I input right password, these programs always notice error message "Wrong password". Could you tell me how to fix this issue. Many thanks !
Hi, is your password string you set in ascii format instead of some unicode format? You have to be sure to inser a "pure" ascii format, this could be a reason of your problem (but is only an hypothesis since I can not verify directly).
Thanks for your answer. But I use password as your sample "mypassword". So I think that insert "pure" ascii password is not problem. Do you have any ideas for my issue ?
Also, even if you use a password composed of ascii values only, if the compiler you use is set to manage unicode strings the ascii values will be converted to unicode (two bytes per character). I don't know which compiler you are using or under which OS you are developing, but with the latest compilers unicode is the standard. Try to insert the password in binary hexadecimal format (for example, the password "xxx" will be char passwd[] = { 'x', 'x', 'x' }; ). However, the opposite could also be true, meaning the winrar tool supposes the password is in unicode format and you inserted a pure ascii string. A test would be to try to reopen the .zip password file using the same software you developed to create your zip (the same compiler). This test will give you a first clue whether the zip was created correctly. In the positive case, try to open with your software a password-protected zip created by winrar and see if it works.
I will try your suggestions. Thanks a lot :D
Hi FalsinSoft, could you give me a sample code for unzipping zip files with password. Thank you.
Sorry, I don't have a ready made unzipping code. This post is very old and I don't use this library anymore for my projects.
Ok Thank you :)
Use crc = 0 and the magic appears!!! ; ) | https://falsinsoft.blogspot.com/2012/03/create-password-protected-zip-file.html | CC-MAIN-2018-51 | refinedweb | 1,462 | 71.95 |
A study in XML culture and evolution
There is a spoken language in Africa - I believe it is Malinke but
memory (and Google) may have failed me here - which has evolved a very
interesting alternative to designating people by name. A hyper-pronoun
system if you like.
Speakers of Malinke prefer not to name people directly in speech. To do
so would give the evil spirits a direct connection with the named
individual. Not good. Instead, speakers of Malinke embark on a
circumlocutory route to identifying the individual. For example if Mr. X
has just come in through the door, he might be identified in speech as
"the man who has just come through the door" rather than "Mr. X". Thus
throwing the evil spirits off the hunt.
Closer to home for me, the Irish language makes it well nigh impossible
to say "hello" without invoking God. In Irish, "God be with you"[1] is
the most common form of greeting. Without even thinking about it, Irish
speakers go around the place invoking the powers that be, to look kindly
on the people they meet and greet. A sort of built-in protection
mechanism against the forces of evil, right in the heart of the
language.
These are examples, of course, of culture impacting human language.
Nothing controversial there. More likely to be controversial is the
assertion I am about to make, that culture impacts computer languages
too. XML provides a good example of this phenomenon.
Mr. X comes through the door as in the Malinke example. How would we
capture the details of Mr. X in a computer program - say a Time and
Attendance system?
We might start - as so many HR and CRM systems do - with the idea that
people have names and addresses. But what is a name? What is it about a
name that distinguishes it from an address?
Perhaps a name is nothing more than a synonym for the innermost part of
an address. Let's use my address as an example and hope the evil spirits
do not read ITworld articles:
Sean McGrath
Propylon Ltd.
45 Blackbourne Square
Rathfarnham Gate
Dublin 14
Ireland
Europe
Earth
Let's turn this upside down:
Earth
Europe
Ireland
Dublin 14
Rathfarnham Gate
45 Blackbourne Square
Propylon Ltd.
Sean McGrath
The latter form of my address is a form that Malinke speakers would
perhaps prefer. It consists of a layer by layer zooming in on an
individual through ever tighter - ever more qualified - contexts. The
"name" is nothing more than the tightest qualifier. Malinke speakers
could stop short of that final, innermost part of the address and say
something like "The CTO" or "The tall bearded guy with the faraway look
in his eyes" etc. Thus uniquely identifying me.
There is an entire culture in IT that is Malinke-like in its approach to
identifying things. The proto-language for that culture is called SGML
and the most common dialects spoken today are HTML and XML.
Speakers of the XML dialect are the "doc-heads" who brought markup into
the new world of e-commerce and Web Services.
XML doc-heads are very fond of addressing - as opposed to unique names -
as a way of uniquely identifying things. Their cultural preference is a
direct result, I believe, of the impossibility of allocating unique
names for things in richly complex hierarchical structures.
Think of the complex structures you find in corpora of legislation,
exegesis of biblical texts, financial reports and so on. How do you
identify 'the third paragraph of the written judgement of Judge McGrath
in the case of X versus Y heard in the Queens Court on the 1st of
February 2001'? You do it Malinke-style. It has no unique name. It only
has an *address*.
Something very interesting happened to XML when it crossed over into the
world of e-commerce and Web Services. That land turned out to be heavily
populated with an aboriginal population of "data-heads". A culture with
a very different world view when it comes to naming things versus
addressing things.
In the relational database culture that many XML data-heads emanated
from, unique naming was of paramount importance. With a record
*everything* has a unique name. It simply must be so for the relational
model to work. Records themselves have unique identifiers. Again, it
simply must be so for the cultural ceremonies of normalization and joins
and so on to function.
As often happens when cultures collide, friction resulted in the XML
world over this issue. Some doc-heads, such as I, felt no need for any
new naming machinery other than already supplied in XML 1.0. In XML 1.0
an element of a document has a name - a tag. It is a name in the sense
that "Sean McGrath" is a name (i.e. it is merely the most qualified part
of my address). My full address can be determined by going to the top of
the document, and working your way down, layer by layer until you
arrive at "Sean McGrath". This is the fully qualified address - a
completely unique name for me in an XML document.
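This addressing-over-naming idea is exactly what path languages like XPath later formalized. A small sketch, using an invented XML rendering of the address above and Python's ElementTree as the path evaluator:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<world>
  <continent name="Europe">
    <country name="Ireland">
      <city name="Dublin 14">
        <company name="Propylon Ltd.">
          <person>Sean McGrath</person>
        </company>
      </city>
    </country>
  </continent>
</world>
""")

# No globally unique name anywhere, just an ever-tighter address,
# read from the outermost context down to the innermost qualifier.
path = "./continent[@name='Europe']/country[@name='Ireland']/city/company/person"
print(doc.find(path).text)   # Sean McGrath
```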
However, the data-heads, felt the need for a direct naming convention
that could be used to make individual element names unique in an XML
document. This resulted in the creation of the XML namspaces
recommendation[2].
Now, we might look upon this as a quaint example of linguistic
evolution. Or we might view the clash of approaches as a sure-fire way
to invoke the evil spirits of bad design.
I believe the latter is true. Unique names are so much less useful than
powerful addressing. Moreover, the pseudo-cascading effect of XML
namespaces creates a brittle middle-ground between content-free and
context-dependent naming that is best avoided in my opinion[3].
I have a simple rule about namespaces in XML applications. I don't use
them.
A simple rule that I commend to you for your consideration in your own
XML applications.
[1]
[2]
[3]
Enabling the storage of an audio file in a XML file
Is there any way to enable the storage of an audio file in a XML file? Yes, you can store binary data in XML documents. Because of the special characters that are needed by XML, it is necessary to encode the binary data using either the Base64 encoding algorithm, or the hexadecimal encoding algorithm. Read more on this from MSDN here:....
Continue Reading This Article
Enjoy this article as well as all of our content, including E-Guides, news, tips and more.
By submitting you agree to receive email communications from TechTarget and its partners. Privacy Policy Terms of Use.
asp
The System.Convert class contains methods for helping you convert between Base64 and non-Base64 data. Refer to the System.Convert.ToBase64String() and System.Convert.FromBase64String() methods for more information in the MSDN Library.
Here is an example method of how you might read the file in and convert it to a Base64 encoded string. Note that this method is just for demo purposes and I would not recommend reading the entire file in at once like this method is going to show. Read it in segments or asynchronously but never all at once.
using System; using System.Diagnostics; using System.IO; namespace Base64Example { /// <summary> /// Summary description for Base64Encoder. /// </summary> public class Base64 { /// <summary> /// Read the file into a base 64 string (Not an efficient means, just used for demonstration purposes!) /// </summary> /// <param name="path">The name of the file to read</param> /// <returns></returns> public static string ReadFile(string path) { string base64String = string.Empty; try { FileStream fs = File.Open(path, FileMode.Open, FileAccess.Read); if (fs != null) { byte[] bytes = new byte[fs.Length]; int bytesRead = fs.Read(bytes, 0, (int)fs.Length); if (bytesRead > 0) { base64String = System.Convert.ToBase64String(bytes); } } } catch(Exception ex) { Trace.WriteLine(ex); } return base64String; } } }Reversing the process should be trivial. Simply take the string and use the FromBase64String method to convert it back to it's original byte[]. Once converted, write it back out to your file.
Cheers,
Mark
Dig deeper on C# programming language | http://searchwindevelopment.techtarget.com/answer/Enabling-the-storage-of-an-audio-file-in-a-XML-file | CC-MAIN-2015-06 | refinedweb | 352 | 61.02 |
0
Here is my code, I am trying to make a smaller version of a chatterbot like Eliza. Anyways here it is.
#include <cstdlib> #include <iostream> #include <cstring> #include <stdio.h> #include <string.h> #include <string> #include <windows.h> #include <fstream> #include <tchar.h> // a few global variables char str[50]; // this one is for the responses, the 50 is a random number, needed something big int redo; // for restarting the speech thing using namespace std; void open_Something() { cout << "Enter something to open: "; char way[50]; cin.getline (way,50); char *path = ); } int main() { do { string resp[5] = { "Got anything else to say?", "That is it, nothing else?", "...Anything else", "So, what's up?", "Talk to me", }; // number generator int which_Num; unsigned seed = time(0); srand(seed); which_Num = rand() % 5; cout << resp[which_Num]; cin.getline (str,50); char * pch; pch = strstr (str,"open"); if (pch != NULL) { open_Something(); } char * pch1; pch = strstr(str,"what, is, your, name,"); if (pch1 != NULL) { cout << "My name, you say\n" << "It is Jubajee\n" << "Cool eh?\n"; redo = 1; } char * pch2; pch = strstr(str,"are you smart, are you intelligent, are you dumb"); if (pch2 != NULL) { cout << "What kind of a question is that\n" << "I am talking to you aren't I\n" << "btw I am smart okay?\n"; redo = 1; } //the bottom curly brace is for the do-while loop }while (redo == 1); system("PAUSE"); return 0; }
Okay, the problem is that I can't get it to choose one response. Every time I write one of the keywords all of what is in the if loops appears.
I can't get it to single out one response and then go back to printing one of the responses in string resp.
Any suggestions on the code would be helpful. Thanks in advance. | https://www.daniweb.com/programming/software-development/threads/271941/help-with-this-program | CC-MAIN-2018-09 | refinedweb | 300 | 83.86 |
I am prototyping some C# 3 collection filters and came across this. I have a collection of products:
public class MyProduct
{
    public string Name { get; set; }
    public Double Price { get; set; }
    public string Description { get; set; }
}

var MyProducts = new List<MyProduct>
{
    new MyProduct
    {
        Name = "Surfboard",
        Price = 144.99,
        Description = "Most important thing you will ever own."
    },
    new MyProduct
    {
        Name = "Leash",
        Price = 29.28,
        Description = "Keep important things close to you."
    },
    new MyProduct
    {
        Name = "Sun Screen",
        Price = 15.88,
        Description = "1000 SPF! Who Could ask for more?"
    }
};
Now if I use LINQ to filter it works as expected:
var d = (from mp in MyProducts where mp.Price < 50d select mp);
And if I use the Where extension method combined with a Lambda the filter works as well:
var f = MyProducts.Where(mp => mp.Price < 50d).ToList();
Question: What is the difference, and why use one over the other? | http://ansaurus.com/question/5194-c-30-where-extension-method-with-lambda-vs-linqtoobjects-to-filter-in-memory-collections | CC-MAIN-2018-39 | refinedweb | 147 | 75.71 |
FOPEN(3V) FOPEN(3V)
NAME
fopen, freopen, fdopen - open a stream
SYNOPSIS
#include <stdio.h>
FILE *fopen(filename, type)
char *filename, *type;
FILE *freopen(filename, type, stream)
char *filename, *type;
FILE *stream;
FILE *fdopen(fd, type)
int fd;
char *type;
DESCRIPTION
fopen() opens the file named by filename and associates a stream with
it. If the open succeeds, fopen() returns a pointer to be used to
identify the stream in subsequent operations.
filename points to a character string that contains the name of the
file to be opened.
type is a character string having one of the following values:
r open for reading
w truncate or create for writing
a append: open for writing at end of file, or create for
writing
r+ open for update (reading and writing)
w+ truncate or create for update
a+ append; open or create for update at EOF
freopen() opens the file named by filename and associates the stream
pointed to by stream with it. The type argument is used just as in
fopen. The original stream is closed, regardless of whether the open
ultimately succeeds. If the open succeeds, freopen() returns the orig-
inal value of stream.
freopen() is typically used to attach the preopened streams associated
with stdin, stdout, and stderr to other files.
fdopen() associates a stream with the file descriptor fd. File
descriptors are obtained from calls like open(2V), dup(2V), creat(2V),
or pipe(2V), which open files but do not return streams. Streams are
necessary input for many of the Section 3S library routines. The type
of the stream must agree with the access permissions of the open file.
When a file is opened for update, both input and output may be done on
the resulting stream. However, output may not be directly followed by
input without an intervening fseek(3S) or rewind(), and input may not
be directly followed by output without an intervening fseek(),
rewind(), or an input operation which encounters EOF.
SYSTEM V DESCRIPTION
When a file is opened for append (that is, when type is a or a+), it is
impossible to overwrite information already in the file. fseek() may
be used to reposition the file pointer to any position in the file, but
when output is written to the file, the current file pointer is disre-
garded. All output is written at the end of the file and causes the
file pointer to be repositioned at the end of the output. If two sepa-
rate processes open the same file for append, each process may write
freely to the file without fear of destroying output being written by
the other. The output from the two processes will be intermixed in the
file in the order in which it is written.
RETURN VALUES
On success, fopen(), freopen(), and fdopen() return a pointer to FILE
which identifies the opened stream. On failure, they return NULL.
SEE ALSO
open(2V), pipe(2V), fclose(3V), fseek(3S)
BUGS
In order to support the same number of open files that the system does,
fopen() must allocate additional memory for data structures using cal-
loc() after 64 files have been opened. This confuses some programs
which use their own memory allocators.
21 January 1990 FOPEN(3V) | http://modman.unixdev.net/?sektion=3&page=fdopen&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 538 | 67.18 |
Tom Miller finally states the last word on the subject of game loops in Managed DirectX code. Now…
So Tom, how does this way of looping compare (fps- and memory-wise) to the way Rick Hoskinson recently started discussing on his blog?
-Richard
Richard.Parsons (at) gmail.com
Why on earth is this page using SSL?
Tom, in your Message struct, what is WindowMessage? Would that be System.Windows.Forms.Message?
How does this stand up to supporting rendering to Controls and not just a Form, I wonder..?
Added link to ‘the saga of the MDX render loop’
Hi Tom,
Can't help but notice that your new approach, if I'm not mistaken, is quite similar to how GLUT handles the rendering. In GLUT, you need to hook up a callback to the idle event for rendering. Your latest approach is the cleanest so far. KUDOS Tom!
BTW, I have your book "Beginning 3D Game Programming" and it ROCKS!
PS: Sorry about the OpenGL / GLUT post. Since I moved to using C#, I’m now a Managed DirectX fan 😛
This is bloody brilliant, why didn’t I think of this? I have perhaps the most complicated render loop in the history of the universe right now (mind you, it handles switching between windowed/fullscreen, task switching, res changes flawlessly).
public WindowMessage msg;
Where does "WindowMessage" come from? I can’t seem to find it in any of the namespaces!
I've been tracking the posts regarding render loops, but early on I moved the scene preparation and rendering into a separate thread. The thread renders to a window or a full screen, at a synchronized frame rate or at maximum, with varying "sleep" times. This has worked much better than all the proposed "main thread" render loop proposals so far.
I was going to try it out, but where did you get that WindowMessage enum/struct/class/whatever from? The one inside the Message struct declaration.
Thanks!
Also, why the use of SuppressUnmanagedCodeSecurity? There's no need for that (I think).
And where is that CharSet.Auto from? The only one I know is CharacterSet, and it has no .Auto.
Also, could you post a complete working example please? Something really simple like device.Clear(…); device.Present();
I managed to compile it by removing the CharSet=CharSet.Auto and changing the public WindowMessage msg; Message struct member to public int msg; (since I don't know where that WindowMessage is), but I don't know if it is as good as yours.
Again, thanks 🙂
Is there anywhere a newbie can go to view VB.NET solutions to the issues discussed here? This issue applies to VB.NET Managed DirectX coding too.
Everything about Managed DirectX samples and blog chats is infuriatingly C# centric. Am I simply choosing the wrong language if I wish to learn to use Managed DirectX? The utter lack of VB.NET samples even in the DirectX SDK itself, not to mention no talk of how this render loop issue here is implemented in VB.NET, suggests VB.NET is indeed a poor language choice.
Any pointers to a good VB site covering this issue would be very welcome.
I hate to sound stupid, but what using directives do you need with that?
I copied the code in to a blank project and I get compile errors.
Thanks
Mort.
Wassup with all that C# Code?
Forgot about vb have we?
The PeekMessage loop has been in use for ages by game programmers who code in unmanaged DirectX and even OpenGL. What surprises me is that it took such a long time before someone was able to figure it out for managed DX!
I put the using directive System.Runtime.InteropServices and that solved all of the errors except for one that says:
The type or namespace name ‘WindowMessage’ could not be found (are you missing a using directive or an assembly reference?)
I’ve looked in the MSDN and all over the web, and I cannot find the namespace it is in. Thank you for your help.
I can only guess this works – since I cannot get it to run. Reason for this being that I – as an average programmer – simply have no idea what namespaces / references to include.
What type is WindowMessage? IntPtr? (I tried looking for WindowMessage in MSDN to no avail).
System.Drawing.NativeMethods seems to be an internal class. Consequently I get compilation errors when trying to compile the code. What am I missing?
Might be I am the only one that plays around with MDX without a complete understanding and knowledge of the .net framework, at least to my very limited mind a complete code sample would be extremely helpful.
Don't get me wrong, I am not criticizing here. It's just that I have very limited resources (both time and brain 😉 )
I have made a sample framework app using the loop you have suggested here. It does work better than the original SDK loop.
I'm wondering how much of a performance difference there will be if I take out the AppStillIdle while loop and just call the update and render methods within OnApplicationIdle.
Tom, do you have any figures on this?
That looks great. I am new to DirectX, but I remember back when doing OpenGL that the main render loop was the most difficult part- especially trying to balance always redrawing without overworking the CPU.
Could you provide the source for the C# version of the WindowMessage struct as well? I am having difficulty getting this to compile since I assume that structure is defined else where in your code. (Is it defined somewhere in the managed source for the April MDX SDK update?)
Thanks,
-Chris
Simply put: Yuck. That’s not simple nor elegant. :/
Hrm. The Message struct is already defined by the .NET libraries as:
System.Windows.Forms.Message
I have checked in a new initial version of the Client UI which is based on the OnApplicationIdle style…
For the first scenario, regular asserts from the TestNG library are used (assertTrue, assertEquals and so on). These are found in the Assert class.
For the second scenario, the same types of asserts can be done (you can check whether two items are equal, whether a condition is true, and so on). However in this case, the methods come from another class, SoftAssert. One difference from the first kind of assertions is that after all the assertions have been written, an additional assert is needed: assertAll(). The way soft asserts work is that all the asserts will be executed. All of them will be run, even if they fail, but after all are run, the results are displayed. These results include all the failures that were encountered during the test execution. If there is at least one failure, the test will be marked as failed. If you forget to add the assertAll() check, the test will pass, and no failures will be printed, just as if everything works properly.
Importing the assertion methods/class
There is also another difference between the two kinds of assertions: for regular asserts it is enough to make a static import of the class / assert methods and just use the assertions (without a prefix). For soft asserts no static import is possible, so you need to instantiate the SoftAssert class. A little example helps to understand what I mean:
- How to import the methods/class for regular asserts:
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertFalse;
import static org.testng.Assert.assertTrue;
In this case, you only have access to the three assert methods: assertEquals, assertFalse, assertTrue. To use other asserts from the Assert class, you will need to import those as well.
If you would like to use only one import statement and to have access to all the assert methods from the Assert class, you can use the following import instead:
import static org.testng.Assert.*;
- How to import the class for soft asserts:

import org.testng.asserts.SoftAssert;
Before using any asserts from this class, you will need to instantiate it, right at the beginning of the class, like this:
private SoftAssert softAssert = new SoftAssert();
Usage and examples
A few examples are shown below, with the usage and results of running regular asserts versus soft asserts.
- Regular asserts
@Test
public void regularAssert() {
    assertEquals(1, 2);
    assertTrue(false);
    assertFalse(true);
}
The result:
java.lang.AssertionError: Expected :2 Actual :1
In this case, even though there are several asserts to be run, the test will stop running when it encounters the first assertions failure.
@Test
public void softAssert() {
    softAssert.assertEquals(1, 2);
    softAssert.assertTrue(false);
    softAssert.assertFalse(true);
    softAssert.assertAll();
}
The result:
java.lang.AssertionError: The following asserts failed: expected [2] but found [1], expected [true] but found [false], Expected :false Actual :true
Here all checks were made and all failures reported to the user.
Another example, where only two of the assertions are failing:
@Test
public void softAssert() {
    softAssert.assertEquals(1, 2);
    softAssert.assertTrue(false);
    softAssert.assertFalse(false);
    softAssert.assertAll();
}
In this case, the results are:
java.lang.AssertionError: The following asserts failed: expected [2] but found [1], Expected :true Actual :false
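The collect-then-report behaviour behind soft asserts can be sketched with plain Java. This is only an illustration of the mechanism, not TestNG's actual implementation; the class name is ours:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a soft-assert collector: failures are recorded instead of
// thrown immediately, and assertAll() reports every recorded failure
// at once by throwing a single AssertionError.
class SoftCollector {
    private final List<String> failures = new ArrayList<>();

    void assertEquals(Object actual, Object expected) {
        if (!expected.equals(actual)) {
            failures.add("expected [" + expected + "] but found [" + actual + "]");
        }
    }

    void assertTrue(boolean condition) {
        if (!condition) {
            failures.add("expected [true] but found [false]");
        }
    }

    void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(
                "The following asserts failed: " + String.join(", ", failures));
        }
    }
}
```

This also explains why a forgotten assertAll() makes a failing test look green: without the final call, the recorded failures are simply never reported.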
Note: Ideally when working on a project you should use the latest version of your dependencies. However some of the latest TestNG versions (including 6.9.10) have some issues, so if you encounter a message that says:

"org.testng.TestNGException: org.xml.sax.SAXParseException; lineNumber: 3; columnNumber: 44; Attribute "parallel" with value "none" must have a value from the list "false methods tests classes instances""

you should simply downgrade the library to a known-good version. You could try 6.9.4.
ASP.NET 2.0 Tips and Tricks
- Taking applications off-line: Adding an app_offline.htm file into the root of an ASP.NET Website automatically shuts down the application until the file is removed. That's a great feature for people that need a friendly message to be displayed while a particular server box is being updated.
- Cross-page postbacks: <asp:button PostBackUrl="PostbackPage.aspx" runat="server"/>. I've used these quite a bit in ASP.NET V2 and it's a really nice feature, especially when a postback really needs to go to a different page such as a search page.
- Setting a default button: <form DefaultButton="btnSubmit" runat="server">. This allows someone to hit the enter key and still have the button's postback event handler hit in the code-behind page. The DefaultButton property can also be added to Panel server controls.
- Setting default focus: <form DefaultFocus="txtName" runat="server">. This allows you to easily set the default focus to a control in your form without writing code. The Page class's SetFocus() method can also be called. Controls also expose a Focus() method.
- Setting focus during a validation error: The validation controls now allow you to easily set focus on a control in error using the SetFocusOnError property. Validation controls also work properly in browsers such as FireFox.
<asp:RequiredFieldValidator
SetFocusOnError="true"
ErrorMessage="TextBox3 is empty" ControlToValidate="TextBox3"
runat="server" />
- Register server controls and user controls in web.config: You can now register frequently used server controls and user controls (ones used across multiple pages) in web.config which avoids having to define the <%@ Register %> directive in each page (which I personally never liked):
<controls>
<add tagPrefix="acme" tagName="uc" src="~/UserControls/Header.ascx" />
<add tagPrefix="my" namespace="CustomControls.Basic" assembly="CustomServerControl" />
</controls>
- CSS Control Adapters: Prefer to use CSS and divs instead of tables for the HTML output by ASP.NET server controls? Now you can with CSS adapters.
- RSS Toolkit: Easily consume and create RSS feeds with the RSS Toolkit.
Additional topics were covered, but these items were some of my favorites simply because they help with some of the stuff that was somewhat of a pain in ASP.NET 1.1, and they are very simple to use.
Opened 6 years ago
Closed 5 years ago
#17200 closed Bug (fixed)
"View on site" link breaks when quote(object_id) != object_id
Description
The change_view method of django.contrib.admin.options.ModelAdmin returns object_id as passed in. This means that the "View on site" link breaks in cases where the django.contrib.admin.util.quote(object_id) != object_id.
There are two possible solutions:
1) Call django.contrib.admin.util.unquote() on object_id before returning to change_view template;
2) Call django.contrib.admin.util.unquote() on object_id in django.contrib.contenttypes.views.shortcut in this line: obj = content_type.get_object_for_this_type(pk=object_id).
Attached are svn diffs for each.
Attachments (3)
Change History (12)
Changed 6 years ago by
Changed 6 years ago by
comment:1 Changed 6 years ago by
comment:2 follow-up: 3 Changed 6 years ago by
comment:3 Changed 6 years ago by
Solution 1) is the right one. Could you provide a test case? Thanks!
Happy to take a crack at a test case. Since I've not contributed one to Django before, I'm not sure of the most helpful way to do this. My first guess is to write a test_viewonsite_link method in the AdminViewStringPrimaryKeyTest class of tests/regressiontests/admin_views/tests.py.
Changed 6 years ago by
Initial attempt at patch + unit test. Note that i18n not applied to "View on site" in test.
comment:4 Changed 6 years ago by
I realized that the new test could just be added to the test_get_change_view() method. As noted, i18n may need to be applied to "View on site" text in test.
comment:5 Changed 5 years ago by
comment:6 Changed 5 years ago by
comment:7 Changed 5 years ago by
comment:8 Changed 5 years ago by
I have the feeling that the test
test_shortcut_view_with_escaping at line 1453 in tests/regressiontests/admin_views/tests.py confirms that the issue is fixed:
def test_shortcut_view_with_escaping(self):
    "'View on site should' work properly with char fields"
    model = ModelWithStringPrimaryKey(pk='abc_123')
    model.save()
    response = self.client.get('/test_admin/admin/admin_views/modelwithstringprimarykey/%s/' % quote(model.pk))
    should_contain = '/%s/" class="viewsitelink">' % model.pk
    self.assertContains(response, should_contain)
comment:9 Changed 5 years ago by
OK, let's consider it fixed.
Created attachment 561364 [details]
benchmark
IonMonkey w/ LSRA does rather poorly on the attached integer benchmark.
Crankshaft: 260ms
Ion LSRA: 350ms
Ion Greedy: 284ms
TI -m -n: 315ms
Assembly dumps are below. It seems like LSRA has eleven mysterious, probably unnecessary stores in the tightest loop. The Greedy allocator has 5 stores and 6 loads (it effectively spills at loop edges). Crankshaft has one load and zero stores.
We should find out what's going on here.
Crankshaft emits the following assembly:
=> 0x5c8231e3: mov %esi,%ebx
=> 0x5c8231e5: and $0xffff,%ebx
=> 0x5c8231eb: mov %eax,%edi
=> 0x5c8231ed: add %ebx,%edi
=> 0x5c8231ef: mov %esi,%ebx
=> 0x5c8231f1: sar $0x10,%ebx
=> 0x5c8231f4: mov %ecx,%edx
=> 0x5c8231f6: add %ebx,%edx
=> 0x5c8231f8: mov %edi,%ebx
=> 0x5c8231fa: sar $0x10,%ebx
=> 0x5c8231fd: add %ebx,%edx
=> 0x5c8231ff: shl $0x10,%edx
=> 0x5c823202: and $0xffff,%edi
=> 0x5c823208: or %edi,%edx
=> 0x5c82320a: mov -0x3c(%ebp),%ebx
=> 0x5c82320d: or %edx,%ebx
IM LSRA:
0xf73ef1d8: mov %ebp,0x7c(%esp)
=> 0xf73ef1dc: mov %edx,%ebp
0xf73ef1de: and $0xffff,%edx
0xf73ef1e4: mov %ecx,0x78(%esp)
0xf73ef1e8: add %edx,%ecx
0xf73ef1ea: jo 0xf73ef013
0xf73ef1f0: mov %ebp,%edx
0xf73ef1f2: sar $0x10,%ebp
0xf73ef1f5: mov %ebx,0x74(%esp)
0xf73ef1f9: add %ebp,%ebx
0xf73ef1fb: jo 0xf73ef018
0xf73ef201: mov %ecx,%ebp
0xf73ef203: sar $0x10,%ecx
0xf73ef206: mov %ebx,0x70(%esp)
0xf73ef20a: add %ecx,%ebx
0xf73ef20c: jo 0xf73ef01d
0xf73ef212: mov %ebx,0x70(%esp)
0xf73ef216: shl $0x10,%ebx
0xf73ef219: mov %ebp,0x70(%esp)
0xf73ef21d: and $0xffff,%ebp
0xf73ef223: mov %ebx,0x70(%esp)
0xf73ef227: or %ebp,%ebx
0xf73ef229: mov %esi,0x70(%esp)
0xf73ef22d: or %ebx,%esi
0xf73ef22f: mov %edx,0x6c(%esp)
0xf73ef233: add $0x1,%edx
0xf73ef236: jo 0xf73ef022
0xf73ef23c: mov 0x74(%esp),%ebx
0xf73ef240: mov 0x78(%esp),%ecx
IM Greedy:
0xf73ef1ba: mov 0x9c(%esp),%ecx
0xf73ef1c1: mov 0x78(%esp),%edx
0xf73ef1c5: mov 0x6c(%esp),%ebx
=> 0xf73ef1c9: mov 0x70(%esp),%ebp
0xf73ef1cd: mov 0x74(%esp),%esi
0xf73ef1d1: mov 0x7c(%esp),%edi
0xf73ef1d5: cmp %ecx,%esi
0xf73ef1d7: jl 0xf73ef1ff
0xf73ef1dd: mov %edi,%esi
0xf73ef1df: mov %edx,%edi
0xf73ef1e1: add $0x1,%edi
0xf73ef1e4: jo 0xf73ef00e
0xf73ef1ea: mov %edi,0x78(%esp)
0xf73ef1ee: mov %edi,%edx
0xf73ef1f0: mov %edi,0x78(%esp)
0xf73ef1f4: mov %esi,%edi
0xf73ef1f6: mov %esi,0x6c(%esp)
0xf73ef1fa: jmp 0xf73ef16e
0xf73ef1ff: mov %esi,%edx
0xf73ef201: and $0xffff,%edx
0xf73ef207: add %edx,%ebx
0xf73ef209: jo 0xf73ef013
0xf73ef20f: mov %esi,%edx
0xf73ef211: sar $0x10,%edx
0xf73ef214: add %edx,%ebp
0xf73ef216: jo 0xf73ef018
0xf73ef21c: mov %ebx,%edx
0xf73ef21e: sar $0x10,%edx
0xf73ef221: add %edx,%ebp
0xf73ef223: jo 0xf73ef01d
0xf73ef229: shl $0x10,%ebp
0xf73ef22c: and $0xffff,%ebx
0xf73ef232: or %ebx,%ebp
0xf73ef234: or %ebp,%edi
0xf73ef236: add $0x1,%esi
0xf73ef239: jo 0xf73ef022
0xf73ef23f: mov %esi,0x74(%esp)
0xf73ef243: mov %edi,0x7c(%esp)
0xf73ef247: jmp 0xf73ef1ba
I looked at this, since this regalloc problem is really common. Consider this function:
function f(x) {
x >>= 1;
return x;
}
The LIR:
// ...
0 movegroup ()[arg:8 -> =eax] <|@
6 unbox ([i:4 (r)]) (arg:12), (=eax) <|@
0 movegroup () <|@
7 shiftop ([i:7 (=eax)]) (=eax), (c) <|@
8 box ([t:8 (=ecx)], [d:7 (r)]) (=eax) <|@
0 movegroup ()[=eax -> =edx] <|@
9 return () (=ecx), (=edx) <|@
Here we have the following (problematic) intervals:
interval 1: [12, 15]
interval 2: [15, 19] (requirement: SAME_AS interval 1)
The algorithm proceeds as follows:
1) process [12, 15]. Assign register eax
2) process [15, 19]. We have to assign register eax (due to the SAME_AS requirement). However, register eax is still in use by interval 1. So interval 1 is split into [12, 14] and [15, 15].
3) process [15, 15]. This interval has no requirement or hint, so it's (eagerly) spilled.
I think the basic problem here is that interval 1 and 2 overlap but require the same register. adrake, what do you think is the best fix here? Maybe the first interval should be [12, 14] because the current instruction has an interval with SAME_AS requirement?
I'm working on this now, per our discussion yesterday. The testcase is interesting because there are 5 values we really want to keep in registers in the inner loop:
1: hoisted i & 0xFFFF
2: hoisted i >> 16
3: j
4: t
5: b
On x86 this leaves only 3 registers for the temporaries inside the loop and |i| and |a| from the outer loop.
(In reply to Jan de Mooij (:jandem) from comment #2)
> On x86 this leaves only 3 registers for the temporaries inside the loop and
> |i| and |a| from the outer loop.
Typo, 2 registers.
Note that the Crankshaft info in comment 0 is not entirely correct. They have 4 loads and 1 store (at the end of the inner loop, right before the jump). They spill (store and load) t and reload b and two other values that are not used inside the inner loop.
Created attachment 573551 [details] [diff] [review]
WIP 1
This patch makes two changes to the linear scan allocator to reduce spills. It passes jit-tests with --ion-eager, but I still need to clean it up and test it better before asking for review.
For the attached benchmark:
d8 : 313 ms
IM LSRA new: 351 ms
TI+JM : 383 ms
IM Greedy : 427 ms
IM LSRA old: 435 ms
The remaining difference with Crankshaft is probably caused by the overflow checks for add (bug 699883). Here's the code we generate for the inner loop:
--
0x9d61c4: cmp %eax,%ebx
0x9d61c6: jge 0x9d622c
0x9d61cc: mov %ebx,%eax
0x9d61ce: and $0xffff,%ebx
0x9d61d4: mov %edx,0x7c(%esp)
0x9d61d8: add %ebx,%edx
0x9d61da: jo 0x9d600e
0x9d61e0: mov %eax,%ebx
0x9d61e2: sar $0x10,%eax
0x9d61e5: mov %ebp,0x78(%esp)
0x9d61e9: add %eax,%ebp
0x9d61eb: jo 0x9d6013
0x9d61f1: mov %edx,%eax
0x9d61f3: sar $0x10,%edx
0x9d61f6: add %edx,%ebp
0x9d61f8: jo 0x9d6018
0x9d61fe: shl $0x10,%ebp
0x9d6201: and $0xffff,%eax
0x9d6207: or %eax,%ebp
0x9d6209: mov %edi,%eax
0x9d620b: or %ebp,%edi
0x9d620d: mov %ebx,%edx
0x9d620f: add $0x1,%ebx
0x9d6212: jo 0x9d601d
0x9d6218: mov 0x78(%esp),%ebp
0x9d621c: mov 0x7c(%esp),%edx
0x9d6220: mov 0x9c(%esp),%eax
0x9d6227: jmp 0x9d61c4
--
3 loads and 2 stores. It's interesting that these stores are for the loop invariant entries (i & 0xffff) and (i >> 16). We should be able to get rid of them by moving the stores to the outer loop.
Wimmer's 2005 paper (not 2010) mentions they do this by moving the split position before the loop block. I'll look into that tomorrow; if it works, we have 3 loads (like Crankshaft) but 0 stores (Crankshaft has 1).
Created attachment 574009 [details] [diff] [review]
Fix
Passes jit-tests and a few hours of fuzzing with anion. I think we should take this patch, it helps this benchmark (and other loops) a lot. We can think about other fixes when we see benchmarks where we do significantly worse than others.
Comment on attachment 574009 [details] [diff] [review]
Fix
Bug 709731 will fix this for the most part (but we still want some changes from this patch)
Was this patch (or parts of it, from comment 6) ever reviewed / pushed? Is there any benefit to doing so at this point?
(In reply to Paul Wright from comment #7)
> Is there any benefit to doing so at this point?
No, the patch was obsoleted by bug 709731. Regalloc for the inner loop is much better now:
--
0xd5b52b: cmp %ecx,%ebp
0xd5b52d: jge 0xd5b590
0xd5b533: mov %ebp,%eax
0xd5b535: and $0xffff,%eax
0xd5b53b: mov %esi,%ecx
0xd5b53d: add %eax,%ecx
0xd5b53f: jo 0xd5b01d
0xd5b545: mov %ebp,%eax
0xd5b547: sar $0x10,%eax
0xd5b54a: mov %edi,%esi
0xd5b54c: add %eax,%esi
0xd5b54e: jo 0xd5b022
0xd5b554: mov %ecx,%eax
0xd5b556: sar $0x10,%eax
0xd5b559: add %eax,%esi
0xd5b55b: jo 0xd5b027
0xd5b561: shl $0x10,%esi
0xd5b564: and $0xffff,%ecx
0xd5b56a: or %ecx,%esi
0xd5b56c: mov %edx,%eax
0xd5b56e: or %esi,%eax
0xd5b570: mov %ebp,%ecx
0xd5b572: add $0x1,%ecx
0xd5b575: jo 0xd5b02c
0xd5b57b: mov 0x4c(%esp),%esi
0xd5b57f: mov %ecx,%ebp
0xd5b581: mov 0x48(%esp),%ecx
0xd5b585: mov %eax,%edx
0xd5b587: mov 0x44(%esp),%eax
0xd5b58b: jmp 0xd5b52b
--
Still not optimal though:
1) Range analysis is needed to get rid of the overflow checks for most adds.
2) We're spilling values that are used in the inner loop, instead of spilling values used by the outer loop (for instance, we don't touch ebx inside the inner loop, but we should).
To fix this, we could insert use positions for intervals used inside the loop at the loop backedge, like JVM's allocator. However, this is not high priority since regalloc is reasonable now and comparable to v8 and the greedy allocator. | https://bugzilla.mozilla.org/show_bug.cgi?id=688078 | CC-MAIN-2017-04 | refinedweb | 1,406 | 59.87 |
NOTE: This is a translation of my German post 'ESP8266 "telefoniert nach
Hause"' in mikrocontroller.net. I translated it to reach a broader and
more international audience.
Back in October/November I built a light sensor to control the lighting
around my house. You can see the result in the attached picture, and the
sensor works fine. The ESP8266 is connected to a WLAN that is not
connected to the internet and is used only for local traffic. Then I
got a new router and noticed that the ESP8266 tries to reach an
external IP address (now 128.85.255.63; two weeks ago it was
112.82.255.65). The ESP8266 scans ports in the upper port range, from
about port 30,000 to 62,000, on that specific address.
Until I noticed the described behaviour, the ESP8266 was not able to
communicate with external (internet) IP addresses. I can rule out that
the ESP8266 responds to incoming traffic from the internet. I have
changed that now and am currently sniffing the traffic. A current log
of this traffic is attached as a PCAP file, which you can download for
use in Wireshark.
My device is an ESP8266 coupled with an I2C BH1750 light sensor. The
ESP8266 contains portions of the "ESP8266AdvancedWebserver" example of
the Arduino IDE (1.6.4) plugin, which was installed through the
board manager. The webserver returns the sensor value upon an HTTP
request.
I have coded the ESP myself via Arduino IDE and scanned the libs
available as source code. I did not find anything that could explain the
behaviour.
I have made use of the following libs:
#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>
#include <Wire.h>
#include <math.h>
At the moment I cannot reproduce this situation under lab conditions
or with different releases. Hence, I kindly ask you to check whether
your ESP8266 is calling home as well. You just need to check the
routing table of your router.
Please post if your ESP8266 connects to "alien" servers as well.
Positive/negative reports are welcome.
I attached a picture of my light sensor, screenshots of my routing
table, and a pcap file for Wireshark, where I have collected a few of
the suspicious packets.
I look forward to hearing from you,
Jo
It turned out that the issue was caused by the application itself (not
the ESP module or firmware). What seemed like "calling home" was
really random data.
What do you mean by "random data?" I have two that are doing the exact
same thing.
Call me one of those conspiracy nuts, but one of those IP addresses is
in China. Think about this, you're making these tiny controllers that
are made to connect to wireless LANs and likely the internet. You make
these and sell them for a couple bucks and half a bazillion are sold and
being tinkered with everywhere. You have 1) a massive source of data
that could be harvested with no effort, and 2) a controller that can be
triggered to perform a simple function. We really don't know what is in
the chip. They may have inaccessible memory that has a tiny program to
phone home and do something. I'm just sayin....
Am I off my rocker? Has my cheese slipped off my cracker?
Has anyone read "Daemon" by Daniel Suarez?
Awesome book. You would definitely be interested.
Configure your router to cut the ESP off from external connections. If
you need access from the global internet, use an RPi to handle the
data, or use a VPN.
I set up my project with a setup.py file so I can easily move the site around to different development machines or server environments as needed. I wanted to make sure I didn't install different versions on a different machine, so I've provided a packages directory and install with the dependencies locked at specific versions and the packages directory noted. easy_install then just uses the package I place in the packages directory, and every place I install this website, I get the same setup.
My setup.py snippets
setup(
    # ...
    install_requires=[
        'Django==1.1',
        'MySQL-Python==1.2.3c1',
        # etc
    ],
    dependency_links=[
        'packages'
    ],
    # ...
)
So far so good. I should point out that some have suggested pip as a replacement for easy_install. For the rant portion of this post, both pip and easy_install don't offer any help. If I'm wrong about pip, somebody please point out the proper way to do this.
My problem begins when I find a package with no setup.py file. The package I’m attempting to use is the forum module from Sphene Community Tools. As I download and go through their documentation, I find that they pretty much expect you to just add the sphene package to your python path. That doesn’t fit with my installable website model though so I want to add a simple setup.py file and install it with the rest of my packages.
Issues:
- Package Data
Turbogears contributers worked on a find_package_data function that can recursively add your package data. The fact that this isn’t included yet in the distutils api is quite annoying. I copied the file from another project [1]. I also added ‘.svn’ to the ignore directory list since I was building from the svn view.
- Extra Data
Just as annoying as the package data problem is the static data. I originally thought I could just include the static directory in data files like this:
data_files=[ ('static', findall('static')) ]
That just puts every file recursively found in the static directory all in one static directory for the install. Instead, I modified the result returned by the find_package_data function to use the data_files format. Here is my completed setup.py file:
import os
from setuptools import setup, find_packages
from finddata import find_package_data

packages = find_packages('sphenecoll')
package_data = find_package_data('sphenecoll')
static = find_package_data('static', 'sphene')  # not in correct format

static_dict = {}  # dir -> files
for path in static['sphene']:
    dir, file = os.path.split(path)
    dir = os.path.join('static', dir)
    files = static_dict.setdefault(dir, [])
    files.append(os.path.join('static', path))

setup(
    name='sphene',
    version='0.6dev',
    packages=packages,
    package_data=package_data,
    package_dir={'sphene': 'sphenecoll/sphene'},
    scripts=['dist/scripts/make-messages.py',
             'dist/scripts/compile-all-sph-messages.py'],
    data_files=static_dict.items()
)
If someone else is interested, you can put finddata.py and this setup.py in the sphene/communitytools directory and build your own egg. Not really the point of this post, but perhaps someone will find it useful. Actually, the Sphene tools should be installable I think. The expectation to just put the directory in your PYTHONPATH isn’t very flexible.
- Install from source
Both easy_install and pip didn’t actually copy any of the package data or data files into the installed site-packages. I had to use setuptools bdist_egg and then installing the egg caused the files to be copied correctly.
- No uninstall
This one isn’t really an issue with this install in particular, but it is always annoying.
Maybe one of these days some Python developer will come up with a good solution to the install mess. I’m too busy working on the projects I’ve got to get done, aside from the occasional sidetrack that installing Python modules causes.
1: I copied paste.util.finddata
Sounds like a bug with sphene that it doesn’t have a setup.py file.
Agreed, data files in setup.py are a pain. If you care about changing that (or your other issues — uninstall is a common one) you should at least mention it on distutils-sig since they are in the process of revamping everything….
hi,
well, i haven’t really had the request yet to add a setup.py to SCT – maybe because i don’t even have django installed, but am always using the latest trunk symlinked 🙂
anyways… do you think your setup.py is stable enough so i can add it to the next SCT release? 🙂
thanks,
herbert
It works quite well for me. I haven’t done extensive testing though. I’m not sure if all of the required files and packages are included and copied correctly. What I needed for my own install is included properly. It would be quite easy to verify that everything else is included appropriately too though but I’m not familiar enough with all the features to authoritatively say. | https://allmybrain.com/2009/10/21/the-python-install-system-needs-an-overhaul/ | CC-MAIN-2020-45 | refinedweb | 809 | 57.77 |
From: sk8terg1rl
Newsgroups: uk.legal uk.finance
Subject: Re: Overpaid PAYE tax?
Date: Tue, 05 Jun 2007 04:35:32 -0700
posting-account=Mb4I8Q0AAACZWnfzrgi2Y63Mp7QYNLXj
Hi Andy,
On 5 Jun, 09:00, "Andy Pandy"
wrote:
> "sk8terg1rl" wrote in message
>
> news:1180989488.535609.306460@d30g2000prg.googlegroups.com...
>
> > Hi group,
>
> > Here are my statement of earnings for my 2 years at uni. This
> > represents "casual pay" work at college. I do a bit of private
> > tutoring paid for with cash and make a bit from the stock market
> > (dividends + capital gain).
>
> > Casual college pay + private tutoring work is definitely under £5k.
> > Stock dividends + capital gain also definitely under £9.2k
>
> Which should mean no tax at all. Dividends count as income, but they will be paid net
> of basic rate tax, so no further tax liability unless you pay higher rate tax (IIRC
> you can't reclaim the tax credit even if you don't earn enough to pay tax).
Okay, thanks for clearing that up.
>
> > Has my college's finance department applied tax wrongly to me when
> > none is due? I am scrupulously honest but am loathe to want to
> > contribute to hideous =A3400,000 London 2012 Olympics Logos, ID cards
> > and so on...
>
> The worrying thing is that the logo is only 0.004% of the projected cost of the
> Olympics!!
I'll say. And don't forget these are people from the same lot who
decide how to spend our trillion £ GDP. Makes you shudder.
>
> > Also, why did my tax code change from BR non cumulative to BR
> > cumulative from my first year to the next?
>
> You shouldn't be on a BR code at all, that's the problem. The college hasn't screwed
> up - the tax code gets set by HMRC and the employer just uses the code they are told
> to.
Strange how I must go to Usenet to get an answer for this. The finance
office staff at my college have no idea why I'm getting taxed.
> Did you say the college job wasn't your "main job" in the tax form you filled in when
> you got the job? If so that's why you've got a funny tax code, rather than a proper
> one like 522L.
I didn't fill in any tax forms when I got the college tutoring jobs. I
simply filled out a "casual pay" form and entered my NI number,
personal & bank details.
My tutoring work was for weaker first year students, those who didn't
do further maths in their A-levels (and are hurting because of it) and
also various paid tasks in the department that needed doing
(departmental fresher tours and so on).
I think the wrong tax code came about because my first job was several
months in a secretarial job before I came to uni. At that time I was
trying to get away from my family who had so thoroughly messed up my
life and got my financial independence by grabbing (almost) the first
job I could find. I literally ran away from home for a few months and
didn't talk to any family. I don't tend to speak of this because it is
a repressed memory. In the end I decided I hated that job (and its
future prospects) more than I hated my parents, ate humble pie and
went back home.
> You need to phone your tax office (the number will probably be on your payslip or
> P60) and explain the situation to them. You should then either get a proper tax code
> for the college job, or an NT code (no tax) - in both cases it'll mean no tax will be
> taken if you earn within the personal allowance.
>
> You should also ask the tax office for a tax return (or do one online) to claim back
> the tax they've taken off you. It's easy to do online, but you need to register with
> the govt gateway first:
>
>
>
> --
> Andy
Okay, thanks. I'll do that. I could really use the money.
skate xx | http://www.info-mortgage-loans.com/usenet/posts/64548-84567.uk.finance.shtml | crawl-002 | refinedweb | 688 | 78.99 |
Dates and timestamps
The Date and Timestamp datatypes changed significantly in Databricks Runtime 7.0. This article describes:
- The Date type and the associated calendar.
- The Timestamp type and how it relates to time zones. It also explains the details of time zone offset resolution and the subtle behavior changes in the new time API in Java 8, used by Databricks Runtime 7.0.
- APIs to construct date and timestamp values.
- Common pitfalls and best practices for collecting date and timestamp objects on the Apache Spark driver.
Dates and calendars
A Date is a combination of the year, month, and day fields, like (year=2012, month=12, day=31). However, the values of the year, month, and day fields have constraints to ensure that the date value is a valid date in the real world. For example, the value of month must be from 1 to 12, the value of day must be from 1 to 28, 29, 30, or 31 (depending on the year and month), and so on. The Date type does not consider time zones.
Calendars
Constraints on Date fields are defined by one of many possible calendars. Some, like the Lunar calendar, are used only in specific regions. Some, like the Julian calendar, are used only in history. The de facto international standard is the Gregorian calendar, which is used almost everywhere in the world for civil purposes. It was introduced in 1582 and was extended to support dates before 1582 as well. This extended calendar is called the Proleptic Gregorian calendar.

Databricks Runtime 7.0 uses the Proleptic Gregorian calendar, which is already being used by other data systems like pandas, R, and Apache Arrow. Databricks Runtime 6.x and below used a combination of the Julian and Gregorian calendars: for dates before 1582, the Julian calendar was used; for dates after 1582, the Gregorian calendar was used. This is inherited from the legacy java.sql.Date API, which was superseded in Java 8 by java.time.LocalDate, which uses the Proleptic Gregorian calendar.
Timestamps and time zones
The Timestamp type extends the Date type with new fields: hour, minute, second (which can have a fractional part), together with a global (session scoped) time zone. It defines a concrete time instant. For example, (year=2012, month=12, day=31, hour=23, minute=59, second=59.123456) with session time zone UTC+01:00. When writing timestamp values out to non-text data sources like Parquet, the values are just instants (like timestamp in UTC) that have no time zone information. If you write and read a timestamp value with a different session time zone, you may see different values of the hour, minute, and second fields, but they are the same concrete time instant.
The hour, minute, and second fields have standard ranges: 0–23 for hours and 0–59 for minutes and seconds. Spark supports fractional seconds with up to microsecond precision. The valid range for fractions is from 0 to 999,999 microseconds.
At any concrete instant, depending on time zone, you can observe many different wall clock values.
Conversely, a wall clock value can represent many different time instants.
The time zone offset allows you to unambiguously bind a local timestamp to a time instant. Usually, time zone offsets are defined as offsets in hours from Greenwich Mean Time (GMT) or UTC+0 (Coordinated Universal Time). This representation of time zone information eliminates ambiguity, but it is inconvenient. Most people prefer to point out a location such as America/Los_Angeles or Europe/Paris. This additional level of abstraction from zone offsets makes life easier but brings complications. For example, you now have to maintain a special time zone database to map time zone names to offsets. Since Spark runs on the JVM, it delegates the mapping to the Java standard library, which loads data from the Internet Assigned Numbers Authority Time Zone Database (IANA TZDB). Furthermore, the mapping mechanism in Java's standard library has some nuances that influence Spark's behavior.
Since Java 8, the JDK exposed a different API for date-time manipulation and time zone offset resolution and Databricks Runtime 7.0 uses this API. Although the mapping of time zone names to offsets has the same source, IANA TZDB, it is implemented differently in Java 8 and above compared to Java 7.
For example, take a look at a timestamp before the year 1883 in the America/Los_Angeles time zone: 1883-11-10 00:00:00. This year stands out from others because on November 18, 1883, all North American railroads switched to a new standard time system. Using the Java 7 time API, you can obtain a time zone offset at the local timestamp as -08:00:
java.time.ZoneId.systemDefault
res0: java.time.ZoneId = America/Los_Angeles
java.sql.Timestamp.valueOf("1883-11-10 00:00:00").getTimezoneOffset / 60.0
res1: Double = 8.0
The equivalent Java 8 API returns a different result:
java.time.ZoneId.of("America/Los_Angeles").getRules.getOffset(java.time.LocalDateTime.parse("1883-11-10T00:00:00"))
res2: java.time.ZoneOffset = -07:52:58
Prior to November 18, 1883, time of day in North America was a local matter, and most cities and towns used some form of local solar time, maintained by a well-known clock (on a church steeple, for example, or in a jeweler’s window). That’s why you see such a strange time zone offset.
The example demonstrates that Java 8 functions are more precise and take into account historical data from IANA TZDB. After switching to the Java 8 time API, Databricks Runtime 7.0 benefited from the improvement automatically and became more precise in how it resolves time zone offsets.
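Python's standard library reads the same IANA TZDB, so it can reproduce the Java 8 result shown above. The following is an illustrative sketch, not Spark code; it assumes Python 3.9+ with the zoneinfo module and IANA time zone data available on the system:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+, needs IANA tzdata on the system

# Before the November 1883 standard-time switch, Los Angeles used
# local mean time, recorded in the TZDB as an offset of -07:52:58.
lmt = datetime(1883, 11, 10, 0, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
offset = lmt.utcoffset()

# The offset equals -(7h 52m 58s), matching the Java 8 result above.
assert offset == -timedelta(hours=7, minutes=52, seconds=58)
```

Any library built on the historical TZDB data (Java 8 java.time, Python zoneinfo) should agree on this pre-1883 offset.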
Databricks Runtime 7.0 also switched to the Proleptic Gregorian calendar for the Timestamp type. The ISO SQL:2016 standard declares the valid range for timestamps to be from 0001-01-01 00:00:00 to 9999-12-31 23:59:59.999999. Databricks Runtime 7.0 fully conforms to the standard and supports all timestamps in this range. Compared to Databricks Runtime 6.x and below, note the following sub-ranges:
0001-01-01 00:00:00..1582-10-03 23:59:59.999999. Databricks Runtime 6.x and below uses the Julian calendar and doesn’t conform to the standard. Databricks Runtime 7.0 fixes the issue and applies the Proleptic Gregorian calendar in internal operations on timestamps such as getting year, month, day, etc. Due to different calendars, some dates that exist in Databricks Runtime 6.x and below don’t exist in Databricks Runtime 7.0. For example, 1000-02-29 is not a valid date because 1000 isn’t a leap year in the Gregorian calendar. Also, Databricks Runtime 6.x and below resolves time zone name to zone offsets incorrectly for this timestamp range.
1582-10-04 00:00:00..1582-10-14 23:59:59.999999. This is a valid range of local timestamps in Databricks Runtime 7.0, in contrast to Databricks Runtime 6.x and below where such timestamps didn’t exist.
1582-10-15 00:00:00..1899-12-31 23:59:59.999999. Databricks Runtime 7.0 resolves time zone offsets correctly using historical data from IANA TZDB. Compared to Databricks Runtime 7.0, Databricks Runtime 6.x and below might resolve zone offsets from time zone names incorrectly in some cases, as shown in the preceding example.
1900-01-01 00:00:00..2036-12-31 23:59:59.999999. Both Databricks Runtime 7.0 and Databricks Runtime 6.x and below conform to the ANSI SQL standard and use Gregorian calendar in date-time operations such as getting the day of the month.
2037-01-01 00:00:00..9999-12-31 23:59:59.999999. Databricks Runtime 6.x and below can resolve time zone offsets and daylight saving time offsets incorrectly. Databricks Runtime 7.0 does not.
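The calendar edge cases above can be checked against any proleptic Gregorian implementation. As an illustrative sketch (not Spark code), Python's datetime module also uses the proleptic Gregorian calendar, so it agrees with Databricks Runtime 7.0 on both cases:

```python
from datetime import date

# The Gregorian "gap" days (1582-10-05..14) exist in the proleptic calendar.
d = date(1582, 10, 10)
print(d.isoformat())  # 1582-10-10

# 1000 is divisible by 100 but not by 400, so it is not a proleptic
# Gregorian leap year and 1000-02-29 is rejected.
try:
    date(1000, 2, 29)
except ValueError:
    print("1000-02-29 is not a valid date")
```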
One more aspect of mapping time zone names to offsets is overlapping of local timestamps that can happen due to daylight saving time (DST) or switching to another standard time zone offset. For instance, on November 3 2019, 02:00:00, most states in the USA turned clocks backwards 1 hour to 01:00:00. The local timestamp 2019-11-03 01:30:00 America/Los_Angeles can be mapped either to 2019-11-03 01:30:00 UTC-08:00 or 2019-11-03 01:30:00 UTC-07:00. If you don't specify the offset and just set the time zone name (for example, 2019-11-03 01:30:00 America/Los_Angeles), Databricks Runtime 7.0 takes the earlier offset, typically corresponding to "summer". The behavior diverges from Databricks Runtime 6.x and below, which takes the "winter" offset. In the case of a gap, where clocks jump forward, there is no valid offset. For a typical one-hour daylight saving time change, Spark moves such timestamps to the next valid timestamp corresponding to "summer" time.

As you can see from the preceding examples, the mapping of time zone names to offsets is ambiguous, and is not one to one. In the cases when it is possible, when constructing timestamps we recommend specifying exact time zone offsets, for example 2019-11-03 01:30:00 UTC-07:00.
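Python's standard library exposes the same ambiguity through the fold attribute of datetime. This is an illustrative sketch of the ambiguity itself, not of Spark's resolution rule; it assumes zoneinfo (Python 3.9+) and system tzdata are available:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/Los_Angeles")

# 01:30 occurred twice on 2019-11-03; `fold` selects which occurrence.
first = datetime(2019, 11, 3, 1, 30, tzinfo=tz)           # fold=0: PDT
second = datetime(2019, 11, 3, 1, 30, fold=1, tzinfo=tz)  # fold=1: PST

assert first.utcoffset() == timedelta(hours=-7)   # 01:30:00 UTC-07:00
assert second.utcoffset() == timedelta(hours=-8)  # 01:30:00 UTC-08:00
```

Like Spark in Databricks Runtime 7.0, Python's default (fold=0) resolves the ambiguous wall-clock time to the earlier, "summer" offset.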
ANSI SQL and Spark SQL timestamps
The ANSI SQL standard defines two types of timestamps:
- TIMESTAMP WITHOUT TIME ZONE or TIMESTAMP: Local timestamp as (YEAR, MONTH, DAY, HOUR, MINUTE, SECOND). These timestamps are not bound to any time zone, and are wall clock timestamps.
- TIMESTAMP WITH TIME ZONE: Zoned timestamp as (YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, TIMEZONE_HOUR, TIMEZONE_MINUTE). These timestamps represent an instant in the UTC time zone + a time zone offset (in hours and minutes) associated with each value.

The time zone offset of a TIMESTAMP WITH TIME ZONE does not affect the physical point in time that the timestamp represents, as that is fully represented by the UTC time instant given by the other timestamp components. Instead, the time zone offset only affects the default behavior of a timestamp value for display, date/time component extraction (for example, EXTRACT), and other operations that require knowing a time zone, such as adding months to a timestamp.
Spark SQL defines the timestamp type as TIMESTAMP WITH SESSION TIME ZONE, which is a combination of the fields (YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, SESSION TZ) where the YEAR through SECOND fields identify a time instant in the UTC time zone, and where SESSION TZ is taken from the SQL config spark.sql.session.timeZone. The session time zone can be set as:
- Zone offset (+|-)HH:mm. This form allows you to unambiguously define a physical point in time.
- Time zone name in the form of region ID area/city, such as America/Los_Angeles. This form of time zone info suffers from some of the problems described previously, like overlapping of local timestamps. However, each UTC time instant is unambiguously associated with one time zone offset for any region ID, and as a result, each timestamp with a region ID based time zone can be unambiguously converted to a timestamp with a zone offset. By default, the session time zone is set to the default time zone of the Java virtual machine.
Spark TIMESTAMP WITH SESSION TIME ZONE is different from:
- TIMESTAMP WITHOUT TIME ZONE, because a value of this type can map to multiple physical time instants, but any value of TIMESTAMP WITH SESSION TIME ZONE is a concrete physical time instant. The SQL type can be emulated by using one fixed time zone offset across all sessions, for instance UTC+0. In that case, you could consider timestamps at UTC as local timestamps.
- TIMESTAMP WITH TIME ZONE, because according to the SQL standard column values of the type can have different time zone offsets. That is not supported by Spark SQL.

You should notice that timestamps that are associated with a global (session scoped) time zone are not something newly invented by Spark SQL. RDBMSs such as Oracle provide a similar type for timestamps: TIMESTAMP WITH LOCAL TIME ZONE.
Construct dates and timestamps
Spark SQL provides a few methods for constructing date and timestamp values:
- Default constructors without parameters: CURRENT_TIMESTAMP() and CURRENT_DATE().
- From other primitive Spark SQL types, such as INT, LONG, and STRING.
- From external types like Python datetime or the Java classes java.time.LocalDate/Instant.
- Deserialization from data sources such as CSV, JSON, Avro, Parquet, ORC, and so on.
The function MAKE_DATE, introduced in Databricks Runtime 7.0, takes three parameters (YEAR, MONTH, and DAY) and constructs a DATE value. All input parameters are implicitly converted to the INT type whenever possible. The function checks that the resulting dates are valid dates in the Proleptic Gregorian calendar, otherwise it returns NULL. For example:
spark.createDataFrame([(2020, 6, 26), (1000, 2, 29), (-44, 1, 1)],
                      ['Y', 'M', 'D']).createTempView('YMD')
df = sql('select make_date(Y, M, D) as date from YMD')
df.printSchema()
root
 |-- date: date (nullable = true)
To print DataFrame content, call the show() action, which converts dates to strings on executors and transfers the strings to the driver to output them on the console:
df.show()
+-----------+
|       date|
+-----------+
| 2020-06-26|
|       null|
|-0044-01-01|
+-----------+
Similarly, you can construct timestamp values using the MAKE_TIMESTAMP function. Like MAKE_DATE, it performs the same validation for date fields, and additionally accepts the time fields HOUR (0-23), MINUTE (0-59), and SECOND (0-60). SECOND has the type Decimal(precision = 8, scale = 6) because seconds can be passed with a fractional part up to microsecond precision. For example:
df = spark.createDataFrame([(2020, 6, 28, 10, 31, 30.123456),
                            (1582, 10, 10, 0, 1, 2.0001),
                            (2019, 2, 29, 9, 29, 1.0)],
                           ['YEAR', 'MONTH', 'DAY', 'HOUR', 'MINUTE', 'SECOND'])
df.show()
+----+-----+---+----+------+---------+
|YEAR|MONTH|DAY|HOUR|MINUTE|   SECOND|
+----+-----+---+----+------+---------+
|2020|    6| 28|  10|    31|30.123456|
|1582|   10| 10|   0|     1|   2.0001|
|2019|    2| 29|   9|    29|      1.0|
+----+-----+---+----+------+---------+
ts = df.selectExpr("make_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND) as MAKE_TIMESTAMP")
ts.printSchema()
root
 |-- MAKE_TIMESTAMP: timestamp (nullable = true)
As for dates, print the content of the ts DataFrame using the show() action. In a similar way, show() converts timestamps to strings, but now it takes into account the session time zone defined by the SQL config spark.sql.session.timeZone.
ts.show(truncate=False)
+--------------------------+
|MAKE_TIMESTAMP            |
+--------------------------+
|2020-06-28 10:31:30.123456|
|1582-10-10 00:01:02.0001  |
|null                      |
+--------------------------+
Spark cannot create the last timestamp because this date is not valid: 2019 is not a leap year.
You might notice that there is no time zone information in the preceding example. In that case, Spark takes a time zone from the SQL configuration spark.sql.session.timeZone and applies it to function invocations. You can also pick a different time zone by passing it as the last parameter of MAKE_TIMESTAMP. Here is an example:
df = spark.createDataFrame([(2020, 6, 28, 10, 31, 30, 'UTC'),
                            (1582, 10, 10, 0, 1, 2, 'America/Los_Angeles'),
                            (2019, 2, 28, 9, 29, 1, 'Europe/Moscow')],
                           ['YEAR', 'MONTH', 'DAY', 'HOUR', 'MINUTE', 'SECOND', 'TZ'])
df = df.selectExpr('make_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND, TZ) as MAKE_TIMESTAMP')
df = df.selectExpr("date_format(MAKE_TIMESTAMP, 'yyyy-MM-dd HH:mm:ss VV') AS TIMESTAMP_STRING")
df.show(truncate=False)
+---------------------------------+
|TIMESTAMP_STRING                 |
+---------------------------------+
|2020-06-28 13:31:00 Europe/Moscow|
|1582-10-10 10:24:00 Europe/Moscow|
|2019-02-28 09:29:00 Europe/Moscow|
+---------------------------------+
As the example demonstrates, Spark takes into account the specified time zones but adjusts all local timestamps to the session time zone. The original time zones passed to the MAKE_TIMESTAMP function are lost because the TIMESTAMP WITH SESSION TIME ZONE type assumes that all values belong to one time zone, and it doesn't even store a time zone per value. According to the definition of TIMESTAMP WITH SESSION TIME ZONE, Spark stores local timestamps in the UTC time zone, and uses the session time zone while extracting date-time fields or converting the timestamps to strings.

Also, timestamps can be constructed from the LONG type using casting. If a LONG column contains the number of seconds since the epoch 1970-01-01 00:00:00Z, it can be cast to a Spark SQL TIMESTAMP:
select CAST(-123456789 AS TIMESTAMP);
1966-02-02 05:26:51
Unfortunately, this approach doesn’t allow you to specify the fractional part of seconds.
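As a cross-check with the standard library (an illustrative sketch, not Spark code): the instant above corresponds to 1966-02-02 02:26:51 UTC, and the 05:26:51 shown appears to reflect a UTC+03:00 session time zone, consistent with the Europe/Moscow outputs elsewhere on this page. Unlike the SQL cast, Python's converter also accepts fractional seconds:

```python
from datetime import datetime, timezone

# -123456789 seconds before the Unix epoch, interpreted at UTC.
ts = datetime.fromtimestamp(-123456789, tz=timezone.utc)
print(ts)  # 1966-02-02 02:26:51+00:00

# Fractional seconds are supported directly.
frac = datetime.fromtimestamp(1.5, tz=timezone.utc)
print(frac.microsecond)  # 500000
```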
Another way is to construct dates and timestamps from values of the STRING type. You can make literals using special keywords:
select timestamp '2020-06-28 22:17:33.123456 Europe/Amsterdam', date '2020-07-01';
2020-06-28 23:17:33.123456  2020-07-01
Alternatively, you can use casting that you can apply for all values in a column:
select cast('2020-06-28 22:17:33.123456 Europe/Amsterdam' as timestamp), cast('2020-07-01' as date);
2020-06-28 23:17:33.123456  2020-07-01
The input timestamp strings are interpreted as local timestamps in the specified time zone, or in the session time zone if a time zone is omitted in the input string. Strings with unusual patterns can be converted to timestamp using the to_timestamp() function. The supported patterns are described in Datetime Patterns for Formatting and Parsing:
select to_timestamp('28/6/2020 22.17.33', 'dd/M/yyyy HH.mm.ss');
2020-06-28 22:17:33
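The same parse can be sketched with Python's strptime; note the pattern language differs (C strftime directives here versus the JDK-style pattern Spark uses), so this is an analogy rather than Spark code:

```python
from datetime import datetime

# %d/%m/%Y %H.%M.%S mirrors Spark's 'dd/M/yyyy HH.mm.ss' pattern;
# strptime also accepts the non-padded month "6".
parsed = datetime.strptime("28/6/2020 22.17.33", "%d/%m/%Y %H.%M.%S")
print(parsed)  # 2020-06-28 22:17:33
```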
If you don't specify a pattern, the function behaves similarly to CAST.
For usability, Spark SQL recognizes special string values in all methods that accept a string and return a timestamp or date:
- epoch is an alias for date 1970-01-01 or timestamp 1970-01-01 00:00:00Z.
- now is the current timestamp or date at the session time zone. Within a single query it always produces the same result.
- today is the beginning of the current date for the TIMESTAMP type or just the current date for the DATE type.
- tomorrow is the beginning of the next day for timestamps or just the next day for the DATE type.
- yesterday is the day before the current one, or its beginning for the TIMESTAMP type.
For example:
select timestamp 'yesterday', timestamp 'today', timestamp 'now', timestamp 'tomorrow';
2020-06-27 00:00:00  2020-06-28 00:00:00  2020-06-28 23:07:07.18  2020-06-29 00:00:00
select date 'yesterday', date 'today', date 'now', date 'tomorrow';
2020-06-27  2020-06-28  2020-06-28  2020-06-29
Spark allows you to create Datasets from existing collections of external objects at the driver side and create columns of corresponding types. Spark converts instances of external types to semantically equivalent internal representations. For example, to create a Dataset with DATE and TIMESTAMP columns from Python collections, you can use:
import datetime
df = spark.createDataFrame([(datetime.datetime(2020, 7, 1, 0, 0, 0),
                             datetime.date(2020, 7, 1))], ['timestamp', 'date'])
df.show()
+-------------------+----------+
|          timestamp|      date|
+-------------------+----------+
|2020-07-01 00:00:00|2020-07-01|
+-------------------+----------+
PySpark converts Python's date-time objects to internal Spark SQL representations at the driver side using the system time zone, which can be different from Spark's session time zone setting spark.sql.session.timeZone. The internal values don't contain information about the original time zone. Future operations over the parallelized date and timestamp values take into account only the Spark SQL session time zone, according to the TIMESTAMP WITH SESSION TIME ZONE type definition.
In a similar way, Spark recognizes the following types as external date-time types in the Java and Scala APIs:
- java.sql.Date and java.time.LocalDate as external types for the DATE type
- java.sql.Timestamp and java.time.Instant for the TIMESTAMP type.
There is a difference between the java.sql.* and java.time.* types. java.time.LocalDate and java.time.Instant were added in Java 8, and the types are based on the Proleptic Gregorian calendar, the same calendar that is used by Databricks Runtime 7.0 and above. java.sql.Date and java.sql.Timestamp have another calendar underneath: the hybrid calendar (Julian + Gregorian since 1582-10-15), which is the same as the legacy calendar used by Databricks Runtime 6.x and below. Due to the different calendar systems, Spark has to perform additional operations during conversions to internal Spark SQL representations, and rebase input dates/timestamps from one calendar to another. The rebase operation has a little overhead for modern timestamps after the year 1900, and it can be more significant for old timestamps.
The following example shows how to make timestamps from Scala collections. The first example constructs a java.sql.Timestamp object from a string. The valueOf method interprets the input strings as a local timestamp in the default JVM time zone, which can be different from Spark's session time zone. If you need to construct instances of java.sql.Timestamp or java.sql.Date in a specific time zone, have a look at java.text.SimpleDateFormat (and its method setTimeZone) or java.util.Calendar.
Seq(java.sql.Timestamp.valueOf("2020-06-29 22:41:30"), new java.sql.Timestamp(0)).toDF("ts").show(false)
+-------------------+
|ts                 |
+-------------------+
|2020-06-29 22:41:30|
|1970-01-01 03:00:00|
+-------------------+
Seq(java.time.Instant.ofEpochSecond(-12219261484L), java.time.Instant.EPOCH).toDF("ts").show
+-------------------+
|                 ts|
+-------------------+
|1582-10-15 11:12:13|
|1970-01-01 03:00:00|
+-------------------+
Similarly, you can make a DATE column from collections of java.sql.Date or java.time.LocalDate. Parallelization of java.time.LocalDate instances is fully independent of either Spark's session or the JVM default time zone, but the same is not true for parallelization of java.sql.Date instances. There are nuances:
- java.sql.Date instances represent local dates at the default JVM time zone on the driver.
- For correct conversions to Spark SQL values, the default JVM time zone on the driver and executors must be the same.
Seq(java.time.LocalDate.of(2020, 2, 29), java.time.LocalDate.now).toDF("date").show
+----------+
|      date|
+----------+
|2020-02-29|
|2020-06-29|
+----------+
To avoid any calendar and time zone related issues, we recommend the Java 8 types java.time.LocalDate/Instant as external types in parallelization of Java/Scala collections of timestamps or dates.
Collect dates and timestamps
The reverse operation of parallelization is collecting dates and timestamps from executors back to the driver and returning a collection of external types. For the example above, you can pull the DataFrame back to the driver using the collect() action:
df.collect()
[Row(timestamp=datetime.datetime(2020, 7, 1, 0, 0), date=datetime.date(2020, 7, 1))]
Spark transfers internal values of dates and timestamps columns as time instants in the UTC time zone from executors to the driver, and performs conversions to Python datetime objects in the system time zone at the driver, not using Spark SQL session time zone.
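The same pitfall exists in plain Python: naive conversions use the process's system time zone, while passing an explicit tz makes the result independent of it. An illustrative stdlib-only sketch, not Spark code:

```python
from datetime import datetime, timezone

epoch = 0  # seconds since 1970-01-01 00:00:00Z

# Naive: interpreted in the system time zone, so the wall-clock
# fields differ from machine to machine.
local_view = datetime.fromtimestamp(epoch)

# Aware: pinned to UTC, identical everywhere.
utc_view = datetime.fromtimestamp(epoch, tz=timezone.utc)

print(utc_view)  # 1970-01-01 00:00:00+00:00
```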
collect() is different from the show() action described in the previous section. show() uses the session time zone while converting timestamps to strings, and collects the resulting strings on the driver.
In Java and Scala APIs, Spark performs the following conversions by default:
- Spark SQL DATE values are converted to instances of java.sql.Date.
- Spark SQL TIMESTAMP values are converted to instances of java.sql.Timestamp.
Both conversions are performed in the default JVM time zone on the driver. In this way, to have the same date-time fields that you can get using Date.getDay(), getHour(), and so on, and using the Spark SQL functions DAY, HOUR, the default JVM time zone on the driver and the session time zone on executors should be the same.
Similarly to making dates/timestamps from java.sql.Date/Timestamp, Databricks Runtime 7.0 performs rebasing from the Proleptic Gregorian calendar to the hybrid calendar (Julian + Gregorian). This operation is almost free for modern dates (after the year 1582) and timestamps (after the year 1900), but it could bring some overhead for ancient dates and timestamps.
You can avoid such calendar-related issues by asking Spark to return java.time types, which were added in Java 8. If you set the SQL config spark.sql.datetime.java8API.enabled to true, the Dataset.collect() action returns:
- java.time.LocalDate for the Spark SQL DATE type
- java.time.Instant for the Spark SQL TIMESTAMP type
Now the conversions don't suffer from the calendar-related issues because the Java 8 types and Databricks Runtime 7.0 and above are both based on the Proleptic Gregorian calendar. The collect() action doesn't depend on the default JVM time zone. The timestamp conversions don't depend on time zone at all. Date conversions use the session time zone from the SQL config spark.sql.session.timeZone. For example, consider a Dataset with DATE and TIMESTAMP columns, with the default JVM time zone set to Europe/Moscow and the session time zone set to America/Los_Angeles.
java.util.TimeZone.getDefault
res1: java.util.TimeZone = sun.util.calendar.ZoneInfo[id="Europe/Moscow",...]
spark.conf.get("spark.sql.session.timeZone")
res2: String = America/Los_Angeles
df.show
+-------------------+----------+
|          timestamp|      date|
+-------------------+----------+
|2020-07-01 00:00:00|2020-07-01|
+-------------------+----------+
The
show() action prints the timestamp in the session time zone
America/Los_Angeles, but if you collect the
Dataset, it is converted to
java.sql.Timestamp and the
toString method prints
Europe/Moscow:
df.collect()
res16: Array[org.apache.spark.sql.Row] = Array([2020-07-01 10:00:00.0,2020-07-01])
df.collect()(0).getAs[java.sql.Timestamp](0).toString
res18: java.sql.Timestamp = 2020-07-01 10:00:00.0
Actually, the local timestamp 2020-07-01 00:00:00 is 2020-07-01T07:00:00Z at UTC. You can observe that if you enable Java 8 API and collect the Dataset:
df.collect()
res27: Array[org.apache.spark.sql.Row] = Array([2020-07-01T07:00:00Z,2020-07-01])
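The same arithmetic can be checked outside Spark with plain Python's zoneinfo module (purely illustrative; this is not a Spark API):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# The session-time-zone wall clock 2020-07-01 00:00:00 in America/Los_Angeles
# corresponds to 07:00 UTC, and to 10:00 in the driver's Europe/Moscow zone.
local = datetime(2020, 7, 1, 0, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
utc = local.astimezone(timezone.utc)
moscow = local.astimezone(ZoneInfo("Europe/Moscow"))

print(utc.isoformat())     # 2020-07-01T07:00:00+00:00
print(moscow.isoformat())  # 2020-07-01T10:00:00+03:00
```

The Moscow rendering matches the `2020-07-01 10:00:00.0` that `java.sql.Timestamp.toString` printed above.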
You can convert a
java.time.Instant object to any local timestamp independently from the global JVM time zone. This is one of the advantages of
java.time.Instant over
java.sql.Timestamp. The latter requires changing the global JVM time zone setting, which influences other timestamps on the same JVM. Therefore, if your applications process dates or timestamps in different time zones, and the applications should not clash with each other while collecting data to the driver using the Java or Scala
Dataset.collect() API, we recommend switching to Java 8 API using the SQL config
spark.sql.datetime.java8API.enabled. | https://docs.microsoft.com/en-us/azure/databricks/spark/latest/dataframes-datasets/dates-timestamps | CC-MAIN-2022-05 | refinedweb | 4,378 | 56.96 |
I'm a little bit stuck here as to what else I should code in so maybe someone here can help fill it in for me. Basically I have two calendars on a web application on Visual Studio 2010. I want to catch the dates selected by click on the calendars as FromDate and ToDate on […]
Help: Bubble sorting problem.
Hello boys and girls! I'm currently working on understanding how bubble sort works, and this is what I have: using System; namespace bubbleSort_array { class Bubble_sort_test { static void Main() { int[] letters = { 'c', 's', 'a', 'k', 'x', 'l', 'j' }; int temp = 0; for (int write = 0; write < letters.Length; […]
The btowc() function is defined in the <cwchar> header file.
btowc() prototype
wint_t btowc( int c );
The btowc() function takes a single byte character c as its argument and returns its wide character equivalent.
btowc() Parameters
- c: The single byte character to convert to wide character.
btowc() Return value
The btowc() function returns the wide character representation of c if c is a valid single byte character. If c is EOF, or if c does not represent a valid single byte character in the initial shift state, WEOF is returned.
Example: How btowc() function works?
#include <cwchar>
#include <cstring>
#include <iostream>
using namespace std;

int main()
{
    char str[] = "Hello\xf4\xdf";
    wint_t wc;
    int count = 0;
    for (size_t i = 0; i < strlen(str); i++)
    {
        // cast to unsigned char: btowc expects a value representable
        // as unsigned char (or EOF)
        wc = btowc((unsigned char)str[i]);
        if (wc != WEOF)
            count++;
    }
    cout << count << " out of " << strlen(str) << " characters were successfully widened";
    return 0;
}
When you run the program, the output will be:
5 out of 7 characters were successfully widened | https://cdn.programiz.com/cpp-programming/library-function/cwchar/btowc | CC-MAIN-2021-04 | refinedweb | 144 | 62.68 |
A Line Following Robot is a great way to enter the world of Robotics. Not only is it a fun-to-watch little robot, but the same technique is also widely used everywhere, from airports to Tesla's factory. And now, you can make one yourself at home! This workshop will guide you through the complete build and coding process so you can get started even without much experience with Arduino and programming voodoo stuff. So buckle up!
What you'll need:
I added links to some products that are commonly not available. Others should be easy to find online or in your nearby electronics/hobby shop.
- Chassis kit; something like this will work great. Or, if you decide to make your own chassis:
- 2x Motors with gearbox or one of these twin motor gearbox
- 2x Wheels
- 1x Ball Caster
- Plywood, 4-5mm thickness works great.
- 4x Bolts and 12x Nuts
- Extra screws to mount all the hardware to the wood.
- 1x Arduino Nano or Uno. (or any other microcontroller board if you know what you are doing)
- 1x L298n Motor driver module
- 1x QTR-8a or, QTR-8RC IR Reflectance sensor array.
- 1x Half sized breadboard
- Some jumper wires
- Battery (more about it later)
- 1x M5StickC ESP32 module with display or a bluetooth module (optional)
- 1x XL6009 voltage boost converter (optional)
I will be explaining about some of the parts in later steps
Part 1: The build
I made a video summarizing the build and working mechanism, so that you can get an image of the whole process. Pardon my newbie video making skills. Click the image below to watch it.
Choosing the battery:
Choosing the right battery is fairly important for your robot to work properly. Lithium batteries would be the best option to get a decent amount of run time out of a single charge. In terms of voltage, if you decide to power your robot directly from the battery, you should choose one with a voltage of at least 7V, in order to make sure the Arduino and everything else is powered properly. The LiPo batteries used for radio controlled models are widely available for cheap, like this one on HobbyKing. However, you can simply use some rechargeable AA cells, 4 should give you a nice and stable 6V. For me, I had to use what I had lying around. I had some 18650 Lithium cells collected from old laptop batteries. I made a pack of 2 cell battery. If you don't know what I mean, take a look here
XL6009 voltage booster module (optional but recommended):
I added a voltage booster module to boost the battery voltage up to 9 volts. This supplies a stable 9V to the robot regardless of the actual battery voltage, so I don't have to worry about lower voltages causing the motors to go slower and messing up the PID tuning.
Making the chassis:
Feel free to skip this step if you decided to buy a chassis.
The chassis is something that you can either make yourself, or buy for pretty cheap. I went with the make option because I had some motor with gearbox and wheels lying around. And of course, making is more fun.
I made a 3D model in Fusion 360 to help me make sure everything would fit in the way I want. It also helped me to cut the woods to the exact shape that I would need for everything to fit nicely. Note that I did update the design as I built the robot. I didn't plan the whole design all at once before starting the build. Feel free to take a look or download the 3D design file here.
The robot consists of two round wooden plates held together with bolts and nuts. The bottom and top plates are 180mm diameter round shapes cut out of 4mm plywoods.
I copied the bottom plate drawing from Fusion 360 onto a sheet of plywood and cut out the round shape. Your design may vary depending on the motor and wheels that you use.
The cut doesn't have to be exactly round since we will next use a bolt and nut to connect it to a drill, in order to sand it down to a nice and round shape.
Then I cut out the top plate with the exact same diameter as the bottom one. For now it doesn't need any cutouts or holes. These plates will be held together later on using bolts and nuts. So I taped the two plates together and made 4 holes for M5 bolts to pass through later on. You should do it in this step because drilling the holes later will make it hard to align the two plates and you will end up with an ugly looking robot.
Populating the bottom plate:
Next we will start populating the bottom plate. The bottom plate holds the gearbox, ball caster, IR sensor array, battery and the optional XL6009 voltage booster module.
The assembly is hopefully pretty easy to understand from the video. But I will just go over it adding some notes.
First, start by mounting the gearbox and wheel set. I just used some M3 bolts and nuts to mount everything. You can then start mounting the infrared sensor array, the ball caster and the XL6009 voltage booster board (optional).
Note that the array of sensors should be mounted low so that they are really close to the ground.
So lastly we'll mount the battery right above where the ball caster is. This shifts the center of gravity of the robot backwards, and therefore the robot always stays on its wheels and ball caster. Mounting the battery will depend on the type of battery you used. But generally it's never a good idea to hot glue batteries, since heat can damage them or even cause a fire hazard. So I prefer using double-sided tape to mount the battery. You may want to use some velcro if you think you might need to take the battery out later to charge it.
Populating the top plate:
The top plate holds the Arduino on a breadboard, motor driver, the on/off switch and the optional M5StickC Module. Mark where the holes need to be and drill all the holes, in order to connect the parts with nuts and bolts. Using a powered hand drill makes the whole process really fast and clean. The breadboard can be mounted using the double sided tape that it comes with.
I had to cut a hole on the plywood for the sliding switch. You can use any switch that you like or have lying around. Just make sure that it's rated to handle at least 2A of current or more. Once you are done with these, we can start putting the two plates together.
Connecting the two plates together:
I simply used four M5 bolts and three nuts for each of them to hold the two plates together. It's a really simple process but makes a really strong structure. You could also use some standoffs instead. I just used what I had lying around.
Wiring everything together:
This part should be done with care. Doing something wrong will result in the release of magic smoke when you power the robot. I made a wiring diagram to make things easier.
You can just use jumper wires to connect the digital circuitries. For wiring the battery, boost converter module and the motors, you will have to solder some wires yourself. You can connect your battery directly to the motor driver through a switch if you don't want to use the voltager booster board.
Part 2: How it works
The way this robot works is actually pretty simple. It is based on a PID controller, which is a mathematical equation in the code that takes an input value and produces an output value. It also uses feedback, which tells it how far the input value is from what we set it to be; this difference is called the error. The PID controller will do whatever it can with the output to keep the error at, or as close as possible to, 0.
If we look at our robot, the input is coming from the 8 line sensors underneath the robot. The sensors give us a value between 0 and 7000: 0 meaning the line is all the way to the left of the robot, and 7000 meaning it's on the right of the robot. The PID controller's job is to keep the value at 3500 while the robot is moving, which would mean the line is right in the middle of the robot. If it's not, the PID controller's output changes, which then changes the speed of the motors, turning the robot and bringing the line back to the middle of the robot.
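To see those pieces working together numerically, here is a hypothetical single-step rendition of the same controller in plain Python (the pid_step helper and its sample inputs are inventions for illustration, not part of the Arduino sketch):

```python
def pid_step(position, state, Kp=1.9, Ki=10000, Kd=5.0, maximum=60):
    """One pass of the loop() maths for a given line position (0-7000)."""
    proportional = position - 3500              # 0 when the line is centered
    derivative = proportional - state["last"]   # how fast the error is changing
    state["integral"] += proportional           # running sum of the error
    state["last"] = proportional
    # integral/Ki mirrors the sketch's long-integer division, hence //
    power = int(proportional / Kp + state["integral"] // Ki + derivative * Kd)
    return max(-maximum, min(maximum, power))   # clamp to the speed limit

state = {"last": 0, "integral": 0}
print(pid_step(5000, state))  # line far to one side -> correction clamps at +60
print(pid_step(3500, state))  # line re-centered -> derivative swings it to -60
```

The clamped output is the power_difference that gets subtracted from one motor's speed, steering the robot back toward the line.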
Part 3: Uploading the code
Here's the code. I added comments throughout the code explaining every little thing that I thought might be a bit confusing. Take a look below or download it here.
/* Line Following Robot Code
   By: Iqbal Samin Prithul
   Feel free to use or change the code however you like.
   PID code inspired by Pololu's 3pi robot code */

#include <QTRSensors.h>
#include <SoftwareSerial.h>

#define USE_BLUETOOTH_SERIAL // simply comment out this line if you are not using Bluetooth Serial

//-------------------Variables for the motors--------------------
const int M1A = 3;  // Motor 1 direction pin A
const int M1B = 9;  // Motor 1 direction pin B
const int M2A = 10; // Motor 2 direction pin A
const int M2B = 11; // Motor 2 direction pin B

int maximum = 60; // Maximum speed as an 8-bit PWM value (0-255)
                  // 60 is a good starting point when tuning the PID values

//---------------Variables used in the PID controller---------------
int error = 0;
unsigned int last_proportional = 0;
long integral = 0;

// Note: These are just values that worked for me. You will have to tune these for your robot
float Kp = 1.9;           // proportional divider constant
unsigned long Ki = 10000; // integral divider constant
float Kd = 5.0;           // derivative multiplier constant

//-------------------Setting up the sensors--------------------
const int NUM_SENSORS = 8;            // number of sensors used
const int NUM_SAMPLES_PER_SENSOR = 4; // average 4 analog samples per sensor reading
const int EMITTER_PIN = 2;            // emitter is controlled by digital pin 2

// sensors 0 through 7 are connected to analog inputs 0 through 7, respectively
QTRSensorsAnalog qtra((unsigned char[]) { A0, A1, A2, A3, A4, A5, A6, A7 },
                      NUM_SENSORS, NUM_SAMPLES_PER_SENSOR, EMITTER_PIN);
unsigned int sensorValues[NUM_SENSORS];

//-------------Setting up software serial and timer------------
unsigned long timer = 0;
SoftwareSerial mySerial(4, 5); // RX, TX

void setup()
{
  Serial.begin(9600);
  Serial.println("hello");
  mySerial.begin(9600);
  mySerial.println("hello");

  // initialize all the motor driver input pins as outputs in HIGH state
  pinMode(M1A, OUTPUT);
  pinMode(M1B, OUTPUT);
  pinMode(M2A, OUTPUT);
  pinMode(M2B, OUTPUT);
  digitalWrite(M1A, HIGH);
  digitalWrite(M2A, HIGH);
  digitalWrite(M1B, HIGH);
  digitalWrite(M2B, HIGH);

  pinMode(LED_BUILTIN, OUTPUT);
  digitalWrite(LED_BUILTIN, HIGH); // turn on Arduino's LED to indicate we are in calibration mode
  for (int i = 0; i < 400; i++)    // make the calibration take about 10 seconds
  {
    qtra.calibrate(); // reads all sensors 10 times at 2.5 ms per six sensors (i.e. ~25 ms per call)
  }
  digitalWrite(LED_BUILTIN, LOW); // turn off Arduino's LED to indicate we are through with calibration

  // print the calibration minimum values measured when calibrating
  for (int i = 0; i < NUM_SENSORS; i++)
  {
    Serial.print(qtra.calibratedMinimumOn[i]);
    Serial.print(' ');
  }
  Serial.println();

  // print the calibration maximum values measured when calibrating
  for (int i = 0; i < NUM_SENSORS; i++)
  {
    Serial.print(qtra.calibratedMaximumOn[i]);
    Serial.print(' ');
  }
  Serial.println();
  Serial.println();
  delay(1000);
}

void loop()
{
  unsigned int position = qtra.readLine(sensorValues);
  // Serial.println(position);

  // The "proportional" term should be 0 when we are on the line. This is the error
  int proportional = (int)position - 3500;

  // Compute the derivative (change) and integral (sum) of the position
  int derivative = proportional - last_proportional;
  integral += proportional;

  // Remember the last error
  last_proportional = proportional;

  // This is where the magic happens. The equation below is the PID controller
  int power_difference = proportional / Kp + integral / Ki + derivative * Kd;

  // Make sure the difference is never above the maximum value or below the negative maximum value
  if (power_difference > maximum)
    power_difference = maximum;
  if (power_difference < -maximum)
    power_difference = -maximum;

  if (power_difference < 0)
    set_motors(maximum + power_difference, maximum);
  else
    set_motors(maximum, maximum - power_difference);

#ifdef USE_BLUETOOTH_SERIAL
  // Print some values through the Bluetooth Serial module. In my case, the Bluetooth on the M5StickC
  if ((millis() - timer) > 2000)
  {
    mySerial.print(proportional);
    mySerial.print(',');
    mySerial.print(power_difference);
    mySerial.print(' ');
    mySerial.print(' ');
    mySerial.print(Kp);
    mySerial.print(',');
    mySerial.print(Ki);
    mySerial.print(',');
    mySerial.print(Kd);
    mySerial.print(',');
    mySerial.println(maximum);
    timer = millis();
  }

  // This lets us change the Kp, Ki, Kd or the maximum speed value over Bluetooth.
  // Just type the letter followed by the value: for example, p1.65 sets Kp to 1.65
  if (mySerial.available() > 0)
  {
    int yyy = mySerial.read();
    mySerial.println(char(yyy));
    switch (yyy)
    {
      case 112: // p
        Kp = mySerial.parseFloat();
        mySerial.print("Kp set to: ");
        mySerial.println(Kp);
        break;
      case 105: // i
        Ki = mySerial.parseFloat();
        mySerial.print("Ki set to: ");
        mySerial.println(Ki);
        break;
      case 100: // d
        Kd = mySerial.parseFloat();
        mySerial.print("Kd set to: ");
        mySerial.println(Kd);
        break;
      case 109: // m
        maximum = mySerial.parseInt();
        mySerial.print("Speed set to: ");
        mySerial.println(maximum);
        break;
    }
  }
#endif
}

// This function lets us drive the motors simply using two values between -255 and 255.
// A negative value means the motor will go backward, and a positive value will make the motor go forward.
// The first value is for motor 1 (in my case, the left motor), and the second value is for the right motor.
void set_motors(int speed1, int speed2)
{
  byte M1ASpeed = speed1 > 0 ? speed1 : 0;
  byte M1BSpeed = speed1 > 0 ? 0 : speed1 * -1;
  analogWrite(M1A, M1ASpeed);
  analogWrite(M1B, M1BSpeed);

  byte M2ASpeed = speed2 > 0 ? speed2 : 0;
  byte M2BSpeed = speed2 > 0 ? 0 : speed2 * -1;
  analogWrite(M2A, M2ASpeed);
  analogWrite(M2B, M2BSpeed);
}
Controlling it over Bluetooth:
I added an ESP32 module that has Bluetooth built in. It's running a simple program that receives the Arduino's data and sends it to my phone using Bluetooth. It also takes any input from the phone and sends it to the Arduino. It made tuning the PID constants a lot easier, rather than having to upload new code every time I want to change the values. Maybe storing the values in the built-in EEPROM would be a nice little update. I also explained the Bluetooth function at the end of the video.
Tuning the PID constants:
Your robots performance will depend on how well you tune the P, I and D constants in the code. You can tune them in this part of the code:
float Kp = 1.9;           // proportional divider constant
unsigned long Ki = 10000; // integral divider constant
float Kd = 5.0;           // derivative multiplier constant
I recommend starting with a really large integral divider constant, something like 100,000, to basically make sure it has no effect. Then tinker with the Kp and Kd variables. Kd should be bigger than Kp. First try to get the robot to follow the line, even with some oscillation. Then gradually turn up the derivative value to dampen the oscillation. Remember that increasing Kp makes the robot less responsive, since it divides the error value and affects the motor speeds directly. Increasing Kd also makes the robot a little less responsive, but it might act differently based on how much the robot is turning. Lastly, you can slowly start reducing the integral divider to make your robot even more responsive. Having more effect in the integral component means that you will probably need to increase the proportional constant a little bit, or you will start oscillating.
All in all, you will have to play around with the constants until you reach perfection. This is the most time-consuming, but probably the most fun, part of making this robot. You can get the robot going really fast without ever losing the track if you do a good job with the tuning.
Next steps:
Great job! Now you have a working line following robot!
Now you can get creative and start adding more functions. Maybe have a grabbing arm and make it move obstacles on the way? Maybe make it controllable using a phone over Bluetooth? How about using faster motors to make it go faster? Maybe you can win a line following robot race in a maker fair somewhere! Making it solve a maze made out of black lines also sounds like a great idea. Your creativity is the limit.
I myself am now working on making an app that will easily let me change the Kp, Ki and Kd values. I will eventually be adding a manual control function, and also connect the ESP32 to the internet using Wi-Fi to do more cool stuff later on.
So I hope you enjoyed this workshop and learned something new as well. If you make this robot, be sure to share it on the internet to inspire more people to make the same.
Thank you :) | https://workshops.hackclub.com/line_following_robot/ | CC-MAIN-2022-33 | refinedweb | 2,850 | 63.09 |
01 November 2011 11:46 [Source: ICIS news]
LONDON (ICIS)--Stocks in the European chemical sector were dragged down by a sharp fall in the global markets on Tuesday, on news that Greece plans to hold a referendum on its latest bail-out deal.
Eurozone leaders on 27 October agreed a deal involving a 50% writedown of Greek debt, amounting to around €100bn ($139bn), alongside a second bail-out rescue loan of €130bn.
However, Greek prime minister George Papandreou said on Monday his country will hold a referendum to vote on whether to accept the new deal, following large-scale protests in Greece.
At 10:37 GMT, the
At the same time, the Dow Jones Euro Stoxx Chemicals index was trading down by 2.98%, as shares in many of
Top European producers were hit hard – German major BASF’s shares had dropped by 3.35%, Bayer had fallen by 2.61%, Dutch coatings firm AkzoNobel was down by 2.89%, and
Catalysts maker and precious metals trader Johnson Matthey of the
Markets were also rattled by news on Monday that the
Earlier on Tuesday, US crude futures prices fell by more than $2/bbl, undermined by a stronger US dollar and renewed European debt worries. At 09:12 GMT, December NYMEX light sweet crude futures (WTI) were at $91.55/bbl, down by $1.64/bbl from the previous close. Earlier, the
December Brent crude on
Equity markets fell sharply, with the Nikkei 225 Index in
Meanwhile, growth figures from the Office for National Statistics showed that the UK’s GDP grew by 0.5% in the third quarter of 2011.
“The
“While we are not forecasting a recession in the
“Uncertainty has returned once again to markets. No doubt that the phone lines between
Additional reporting by James Dennis
Hi all,
I would like to know if it is possible to specify that only the creator of an issue can make a specific transition. An issue has three fields: Assignee, Reporter and Creator. I want to restrict the transition so that only the creator can make it. Is there any way to do that? I also tried a Groovy script, but I don't understand Groovy scripting.
Thanks
Hi Harris,
If you use script runner for your script then your condition should be
currentUser == issue.creator
or else I suppose you can get the caller, something like
def caller = transientVars.get("context").getCaller()
return caller == issue.creator.key
regards, Thanos
Hi Thanos,
I saw that I need to import some managers at the beginning of the script. I don't understand which managers I should import. Should I just add these lines to the Groovy field, or?
Thanks
Is this a ScriptRunner "Fast-track transition an issue" post function? If that is the case, then you do not want to import anything in the Condition field.
For example:
Thanks
Haris,
do you use the ScriptRunner plugin? I cannot answer if I don't know more details (there is more than one plugin that makes use of Groovy scripts)...
Sorry
I am using JIRA Misc Workflow Extension Plugin. With this plugin is also possible to write groovy scripts.
Thanks
OK, then I am not exactly sure about the syntax, but it should be something like
import com.atlassian.jira.component.ComponentAccessor

def currentUser = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
return currentUser == issue.get("creator")
Hi,
it is possible without using Script Runner. When you change your workflow, you can define conditions for a transition. So go to the transition, click on Conditions and add a new one. There is already an "Only Creator" condition predefined. (Not sure what the exact wording is in English; my JIRA is in German.)
That's the way it works for us, and it works fine.
rdfadict 0.7.4
An RDFa parser wth a simple dictionary-like interface.
Contents
- Installation
- Usage
- Limitations and Known Issues
- Change History
- 0.7.3 (2011-01-27)
- 0.7.1.1 (2010-03-17)
- 0.7.1 (2009-07-20)
- 0.7 (2009-06-02)
- 0.6 (2008-10-14)
- 0.5.2 (2008-08-14)
- 0.5.1 (2008-08-13)
- 0.5 (2008-08-12)
- 0.4.2 (2007-06-05)
- 0.4.1 (2007-03-21)
- 0.4.0 (2007-03-20)
- 0.3.3 (2007-03-14)
- 0.3.2 (2007-03-12)
- 0.3.1 (2007-03-09)
- 0.3 (2007-03-08)
- 0.2 (2006-11-21)
- 0.1 (2006-11-20)
- Download
Installation
rdfadict and its dependencies may be installed using easy_install (recommended)
$ easy_install rdfadict
or by using the standard distutils setup.py:
$ python setup.py install
If you are installing from source, you will also need the following packages:
- rdflib 2.4.x
- pyRdfa
- html5lib (required if you want to support non-XHTML documents)
easy_install will satisfy dependencies for you if necessary.

Usage

Consider the following HTML snippet, adapted from the RDFa Primer:
>>> rdfa_sample = """
... <h1 property="dc:title">Vacation in the South of France</h1>
... <h2>created
... by <span property="dc:creator">Mark Birbeck</span>
... on <span property="dc:date" type="xsd:date"
... ...
... January 2nd, 2006
... </span>
... </h2>
... </div>"""
Triples can be extracted using rdfadict:
>>> import rdfadict
>>> parser = rdfadict.RdfaParser()
>>> triples = parser.parse_string(rdfa_sample, base_uri)
>>> rdfa_sample = """
... <link rel="alternate" href="/foo/bar" />
... <h1 property="dc:title">Vacation in the South of France</h1>
... <h2>created
... by <span property="dc:creator">Mark Birbeck</span>
... on <span property="dc:date" type="xsd:date"
... ...
... January 2nd, 2006
... </span>
... </h2>
... <img src="/myphoto.jpg" class="photo" />
... (<a href="" rel="license"
... >CC License</a>)
... </div>"""
We can extract RDFa triples from it:
>>> parser = rdfadict.RdfaParser()
>>> triples = parser.parse_string(rdfa_sample, base_uri)
Similar to this case is the link tag in the example HTML. Based on the subject resolution rules for link and meta tags, no subject can be resolved for this assertion. However, this does not throw an exception because the value of the rel attribute is not namespaced.
Consider an alternative, contrived example:
>>> link_sample = """
... <link rel="dc:creator" href="" />
... </div>"""
Based on the subject resolution rules for link tags, we expect to see one assertion: that represents the creator of. This can be tested; note we supply a different base_uri to ensure the subject is being properly resolved.
>>> parser = rdfadict.RdfaParser()
>>> triples = parser.parse_string(link_sample, link_base_uri)
>>> triples.keys()
['']
>>> len(triples[''])
1
>>> triples['']['']
['']
Note that this HTML makes no assertions about the source document:
>>> link_base_uri in triples.keys()
False
If the HTML sample is modified slightly, and the about attribute is omitted, rdfadict resolves the subject to the explicit base URI.
>>> link_sample = """
... <link rel="dc:creator" href="" />
... </div>"""
>>> parser = rdfadict.RdfaParser()
>>> triples = parser.parse_string(link_sample, link_base_uri)
>>> link_base_uri in triples.keys()
True
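Because the parser's default sink is just a nested dictionary of {subject: {predicate: [objects]}}, the result can be walked with ordinary Python. A sketch (the URIs below are invented for illustration; rdfadict itself is not needed):

```python
# Shape of a typical rdfadict result: subject -> predicate -> list of objects
triples = {
    "http://example.com/foo/bar": {
        "http://purl.org/dc/elements/1.1/creator": ["http://example.com/me"],
    },
}

# Flatten the nested dictionary into (subject, predicate, object) statements
statements = [
    (subject, predicate, obj)
    for subject, predicates in triples.items()
    for predicate, objects in predicates.items()
    for obj in objects
]
print(len(statements))  # 1
```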
If a namespace is unable to be resolved, the assertion is ignored.
>>> ns_sample = """
... Content</a>
... """
>>> parser = rdfadict.RdfaParser()
>>> triples = parser.parse_string(ns_sample, '')
>>> triples
{}
See the RDFa Primer for more RDFa examples.
Parsing Files
rdfadict can parse from three sources: URLs, file-like objects, or strings. The examples thus far have parsed strings using the parse_string method. A file-like object can also be used:
>>> from StringIO import StringIO
>>> file_sample = """
... the license</a>
... </body>
... </html>
... """
>>> parser = rdfadict.RdfaParser()
>>> result = parser.parse_file(StringIO(file_sample),
...     "")
>>> result.keys()
['']
>>> result['']
{'': ['']}
Parsing By URL
rdfadict can parse a document retrievable by URI. Behind the scenes it uses urllib2 to open the document.
>>> parser = rdfadict.RdfaParser()
>>> result = \
...     parser.parse_url('')
>>> print result['']\
...     [''][0]
表示 2.1 日本
Note that parse_file is not recommended for use with urllib2 handler objects. In the event that pyRdfa encounters a non-XHTML source, it re-opens the URL to begin processing with a more tolerant parser. When parse_file is used to initiate parsing, it is unable to re-open the URL correctly.

A custom triple sink can also be supplied when parsing:

>>> result = parser.parse_string(rdfa_sample, base_uri, sink=list_sink)
>>> result is list_sink
True

Change History

0.7.3 (2011-01-27)
- Corrected minor bug; the sets module was imported in all cases, resulting in a DeprecationWarning.
0.7.1.1 (2010-03-17)
- Added missing MANIFEST.in, which prevented setup.py from working when run from an sdist (as opposed to a checkout).
0.7.1 (2009-07-20)
- Updated DictSetTripleSink to coerce triple() parameters to unicode instead of str.
0.7 (2009-06-02)
- DictTripleSink uses encode instead of str, making it friendlier to Unicode.
- Eliminated custom pyRDFa wrapper.
- Added a test for handling Unicode triples.
0.6 (2008-10-14)
- Added DictSetTripleSink
0.5.2 (2008-08-14)
- Corrected bug with parse_url; non-XHTML sources will now be parsed correctly.
0.5.1 (2008-08-13)
- Added parse_file method for parsing data from a file-like object.
- parseurl and parsestring are now aliased to parse_url and parse_string respectively.
0.5 (2008-08-12)
- rdfadict now acts as a wrapper for pyRdfa for full compliance with the candidate recommendation.
- The cc namespace is no longer special cased with a default value.
- Removed tidy extra and uTidylib dependency; parsing is now handled by pyRdfa which uses html5lib for handling more broken HTML.
- Doctests are now in README.txt in the rdfadict package.
- The default XHTML namespace is now instead of
0.4.2 (2007-06-05)
- Corrected dependency link for uTidylib.
0.4.1 (2007-03-21)
0.4.0 (2007-03-20)
- Provide rudimentary fallback to Tidy when we encounter HTML which is not well-formed XML.
0.3.3 (2007-03-14)
- Removed special case for cc:license; instead, cc namespace simply has a default value of.
0.3.2 (2007-03-12)
- Ignore assertions which have unresolvable namespace prefixes.
- Special case handling for cc:license.
0.3.1 (2007-03-09)
- Fixed bug in subject resolution exception handling.
- License: MIT
- Package Index Owner: nathan, JED3
- Package Index Maintainer: JED3, cwebber
- DOAP record: rdfadict-0.7.4.xml | https://pypi.python.org/pypi/rdfadict | CC-MAIN-2016-44 | refinedweb | 970 | 54.59 |
I have a
csv file which displays a number of columns and almost 500000 rows. I need to slice this file with respect to the second column, which displays the year, maintaining all the other columns:
COL1 COL2 COL3 COL4 COL5 COL6 COL7
xxx  1986 xxx  xxx  xxx  xxx  xxx
xxx  1992 xxx  xxx  xxx  xxx  xxx
xxx  1998 xxx  xxx  xxx  xxx  xxx
...  ...  ...  ...  ...  ...  ...
xxx  2015 xxx  xxx  xxx  xxx  xxx
xxx  1984 xxx  xxx  xxx  xxx  xxx
My question: how can I produce another
csv file out of this, where the values in the second column are
>=1992?
Desired output:
COL1 COL2 COL3 COL4 COL5 COL6 COL7
xxx  1992 xxx  xxx  xxx  xxx  xxx
xxx  1998 xxx  xxx  xxx  xxx  xxx
xxx  2015 xxx  xxx  xxx  xxx  xxx
My attempt is this, but I got stuck at the point where I should insert an
if linked to the second column, but I don't know how to do that:
from __future__ import division
import numpy
from numpy import *
import csv
from collections import *
import os
import glob

directoryPath=raw_input('Working directory: ') #Indicates where the csv file is located
for i,file in enumerate(os.listdir(directoryPath)): #Loops over the folder where the csv files are
    if file.endswith(".csv"): #Checks if they are csv files
        filename=os.path.basename(file) #Takes the complete path to the file
        filelabel=file #Takes the filename only
        strPath = os.path.join(directoryPath, file) #Retrieves the complete path to find the csv file
        x=numpy.genfromtxt(strPath, delimiter=',')[:,7] #I GOT STUCK HERE
You can iterate over the rows of the CSV to see if the value in COL2 is >= the year you are interested in. If it is, just add the row to a new list. Pass the new list to a CSV writer. You can call the function in a loop to create new CSVs for all files ending with a csv extension.
You will have to pass in the working_directory and the year; the working_directory is the folder of CSVs you want to process.
import csv
import os

def make_csv(in_file, out_file, year):
    with open(in_file, 'rb') as csv_in_file:
        csv_row_list = []
        first_row = True
        csv_reader = csv.reader(csv_in_file)
        for row in csv_reader:
            if first_row:
                csv_row_list.append(row)
                first_row = False
            else:
                if int(row[1]) >= year:
                    csv_row_list.append(row)
    with open(out_file, 'wb') as csv_out_file:
        csv_writer = csv.writer(csv_out_file)
        csv_writer.writerows(csv_row_list)

for root, directories, files in os.walk(working_directory):
    for f in files:
        if f.endswith('.csv'):
            in_file = os.path.join(root, f)
            out_file = os.path.join(root, os.path.splitext(f)[0] + '_new' + os.path.splitext(f)[1])
            make_csv(in_file, out_file, year)
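The core of that answer can be sanity-checked on an in-memory sample (Python 3 here, so text streams instead of the answer's Python 2 'rb'/'wb' file modes):

```python
import csv
import io

sample = "COL1,COL2,COL3\nxxx,1986,a\nxxx,1992,b\nxxx,2015,c\n"
rows = list(csv.reader(io.StringIO(sample)))

# Keep the header, then only the body rows whose COL2 is >= 1992.
header, body = rows[0], rows[1:]
kept = [header] + [row for row in body if int(row[1]) >= 1992]
# kept retains the header plus the 1992 and 2015 rows only.
```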
How to add all the employees in an rml report??
Hello!!
I work with a class named test_example:
class test_example(osv.osv):
    _name = 'test.example'
    _columns = {
        'date_start': fields.date('Date Start ', size=256),
        'date_end': fields.date('Date End ', size=256),
        'company_id': fields.many2one('res.company', string="Company Name", size=256),
    }
I want to create a report which will contain all the employees of the selected company.
Please help
I really need an answer please
First, every report needs a source. Because you want all the employees of a specific company, you first need to select the company, either in a wizard-like form that has the print button or in the company form itself. You also need to decide what the model of the report will be; it could be res.company or hr.employee. The more direct solution is to define the report on hr.employee (that does not mean the print button needs to be in the employee form, just that employee records are what you iterate over in the report). Next, in your case, you need to filter the employees by the selected company. That is easiest in a wizard-like form: you select the company, click the print button that calls a method in your wizard, search the employee ids filtered by the selected company, and pass them to the report through the context, something like this:
def print_btn(self, cr, uid, ids, context=None):
    company_id = self.browse(cr, uid, ids, context=context)[0].company_id
    employee_ids = self.pool.get('hr.employee').search(cr, uid, [('company_id', '=', company_id)])
    context = dict(context or {}, active_ids=employee_ids)
    return {
        'type': 'ir.actions.report.xml',
        'report_name': 'employee.report',
        'context': context,
    }
Next you just need to define your RML report in XML and the report itself. To iterate over the employees:
<?xml version="1.0"?>
<document filename="test.pdf">
  <template title="Employees" author="Soltein" allowSplitting="20">
    <pageTemplate id="first">
      <pageGraphics>
        <image x="20" y="760" height="70.0">[[company.logo]]</image>
        <rect x="15" y="750" width="410" height="3" fill="yes" stroke="yes"/>
        <rect x="425" y="746" width="155" height="12" fill="yes" stroke="yes"/>
        <setFont name="Helvetica" size="8"/>
        <drawRightString x="550" y="15">Pág.: <pageNumber/> / <pageCount/></drawRightString>
        <fill color="white"/>
        <setFont name="Helvetica-Bold" size="9"/>
        <drawString x="440" y="750">[[company.website]]</drawString>
      </pageGraphics>
      <frame id="first" x1="15.0" y1="20.0" width="560" height="800"/>
    </pageTemplate>
  </template>
  <story>
    <blockTable colWidths="534.0">
      <tr>
        <td>
          <para>Name</para>
        </td>
      </tr>
    </blockTable>
    <blockTable colWidths="534.0">
      <tr>[[ repeatIn(objects,'o') ]]
        <td>
          <para>[[ o.name ]]</para>
        </td>
      </tr>
    </blockTable>
  </story>
</document>
That is a very simple report template. You could add more columns to the blockTable; just don't forget to update colWidths with a value for every column you put in the blockTable. You will be iterating over every employee as 'o', and to output any field of the employee you use this syntax: [[ o.field ]]
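For instance, a two-column variant of that table might look like the following sketch (work_email is just an illustrative field name; note one colWidths entry per column):

```xml
<blockTable colWidths="267.0,267.0">
  <tr>[[ repeatIn(objects,'o') ]]
    <td><para>[[ o.name ]]</para></td>
    <td><para>[[ o.work_email ]]</para></td>
  </tr>
</blockTable>
```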
Please don't create double posts for exactly the same.. You could have easily modified your initial topic and added this code.
try to answer !!!
Sorry but I never work with RML reports. They're from V7 and will soon be deprecated so I never use them.. Can't answer this one for you. | https://www.odoo.com/forum/help-1/question/how-to-add-all-the-employees-in-an-rml-report-90013 | CC-MAIN-2018-17 | refinedweb | 602 | 58.28 |
Regex Engine
An awesome package to generate regex
Read Documentation
Report Bug · Request Feature
Table of Contents
- About the Project
- Getting Started
- Usage
- Roadmap
- Contributing
- License
- Acknowledgements
About The Project
Generating regex can sometimes be complicated. That is why we are introducing this package to help you get things done.
Supported functionalities :
- Regex Generation for Numerical Range
What does each functionality do?
1. Regex Generation for Numerical Range
Generate regex given a numerical range, So when given a new number between this range the regex will match.
The person who motivated me to start this repository is listed in the acknowledgements.
Coded With Language
Getting Started
Simply install the package, import it in your python code and run the method needed. Look at the docstring or source code to understand what is happening.
Prerequisites
Python 3.6 or greater
Installation
pip install regex-engine
Usage
1. Regex Generation for Numerical Range
You get what you give: if the given numbers are integers you get a regex that only matches integers, and if floating-point numbers are given it only matches floating-point numbers.
Supports integer and floating-point numbers. It can even be a negative range.
from regex_engine import generator generate = generator() regex1 = generate.numerical_range(5,89) regex2 = generate.numerical_range(81.78,250.23) regex3 = generate.numerical_range(-65,12)
Example regex generated for 25-53
^([3-4][0-9]|2[5-9]|5[0-3])$
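As a quick check (standard library only, pattern copied from the example above), the generated regex does behave like a 25-53 range test:

```python
import re

pattern = re.compile(r"^([3-4][0-9]|2[5-9]|5[0-3])$")

# Collect every integer from 0 to 99 that the pattern accepts.
in_range = [n for n in range(100) if pattern.match(str(n))]
# in_range == list(range(25, 54)), i.e. exactly 25 through 53.
```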
The regex might not be optimal but it will surely serve the purpose.
The problem of checking whether a number is within a range would have been simple if you hadn't chosen the regex path: if a <= your_input_number <= b would have solved the same problem directly. We dedicate this method in the package to the people who are pursuing a different path or thinking outside the box.
Roadmap
See the open issues for a list of proposed features (and known issues).
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

License
Distributed under the MIT License. See
LICENSE for more information.
Contact
Raj Kiran P - @raj_kiran_p - rajkiranjp@gmail.com
GitHub :
Website : | https://libraries.io/pypi/regex-engine | CC-MAIN-2021-10 | refinedweb | 362 | 50.02 |
import "github.com/elves/elvish/pkg/tt"
Package tt supports table-driven tests with little boilerplate.
See the test case for this package for example usage.
Test tests a function against test cases.
Case represents a test case. It is created by the C function, and offers setters that augment and return itself; those calls can be chained like C(...).Rets(...).
Args returns a new Case with the given arguments.
Rets modifies the test case so that it requires the return values to match the given values. It returns the receiver. The arguments may implement the Matcher interface, in which case its Match method is called with the actual return value. Otherwise, reflect.DeepEqual is used to determine matches.
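For context, the boilerplate that tt streamlines looks roughly like this in plain Go (no tt involved; add is a stand-in function, and reflect.DeepEqual plays the role of the default matcher):

```go
package main

import (
	"fmt"
	"reflect"
)

func add(a, b int) int { return a + b }

// One row of a hand-rolled test table: arguments in, expected return out.
type testCase struct {
	args [2]int
	want int
}

func main() {
	table := []testCase{
		{args: [2]int{1, 2}, want: 3},
		{args: [2]int{-1, 1}, want: 0},
	}
	for _, c := range table {
		got := add(c.args[0], c.args[1])
		if !reflect.DeepEqual(got, c.want) {
			fmt.Printf("add(%d, %d) = %d, want %d\n", c.args[0], c.args[1], got, c.want)
		}
	}
	fmt.Println("all cases checked")
}
```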
FnToTest describes a function to test.
Fn makes a new FnToTest with the given function name and body.
ArgsFmt sets the string for formatting arguments in test error messages, and return fn itself.
RetsFmt sets the string for formatting return values in test error messages, and return fn itself.
type Matcher interface {
    // Match reports whether a return value is considered a match. The argument
    // is of type RetValue so that it cannot be implemented accidentally.
    Match(RetValue) bool
}
Matcher wraps the Match method.
Any is a Matcher that matches any value.
RetValue is an empty interface used in the Matcher interface.
T is the interface for accessing testing.T.
Table represents a test table.
Package tt imports 3 packages and is imported by 1 package. Updated 2020-02-22.
Package Details: razercfg 0.39-1
Dependencies (5)
- libusb (libusb-nosystemd)
- python
- cmake (cmake-git) (make)
- hardening-wrapper (make)
- python-pyside (optional) – for the graphical qrazercfg tool
Required by (0)
Sources (3)
Pinned Comments
Toqoz commented on 2016-10-03 09:56
gpg --keyserver keys.gnupg.net --recv-key 5FB027474203454C
Niksko commented on 2015-09-03 06:39
Getting an error about an incorrect magic number like I was? The fix is to delete the /usr/bin/pyrazer.pyc file.
Latest Comments
polyzen commented on 2017-05-02 19:01
> gpg --search-keys 5FB027474203454C
gpg: data source:
(1) Michael Büsch (Git tag signing key) <m@bues.ch>
Michael Büsch (Release signing key) <m@bues.ch>
4096 bit RSA key 0x5FB027474203454C, created: 2012-02-19, expires: 2018-03-03
Works with `--keyserver keys.gnupg.net` as well.
liamdawe commented on 2017-05-02 09:45
I can't get the gpg key to work, it just comes back saying no data?
polyzen commented on 2017-02-25 01:01
To check if razerd is running:
# systemctl status razerd.service
This might be relevant:
Was the mouse recently released? You can try the bugtracker, but I don't see anything for those ID's:
andrej commented on 2017-02-25 00:46
$ lsusb | grep Razer
Bus 003 Device 013: ID 1532:0214 Razer USA, Ltd
Bus 003 Device 014: ID 1532:0039 Razer USA, Ltd
$ sudo razercfg -L
No Razer device found in the system
So this^^^ doesn't work at all on my system. Presumably, razerd is running and both devices (mouse and keyboard) work just fine otherwise.
Toqoz commented on 2016-10-03 09:56
gpg --keyserver keys.gnupg.net --recv-key 5FB027474203454C
gpg --lsign 5FB027474203454C
To save some reading.
polyzen commented on 2016-08-10 15:04
xordspar0, thank you for the suggestion.
xordspar0 commented on 2016-08-04 03:11
So, if I understand correctly, this tool is only for mice, not keyboards? Could you update the description to reflect that?
Moneysac commented on 2016-05-27 17:22
This package doesn't work with my Deathadder chroma. LED color and state is working but mouse has a wired behavior (clicks will not always recognized).
However after installing the package "da2013ctl-git" from the AUR the mouse is working (LED control doesn't work with this version).
polyzen commented on 2016-03-10 22:10
tuxfusion, Niksko, thank you for reporting this. The file is not shipped with this package:
~/p/repo > pacman -Qo /usr/bin/pyrazer.pyc
error: No package owns /usr/bin/pyrazer.pyc
~/p/repo > find ../build/razercfg/src -name \*.pyc
../build/razercfg/src/razercfg-0.33/ui/pyrazer/__pycache__/main.cpython-35.pyc
../build/razercfg/src/razercfg-0.33/ui/pyrazer/__pycache__/__init__.cpython-35.pyc
~/p/repo > find ../build/razercfg/pkg -name \*.pyc
../build/razercfg/pkg/razercfg/usr/lib/python3.5/site-packages/pyrazer/__pycache__/main.cpython-35.pyc
../build/razercfg/pkg/razercfg/usr/lib/python3.5/site-packages/pyrazer/__pycache__/__init__.cpython-35.pyc
Edit: Woops. Fixed those `find`s. Not sure if I should remove /__pycache__/?
tuxfusion commented on 2016-03-08 15:08
Could you please not ship /usr/bin/pyrazer.pyc as mentioned below(2015). Every user has to delete the file if python version differs from maintainer, if I understand correct
polyzen commented on 2016-01-19 18:49
od1ssea, did you get it working? I had sent a response.. If you run `pacman -Qo` on one of those files, is it owned by razercfg? If not, you will have to manually rm them to resolve the conflict.
lucasheringer,
alexf commented on 2016-01-12 10:19
Catched an error during makepkg (upgrading 0.32 -> 0.33)
Packages (1) razercfg-0.33-1
Total Installed Size: 0.36 MiB
Net Upgrade Size: 0.00 MiB
:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [######################] 100%
(1/1) checking package integrity [######################] 100%
(1/1) loading package files [######################] 100%
(1/1) checking for file conflicts [######################] 100%
error: failed to commit transaction (conflicting files)
razercfg: /usr/lib/python3.5/site-packages/pyrazer/__init__.py exists in filesystem
razercfg: /usr/lib/python3.5/site-packages/pyrazer/__pycache__/__init__.cpython-35.pyc exists in filesystem
razercfg: /usr/lib/python3.5/site-packages/pyrazer/__pycache__/main.cpython-35.pyc exists in filesystem
razercfg: /usr/lib/python3.5/site-packages/pyrazer/main.py exists in filesystem
Errors occurred, no packages were upgraded.
==> WARNING: Failed to install built package(s).
polyzen commented on 2015-11-11 07:02
erkexzcy, read below gtmanfred commented on 2015-03-27 00:40
erkexzcx commented on 2015-11-11 06:28
If it fails for you because of PGP, temporary solution would be:
makepkg --skippgpcheck
Thymo commented on 2015-09-10 20:16
It works now, thanks.
polyzen commented on 2015-09-10 20:13
Did you manage to install qrazercfg without pyside?
pacman -Q python-pyside
Thymo commented on 2015-09-10 19:58
I get this error when running qrazercfg:
Traceback (most recent call last):
File "/usr/bin/qrazercfg", line 18, in <module>
from PySide.QtCore import *
ImportError: No module named 'PySide'
Any ideas?
Niksko commented on 2015-09-03 06:39
Getting an error about an incorrect magic number like I was? The fix is to delete the /usr/bin/pyrazer.pyc file.
polyzen commented on 2015-08-08 02:24
Yes
mischka commented on 2015-08-07 23:26
We still have to manually trust the pgp key?
polyzen commented on 2015-07-20 19:54
eNTI, the program you're using is seeing my packages as orphaned on aur.archlinux.org. I am only maintaing them on here (aur4.arch..)
eNTi commented on 2015-07-20 13:03
any reason why this package is marked orphaned on my system?
gtmanfred commented on 2015-03-27 00:40
You have to gpg --recv-keys the pgp key, and then gpg --lsign it, you will need to setup your own gpg keyring as well, otherwise just remove the .asc file from the sources.
mischka commented on 2015-03-27 00:12
I am also getting the pgp error.
z1lt0id commented on 2015-03-17 07:50
I got the following issue when compiling, not the right pgp keys.
[code]
==> Validating source files with sha256sums...
razercfg-0.31.tar.bz2 ... Passed
razercfg-0.31.tar.bz2.asc ... Skipped
razercfg.desktop ... Passed
razer.svg ... Passed
tmpfile.conf ... Passed
==> Verifying source file signatures with gpg...
razercfg-0.31.tar.bz2 ... FAILED (unknown public key 5FB027474203454C)
==> ERROR: One or more PGP signatures could not be verified!
==> ERROR: Makepkg was unable to build razercfg.
[/code]
haliski commented on 2015-03-13 21:58 l...ry
Warning: razerd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hint: Some lines were ellipsized, use -l to show in full.
[haliski@ArchLinux ~]$ systemctl -l status razerd loading shared libraries: librazer.so: cannot open shared object file: No such file or directory
Warning: razerd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Cannot get it working!
polyzen commented on 2015-02-21 16:28
StatelessCat, re your first post, how are you starting razerd? `systemctl restart razerd` should get it "working again." You may need to open an issue upstream. This seems relevant
As for your second post, that's normal. It's due to fakeroot. ldconfig is run from razercfg.install
StatelessCat commented on 2015-02-20 19:00
i also got :
-- Installing: /tmp/yaourt-tmp-raphael/aur-razercfg-git/pkg/razercfg-git/usr/lib/librazer.so
ldconfig: Can't create temporary cache file /etc/ld.so.cache~: Permission denied
CMake Warning at librazer/cmake_install.cmake:55 (message):
WARNING: ldconfig failed: 1
Your system will probably be unable to locate librazer.so library
Maybe it's related to the previous issue ?
StatelessCat commented on 2015-02-20 18:54
Hey, sometimes i got:
Feb 20 19:52:38 xxx razerd[2559]: Razer device service daemon
Feb 20 19:52:38 xxx razerd[2559]: librazer: razer-taipan: USB read 0x01 0x300 failed: 26
Feb 20 19:52:38 xxx razerd[2559]: librazer: hw_taipan: Failed to commit initial settings
and razercfg -r answer "No Razer device found in the system"
When this happens, i reboot my computer, and most often it works again.
polyzen commented on 2015-02-14 10:42
Glad it's working for you now, sorry I couldn't help
cokomoko commented on 2015-02-13 15:24
This problem is solved with cmake 3.1.3-1
julianjames7 commented on 2015-02-08 04:18
rcz, check your path (not just your Python path!) for anything named "pyrazer.pyc". I had a /usr/bin/pyrazer.pyc that was overriding the package's pyc file, which didn't work as it was an old Python 2.7 file.
cokomoko commented on 2015-02-02 22:37
Problem with package;
CMake Error at /usr/share/cmake-3.1/Modules/CMakeTestCCompiler.cmake:78 (CMAKE_DETERMINE_COMPILE_FEATURES):
Unknown CMake command "CMAKE_DETERMINE_COMPILE_FEATURES".
Call Stack (most recent call first):
CMakeLists.txt:1 (project)
-- Configuring incomplete, errors occurred!
See also "/tmp/yaourt-tmp-cokomoko/aur-razercfg/src/razercfg-0.31/CMakeFiles/CMakeOutput.log".
please update
f4bio commented on 2015-01-21 22:27
additionally i had to specify the keyserver mentioned at
gpg --keyserver hkp://keys.gnupg.net/ --recv-key 5FB027474203454C
youri commented on 2015-01-20 13:11
gpg --recv-key 5FB027474203454C
worked for me.
StatelessCat commented on 2015-01-16 22:34
I got "razercfg-0.31.tar.bz2 ... FAILED (unknown public key 5FB027474203454C)" while I added Michael Büsch <m@bues.ch> keys to my gpg, can you help ?
rcz commented on 2014-12-01 19:26
I'm getting this error when running razercfg:
Traceback (most recent call last):
File "/usr/bin/razercfg", line 21, in <module>
from pyrazer import *
ImportError: bad magic number in 'pyrazer': b'\x03\xf3\r\n'
Any ideas?
polyzen commented on 2014-10-09 21:24
Import the author's key if you want makepkg to verify the source tarball:
polyzen commented on 2014-08-02 15:29
@sistematico, does qrazercfg not work with python-pyside?
sistematico commented on 2014-07-31 23:57
Please put python2-pyside as depends.
Thanks.
polyzen commented on 2014-07-22 21:29
Upstream now installs the udev rule to $(pkg-config --variable=udevdir udev)/rules.d/80-razer.rules.
GeneMarston commented on 2014-07-22 19:19
It says in README "The 'make install' step did already install the UDEV script automatically.
87 It installed the script to
88 /etc/udev/rules.d/01-razer-udev.rules"
I have no such file in my /etc/udev/rules.d/ directory. What went wrong? Was it explicitly removed in this arch package?
polyzen commented on 2014-07-07 23:11
The service unit is now in /usr/lib/systemd/system/. If you had it enabled, you can do the following:
# systemctl reenable razerd.service
suthernfriend commented on 2014-07-06 22:16
Wow, thank you this package really helped me out.
But if i set the leds for my Razer Naga i get the following errors:
root@maschine # razercfg -l 1:Scrollwheel:on
Traceback (most recent call last):
File "/usr/bin/razercfg", line 485, in <module>
exit(main())
File "/usr/bin/razercfg", line 478, in main
devOps.runAll()
File "/usr/bin/razercfg", line 284, in runAll
op.run(self.idstr)
File "/usr/bin/razercfg", line 179, in run
led = filter(lambda l: l.name == ledName, leds)[0]
TypeError: 'filter' object is not subscriptable
As I have no idea of python i cant fix this. Would appreciate a fix for this :)
PerfectGentleman commented on 2014-04-19 17:40
razercfg-0.20-sbin.patch doesn't needed anymore
Blaster_Fr commented on 2014-04-16 19:58
WoW ty it worked i edited in /etc/razer.conf
# Configure first profile
# Resolution: 450, 900, 1800, (3500)
res=1:450
# Frequency: 125, 500, 1000
freq=1:1000
TY ;)
PerfectGentleman commented on 2014-04-16 13:02
Blaster_Fr, have you looked at /etc/razer.conf ?
Blaster_Fr commented on 2014-04-16 08:28
Hi it works perfectly but how can i set to 450DPI for ever ? When i reboot it switches to 900DPI
Blaster_Fr commented on 2014-04-16 02:24
Hi it works perfectly but how can i set to 450DPI for ever ? When i reboot it switch to 900DPI
polyzen commented on 2014-04-10 21:31
Thank you, PerfectGentleman.
dookytek, presumably installing 'python2-pyqt4' will solve that.
The razerd.service is in '/etc/systemd/system/', but should probably be in '/usr/lib/systemd/system/'. will look into this.
PerfectGentleman commented on 2014-04-10 11:55
there is no 'python2-qt', there's 'python2-pyqt4'
polyzen commented on 2014-04-05 02:04
Updated this. Removed the excessive razerd.service.
dookytek commented on 2014-02-02 11:03
i get this error
20 root@dan1el-PC /home/dan1el # qrazercfg :(
Traceback (most recent call last):
File "/usr/bin/qrazercfg", line 18, in <module>
from PyQt4.QtCore import *
ImportError: No module named PyQt4.QtCore
And if i want to change the dpi with razercfg -r i dont know what to type in..
PerfectGentleman commented on 2014-01-22 15:14
razerd[214]: librazer: razer-naga: Command 0300/0104 failed with 03 - what does it mean ?
rhinoceraptor commented on 2014-01-02 21:41
Thank you! This package works great, and now my mouse is useable under arch!
z3bra commented on 2013-06-04 09:17
Ok, here is a modified tarball, with all the files included.
just extract / makepkg :
z3bra commented on 2013-06-04 09:09
Okay, I worked on it, And here is what I came with:
- updated razerd.service :
- updated PKGUILD :
- patch for razercfg files :
I'm not really used to package fixing... So It's not really "clean" actually.
I hope somebody will be able to fix my mess ^^
To reinstall the package under /usr/bin, please do the following :
1/ Download & extract the actual TarBall
2/ Remove the file "razerd.service" so it will be re-downloaded by the new PKGBUILD
3/ Replace the PKGBUILD with my modified version
4/ makepkg -si
5/ watch the magic happen... :)
I am aware that this is actually "a hack". but it works on my machine ATM.
To make it cleaner, a new tarball might be uploaded with the new files included (razerd.service / razercfg-0.20-sbin.patch)
I hope it will help !
fukawi2 commented on 2013-06-04 01:34
Please update PKGBUILD to install binaries to /usr/bin instead of /usr/sbin in line with recent changes:
oblique commented on 2013-06-03 12:29
please move /usr/sbin/razerd to /usr/bin (see)
Anonymous comment on 2013-02-27 22:26
Another satisfied customer! Next!
Anonymous comment on 2013-02-27 16:26
Another satisfied customer! Next!
sistematico commented on 2012-12-19 02:50
sistematico commented on 2012-12-19 02:42
Please, fix or drop.
itti commented on 2012-11-11 12:03
I converted the razerd initscript to a systemd service unit.
See:
This is for a manual install of razercfg 0.19 though. You may have to update the path to razerd and make sure that librazer.so is found.
Anonymous comment on 2012-10-22 01:58
Use new website:
The "" don't work anymore.
Current Version is 0.19. please update.
Reihar commented on 2012-06-25 10:07
The config file doesn't seem to be installed.
Plus, I don't understand why the package is shown as outdated since 0.17 is the last version.
bonko commented on 2012-05-24 23:23
ccharles is right, it works if you change the source to this:
source=(""
bonko commented on 2012-05-24 15:11
ccharles is right, it works if you change the source to this:
source=(""
ccharles commented on 2012-05-22 13:39
It looks like the bu3sch.de domain expired. Did it possibly move here?
ccharles commented on 2012-05-22 13:38
Did it possibly move here?
ccharles commented on 2012-05-22 13:36
It looks like the bu3sch.de domain expired.
Does anybody have a mirror? Does anybody have contact information for the developer?
Synthead commented on 2012-01-26 15:01
@JDiPierro: Nice catch, thank you! Updated to revision 2 that takes care of this.
Anonymous comment on 2012-01-22 22:31
Fails building. in PKGBUILD change the line:
install=('razercfg.install')
to
install='razercfg.install'
to solve.
Synthead commented on 2011-12-05 01:31
Adopted, updated to 0.17, created new python2 patch, added Razer icon file and put it in the .desktop file, made udevadm and ldconfig run on post_remove(), and made the "razerd daemon" install warning a one-liner.
chepaz commented on 2011-10-08 15:53
0.17 seems to be out.
Modified files:
PKGBUILD:
Patch:
Anonymous comment on 2011-08-30 22:52
According to the website () the Abyssus isn't supported by the driver. Have you installed python2-pyqt? That's the package that provides PyQt4 for python2.
Synthead commented on 2011-08-30 19:40
Looks like there's some issues with this config tool. I just got an Abyssus and the driver doesn't see it. Additionally, qrazercfg can't find pyqt4. I have started razerd and pyqt4 is installed.
[max@killterm3 razercfg]$ razercfg -V
No Razer device found in the system
[max@killterm3 razercfg]$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 0424:2504 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 004: ID 0424:2504 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 005: ID 058f:6377 Alcor Micro Corp. Multimedia Card Reader
Bus 002 Device 002: ID 06e0:0319 Multi-Tech Systems, Inc.
Bus 001 Device 047: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse
Bus 001 Device 050: ID 1532:001c Razer USA, Ltd RZ01-0036 Optical Gaming Mouse [Abyssus]
[max@killterm3 razercfg]$ razercfg -V
No Razer device found in the system
[max@killterm3 razercfg]$ qrazercfg
Traceback (most recent call last):
File "/usr/bin/qrazercfg", line 18, in <module>
from PyQt4.QtCore import *
ImportError: No module named PyQt4.QtCore | https://aur.archlinux.org/packages/razercfg/?comments=all | CC-MAIN-2017-22 | refinedweb | 3,073 | 58.79 |
AllowAnonymous – simple and elegant
Thanks for the article
Please show also
1. Roles based authorization
2. How to integrate 1. with Active Directory
Great article first of all Rick! I'm doing this exact approach on my web application. One question I'm looking for best practices on. In my authorize filter, I'm making a call to the database to get the Tenant based on url. and also pulling the User object from the database to populate the custom Principal object. Is there a better way than adding 2 db calls to every single action?
Unfortunately, the longer lines in your code samples are being cut off here, which makes them of little use.
I changed the controller actions a little for my MVC.NET 3 application, such that the login page goes to "" rather than "…/LogOn", and then implemented the technique as shown here. While it works well to block routes, I seem to have run into a problem redirecting users to my "~/Login" page. The redirect instead goes to "~/Account/Login" (note: NOT "~/Account/LogOn" — so, something is going well). My Web.Config <forms> tag reads as follows:
<forms loginUrl="~/Login" timeout="2880" />
Is the "~/Account/" route "hard-coded" somehow into the [AllowAnonymous] attribute?
I found the solution to my issue. It has nothing to do with this article. It turns out that the web.config property <forms loginUrl="~/Login" timeout="2880" /> no longer works for MVC.NET 3 applications. (Apparently this is a "known issue.") Instead, you have to insert a new key under <appSettings>:
<add key="loginUrl" value="~/Login" />
More here:
stackoverflow.com/…/mvc-forms-loginurl-is-incorrect…/mvc3-release-notes
You should definitely consider using the FluentSecurity package offered on NuGet. It offers a clean and flexible approach on how to configure the security restrictions of different controllers.
Thank you. It's really informative. I'm looking for another great tutorial to build my asp.net mvc 3 site. I will wait another great post from you. I have found you and this site too, windows2008hosting.asphostportal.com. Really helpfull too. 🙂
Well done sir. Very well explained.
I changed the skipAuthorization slightly
int[] ClientErrors = new[] {400, 404, 406, 408, 410, 411, 412, 413, 414, 417, 418, 426, 428, 429, 431, 449};
bool skipAuthorization = filterContext.ActionDescriptor.IsDefined(typeof (AllowAnonymousAttribute), true)
|| filterContext.ActionDescriptor.ControllerDescriptor.IsDefined(typeof (AllowAnonymousAttribute), true)
|| ClientErrors.Any(err => err == filterContext.HttpContext.Response.StatusCode)
This will allow requests to get to the error page properly for anonymous users. This is in no way should reduce security as you should not be serving content for those other than perhaps a custom error page with the error prettified.
What’s the best way to secure a MVC application from anonymous users?
You suggested 2 approaches, but neither of them is recommended. Since putting a security attribute on every single action would be a maintenance nightmare, a better solution in this regard would be really appreciated.
Great article!
I have one problem though, how do I allow sitemap.xml to be accessed by anonymous users? Today I have put it under my homecontroller and use [allowanonymous] on the action, but Id like to have it directly in my root of the webpage..
BR Johan
I'm using the Authorize attribute on my controller, however, this doesn't work well in the development environment as you run into a loopback situation and cannot logon.
How do you handle development while using the Authorize attribute?
From the looks of it, this functionality will be baked into MVC4 when it's released, it's present in the recently released Beta.
hi, I'm having a problem while implementing this, just wondering if anyone else facing the same issue?
I've explained my problem stackoverflow.com/…/mutliple-controller-calls-in-asp-net-mvc3 here as well, but couldn't get any answer. can someone tell me is it the desired behavior or what?
Please could you expand on this uncomfortable advice? Amazon.com, for example, does this.
>>> Many web sites log in via SSL and redirect back to HTTP after you’re logged in, which is absolutely the wrong thing to do. Your login cookie is just as secret as your username + password, and now you’re sending it in clear-text across the wire.
Thanks, I put in the AllowAnonymousAttribute but had to remove it with the MVC 4 update as it is present in the framework now.
One potential security issue I've found is that to get this solution to work correctly you need to remove the <authorization><deny users="?" /> entry from the web.config, otherwise if you aren't authorized you still get redirected to the login page before the AuthorizationAttribute code is executed.
By removing that it opens up any handlers from being secured so anyone could get to those handlers without being logged in, including web crawlers. This presents a huge security hole depending on what handlers you have open. I saw it because of Glimpse and Elmah which show server variables on the client and must be secured (see…/aspnet-session-hijacking-with-google.html).
I fixed it by adding a <location> tag in the web.config for each handler and added <authorization><deny users="?" /> to it to make sure a user was logged in before accessing those pages.
Is there a better way to handle this? Possibly to block all handlers unless a user is logged in?
Awesome stuff. Very precise and highly rich information. Thanks for the article.
This is an excellent article
Thanks
Awesome article.
Super, simple stuff. a very elegant solution.
I made a custom filter attribute to check if accounts were approved and belongs to certain roles.. but if an Action/Controller was decorated with AllowAnonymous my Action filter ignored it and still re routed to denied pages. Was looking for that code with bool skipAuthorization for hours. THANKS!!
Just a small editing issue:
I could be wrong but it seems that the first two points in "Limitation of the LogonAuthorize filter approach" are mute because the check for the AccountController is done with the 'is' operator…not on a magic string matching.
Therefore, if someone was to change the name of the AccountController, then the Filter code would no longer compile and one would easily know to update it…
Furthermore, if you have multiple Areas, each with an AccountController, you would be forced to use their namespaces to specify exactly which controller to Whitelist.
Excellent Article. You really put some thoughts to this | https://blogs.msdn.microsoft.com/rickandy/2011/05/02/securing-your-asp-net-mvc-3-application/ | CC-MAIN-2019-39 | refinedweb | 1,087 | 58.38 |
Want to see the full-length video right now for free?Sign In with GitHub for Free Access
Vim allows us to describe the edits we want to make using a concise and expressive language of key mappings that together we refer to as Vim's command language. This language is one of the most powerful and unique aspects of Vim and is worth investing the time needed to master it.
Each command is made up of two parts: an operation and a section of text; much like a sentence in prose is made of a verb and a noun. The following is a list of some of the operator mappings:
After pressing the key-mapping for a given command Vim will wait for you to
identify the text you want the command to operate on. The simplest commands
are made by simply repeating the operator a second time to act on the current
line. For example, where
d is the operator for "delete",
dd will delete
the whole line. Each of
yy,
cc,
>>,
==behave similarly.
Obviously it would be somewhat limiting if we could only operate on lines, so
luckily we can also identify text by using any motion. Just like you can use
w to move to the next word, you can use
dw to delete to the next word.
This also includes more complex motions such as
t which will further wait
for you to specify a charter to go up "un*T*il". Thus,
dt, would delete
up until the next comma on the current line.
The following table shows some of the many variations of the delete operation
you can build by combining the
d operator mapping with a motion:
Further, all of these motion combinations can be used with other operators
like
c or
v to perform those operations on the same text ranges.
See Vim's help page for motions for a full listing:
:h motion.
Combining motions with operations to form commands gives us a huge array of edits we can perform just by remembering a few operators and the motion mappings we are already using to navigate. However, this approach has the limitation that in order to use the motions, you need already have your cursor at one end of the text range you want to edit.
Text objects are another "noun" that can be used in place of motions that can define a range of text from anywhere within it. For instance, given the following text with the cursor on the first "e" in greet:
def greet puts "hello" end
We can use
diw to delete the "inner word", specifically "greet" in this
case. Note that this would not be possible with a motion operation, e.g.
dw
since we are starting in the middle of the word.
Text objects, like motions, can be used with commands to define a single "change". Unlike motions, text objects allow you to run the command from anywhere inside the text object, rather than just at the end. The following is a partial list of the text objects available in Vim:
For a full listing, see
:h text-objects.
The
ciw ("change inner word"), then type "hello", then hit escape, I've now
composed an edit that can change any word to "hello", simply by pressing
iw I can repeat this change with my
cursor anywhere on any other word.
.command in Vim repeats the last "change" command. This may sound limited, but since "changes" in Vim are compound expressions that combine a command and a motion or text object ("verb" and "noun"), this repeatability is actually quite powerful. For instance, if I were to change a word by running
.. Best of all, since I used the text object
As a rule of thumb, try to think about changes in terms of repeatability. At a
minimum this means trying to use text-objects rather than motions as much as
possible. Every time you press
.and Vim just does the right thing, you'll thank yourself.
The beauty and power of Vim comes from this command language. Now that you have a sense of how to move in Vim, what commands are available, and what text-objects you can use, you can combine these in countless ways. You're not limited to a handful of predefined combinations, but instead you're free to combine the Verbs (commands) and Nouns (motions & text objects) in any way you need.
One benefit is not just in the efficiency and speed of these operations, but
also the surprisingly-limited memorization or thinking needed. Vim's language
is designed to closely map the way we naturally think of the changes we want to
make, and as such there is almost no need to "translate" for Vim. "change a
word" becomes
caw, "delete inside the single quotes" becomes
di', and so
on. The language is intuitive and efficient, leaving you free to focus on the
actual work you want to to get done. | https://thoughtbot.com/upcase/videos/onramp-to-vim-command-language | CC-MAIN-2022-21 | refinedweb | 832 | 66.67 |
So, you've noticed that JBoss runs on port 8080 and to access it you need to go to http://<host>:8080 or https://<host>:8443. Instead, you'd like to have JBoss run on port 80 (http) or 443 (https) so that you can access it via http://<host> or https://<host>.
The reason JBoss runs by default on these higher ports is that on some operating systems (linux, unix, etc.), only priviledged users can bind to low ports (below 1024). Typically, people do not want their web or application servers running as priviledged users because of the security implications.
There are at least three ways to effectively let clients access JBoss over standard HTTP ports (80 / 443).
1.) Use a Load Balancer
Most production environments use this approach as it allows for load balancing and failover to the back end server instances. Also, this allows running multiple JBoss servers on the same machine, but all proxied through the standard ports (80 / 443). In some cases though, this is overkill or not an option.
2.) Use TCP Port Forwarding
In this approach, you can basically forward traffic from one port to another. In this way, JBoss will be running on a higher port like 8080, but all traffic coming into port 80 will be automatically redirected to 8080. In this way it will appear to clients that JBoss is runing on port 80.
There is more discussion of this approach here:
If you go this route, make sure that you setup the connector proxy port in $JBOSS_HOME/server/$CONFIG/deploy/jboss-web.deployer/server.xml so that any generated URLs will have the proxy port instead of the actual port. Should look something like this below, as mentioned here:
<Connector port="8080" ...
Here are the linux commands required to setup port forwarding:
iptables -F iptables -X iptables -t nat -A OUTPUT -d localhost -p tcp --dport 80 -j REDIRECT --to-ports 8080 iptables -t nat -A OUTPUT -d <network IP address> -p tcp --dport 80 -j REDIRECT --to-ports 8080 iptables -t nat -A PREROUTING -d <nework IP address> -p tcp --dport 80 -j REDIRECT --to-ports 8080 /etc/init.d/iptables save /etc/init.d/iptables restart
3.) Run JBoss as a Priviledged User (root) Temporarily Such That It Can Bind to Lower Ports (80 / 443)
Running JBoss as the root user is very simple to do. The problem is that if we just run the application server as root, it can be considered a security vulnerability. So, what we're going to show below is one way in which the JBoss server can be started as root (in order to bind to desired ports) and then changed to be running as a different user after the ports have been bound.
It's actually more complex that one might think to accomplish this. First, we'll walk through exactly what you need to do to make this work for you once everything has been built. Second, we'll walk through an overview of how the pieces work together to make this happen. Third, we'll walk through all the gory details of building the pieces necessary to make this work. If you are running a 32 bit JVM on Linux, this third step is just FYI as the pieces have already been created for you and are attached to this wiki.
What do we need to do to get this working?
- Set the HTTP ports to 80 / 443 as desired in: $JBOSS_HOME/server/$CONFIG/deploy/jboss-web.deployer/server.xml
- Copy the attached libsetuid.so to your LD_LIBRARY_PATH (probably /usr/lib)
- Copy the attached setuid.jar to your $JBOSS_HOME/server/$CONFIG/lib directory
- Copy the attached setuid.sar to your $JBOSS_HOME/server/$CONFIG/deploy directory
- Expand the sar and edit jboss-server.xml to have the correct username you want to run the server as
- Login as the user you want to run the server as
- su to root, keeping the other user's environment:
su -m root
- Set the default group for this root shell to the group your other user is in:
newgrp <group>"
- This is so that directories and files created by root while the server is starting will be accessible by the user that the server ends up running as
- Set the umask for the current shell to have user/group priviledges: "umask 0006"
- Delete any temp files if the server has already been started in the past:
rm -rf $JBOSS_HOME/server/$CONFIG/tmp rm -rf $JBOSS_HOME/server/$CONFIG/workrm -rf $JBOSS_HOME/server/$CONFIG/log
- Start the server
- You'll notice (via "top" or "ps") that the server is initially running as root, but switches to the user you configured in #5 before the server finishes starting. You'll also see something like this in the server log:
19:52:28,569 WARN [SetUidService] Changing UID of process to: apestel 19:52:28,571 WARN [SetUidService] Changed process UID to: apestel --> Successfully set uid/gi
That's it! You've bound the server to priviledged ports, but it's now running as an unpriviledged user!
Ok, now let's get a little overview to see how this all works...
Changing the uid and gid of a running process (JVM in this case) requires executing a C API (setuid and setgid) from within the JVM via JNI. So for this example, we have created:
- a Java class containing a native method that we compile into a standalone JAR file (setuid.jar)
- a shared object library (libsetuid.so) based off the native method in "A" that invokes the C setuid() api
- an MxBean that will start after the JBoss Web container has started and will use the Java Library in "A" to change the user of the running application server to a non-root user
Now, let's look at these three pieces in more detail.
Creating the Java Class with a Native Method
The Java class with the native method is attached (SetUid.java) and shown below.
package org.jboss.community; public class SetUid { static { System.loadLibrary("setuid"); } public native Status setUid(String username); public static class Status { private boolean success; private String message; public Status(boolean success, String message) { this.success = success; this.message = message; } public boolean getSuccess() { return success; } public String getMessage() { return message; } } }
To create the C header for this setuid() native method, you simply need to execute this command:
javah -jni org.jboss.community.SetUid
This will create a file called org_jboss_community_SetUid.h that you can just rename from .h to .c, add the C method implementation, and build it into a shared object library which will be discussed in the next step. The last thing we need to do for this step is compile our Java class and add it to a JAR as shown below:
javac -classpath $JBOSS_HOME/lib/log4j-boot.jar -d ./jarBuild SetUid.java jar -cvf $JBOSS_HOME/server/default/lib/setuid.jar -C jarBuild org
Creating the Shared Object Library for the JNI Class
The shared object library (libsetuid.so) is attached. This shared object library was built for 32 bit Linux (specifically built on Fedora 9), but you can build it for whatever your environment happens to be. It needs to be put somewhere in the LD_LIBRARY_PATH so that the JBoss server will be able to find it and load it. For my testing, I put it in /usr/lib.
Building this shared object library (if you need it for a different platform) is actually not that difficult. Here is the code that needs built (attached as org_jboss_community_SetUid.c):
#include <jni.h> #include <pwd.h> #ifndef _Included_org_jboss_community_SetUid #define _Included_org_jboss_community_SetUid #ifdef __cplusplus extern "C" { #endif jobject getStatus(JNIEnv *env, int successCode, const char * message) { jstring message_str = (*env)->NewStringUTF(env, message); jboolean success = successCode; jclass cls = (*env)->FindClass(env, "Lorg/jboss/community/SetUid$Status;"); jmethodID constructor = (*env)->GetMethodID(env, cls, "<init>", "(ZLjava/lang/String;)V"); return (*env)->NewObject(env, cls, constructor, success, message_str); } JNIEXPORT jobject JNICALL Java_org_jboss_community_SetUid_setUid (JNIEnv *env, jobject obj, jstring username) { const char *user_str = (*env)->GetStringUTFChars(env, username, NULL); struct passwd *pwd = (struct passwd *)getpwnam(user_str); if(pwd == 0) { return (jobject)getStatus(env, 0, "Error getting uid/gid information for user."); } int uid = pwd->pw_uid; int gid = pwd->pw_gid; if(setgid(gid)) { return (jobject)getStatus(env, 0, "Error setting gid for user, current user may not have permission."); } if(setuid(uid)) { return (jobject)getStatus(env, 0, "Error setting uid for user, current user may not have permission."); } return (jobject)getStatus(env, 1, "Successfully set uid/gid."); } #ifdef __cplusplus } #endif #endif
To build this into an executable, just execute the following command:
gcc -o libsetuid.so -shared -I/usr/java/jdk1.6.0_11/include -I/usr/java/jdk1.6.0_11/include/linux org_jboss_community_SetUid.c
That's it, now you've built the only platform specific piece of this solution.
Creating the MXBean SAR to Set the Server's UserID
There are three pieces to creating the MXBean SAR.
First, we need to create the ServiceMBean interface (attached as SetUidServiceMBean.java) as shown below:
package org.jboss.community; public interface SetUidServiceMBean { void setUsername(String username); String getUsername(); void start(); void stop(); }
Second, we need to create the implementation of this MxBean (attached as SetUidService) as shown below:
package org.jboss.community; import org.apache.log4j.Logger; public class SetUidService implements SetUidServiceMBean { private Logger log = Logger.getLogger(this.getClass()); private String username; public void setUsername(String username) { this.username = username; } public String getUsername() { return username; } public void start() { log.warn("Changing UID of process to: " + getUsername()); SetUid.Status status = new SetUid().setUid(username); if(!status.getSuccess()) { log.error("Unable to change process UID to: " + getUsername() + " --> " + status.getMessage()); } else { log.warn("Changed process UID to: " + getUsername() + " --> " + status.getMessage()); } } public void stop() { log.info("Stopping SetUidService."); } }
Third, we need to create the jboss-service.xml file for this SAR (attached as jboss-service.xml) as shown below (note that this is where you'll need to set the real username that you want the process to run as):
<?xml version="1.0" encoding="UTF-8"?> <server> <mbean code="org.jboss.community.SetUidService" name="SetUid:service=SetUid"> <attribute name="Username">apestel</attribute> <depends>jboss.web:service=WebServer</depends> </mbean> </server>
Lastly, we need to build these files into a SAR as shown below:
javac -classpath $JBOSS_HOME/lib/log4j-boot.jar:jarBuild -d ./sarBuild SetUidService.java SetUidServiceMBean.java
jar -cvf $JBOSS_HOME/server/default/deploy/setuid.sar META-INF -C sarBuild org -C sarBuild META-INF
That's it! Now you should be able to do all this from scratch if desired. The first time I did this, instead of creating an MxBean SAR to set the new userid for the process, I created a JSP to do it. That actually works fine and I'll past the code in below, but probably more folks will want the userid to be set automatically at startup, which is why I created the MxBean example shown above. Here's a JSP if you want to set the userid that way:
<%@ page <html> <head> <meta http- </head> <body> <% if(message != null) { out.println("<h2>"+message+"</h2>\n"); } %> <form> <table bgcolor="#ccccff" cellspacing="1" border="1"> <tr><td colspan="2">Specify a new user for this server to run as:</td></tr> <tr><td width="50%">Username:</td><td align="right" width="50%"><input type="text" name="username"></input></td></tr> <tr><td colspan="2" align="right"><input type="submit" value="Change Process Owner"></input></td></tr> </table> </form> </body> </html>
Closing Comments
1.) One might ask why have a separate JAR for the file that loads the shared object library. It turns out that you only want that Class file (with the System.loadLibrary() code) loaded by one class loader. If you were to put it in the SAR archive (which I used to do), it works fine the first time. But the next time you tried to republish the SAR without restarting the server, it would complain that that the shared object library is already loaded. But since it's loaded from a different class loader, it needs to be reloaded again or you'll get UnsatisifedLinkErrors. I don't fully understand all that (there are some related bugs on the Sun website), but figured it was safer to put it in a lib dir where it would only get loaded once by a single classloader. | https://developer.jboss.org/wiki/RunningJBossonPort80or443 | CC-MAIN-2017-22 | refinedweb | 2,060 | 53.51 |
> From: Greg Hudson [mailto:ghudson@MIT.EDU]
> Sent: Sunday, May 12, 2002 2:21 AM
> To: Julian Reschke
> Cc: dev@subversion.tigris.org
> Subject: RE: [Issue 701] - Subversion properties live in an XML
> namespacecalled "svn:"
>
>
> On Sat, 2002-05-11 at 17:28, Julian Reschke wrote:
> > The current situation (for which I raised the bug report)
> simply is that the
> > XML content sent by subversion isn't stricly NS-wellformed XML.
>
> Precisely what XML content is in violation here? I don't think we ever
> use Subversion property names as XML attribute names.
Just try a PROPFIND "allprop" on a subversion repository (I was testing
against the Subversion source repository). You will see properties reported
in a namespace called "svn:". I don't know which kind of properties they
are, but they are certainly there.
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Sun May 12 11:14:58 2002
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2002-05/0454.shtml | CC-MAIN-2015-11 | refinedweb | 175 | 57.77 |
Using the Esplora Arduino means you have a bunch of sensors built into the board. One of these is the Temperature Sensor. Now I am using code directly for the Arduino.cc page.
The Code will return, in the serial window presented by VisualMicro will look like the following. If you put your finger on the temperature sensor, it will increase in value. If you want to cool it down, then put some ice in a leakproof plastic bag and touch it to the sensor carefully. The esplora, unlike the lilypad arduino is not waterproof. I have included the code from: , you might note that the esplora.h file will throw exceptions that don’t make sense, this is likely because you added a floating string in your code.
Reading the temperature
In the Esplora.h there is variables that set the channels for the temperature and how the degrees are presented. If you think about it, this would be an excellent temperature processing system for a long term cyro device to store babies eggs right? Sensors have ranges of accuracies, so for Cryo you would have to check the specifications to see if the temperature sensor works at that low of temperature. Or like the Thermometers used before the idea of electronic monitoring, which were created using hand blown tubes and mercury with humans doing the reading, were not very accurate. This means that those thermometers definitely did not have the ability to be more accurate than 1 degree of precision. The transistor or diode based thermometers might not be any more accurate, that is up to you or the sensor engineer, usually an electrical engineer but could be mechanical to pick the device and advise you as to what the software should do to auto-calibrate the sensor.
Or you could get to pick the temperature sensor for the cryo storage system. Good luck with any law suits…:)
In a later blog, I will discuss how to put this into the Azure cloud.
/*
Esplora Temperature Sensor
This sketch shows you how to read the Esplora’s temperature sensor
You can read the temperature sensor in Farhenheit or Celsius.
Created on 22 Dec 2012
by Tom Igoe
This example is in the public domain.
*/
#include <Esplora.h>
void setup()
{
Serial.begin(9600); // initialize serial communications with your computer
}
void loop()
{
// read the temperature sensor in Celsius, then Fahrenheit:
int celsius = Esplora.readTemperature(DEGREES_C);
int fahrenheit = Esplora.readTemperature(DEGREES_F);
// print the results:
Serial.print(“The Ambient Temperature is: “);
Serial.print(celsius);
Serial.print(” Celsius, or “);
Serial.print(fahrenheit);
Serial.println(” Fahrenheit.”);
// wait a second before reading again:
delay(1000);
} | https://blogs.msdn.microsoft.com/devschool/2014/05/16/internet-of-things-temperature-sensor/ | CC-MAIN-2017-09 | refinedweb | 439 | 55.84 |
- label not showing properly/not showing at all
- Check Compact Framework version in registry
- PropertyGrid - Select Value Cell from code
- How do I change color of row in GridView without a GridView event being fired?
- .Net deployment with Acrobat Reader
- What is wrong with BeginReceive?
- Date problem
- Regular expression to replace including new line
- File Locked after XslCompiledTransform
- C# Changing Color/Font for User Controls based on Windows/Appearance
- ASP.NET 1.1: Country DropDownList
- Overriding a enumerator
- About XML Node Attribute
- Sys.WebForms.PageRequestManagerServerErrorException
- COM vs NET
- automate email sending via outlook
- Date convertion problem
- Generating Website for API documentation using Sandcastle
- problem in GridView
- Q re Com Interop and threading
- Can we add Collection object to Arraylist in c#?
- How to retain the values entered by the user in home page in other pages asp.net
- extracting from an ArrayList
- Treeview in asp.net2.0
- VS 2008 - test driven?
- Lines of code in c#.net
- tab focus
- date fields
- updating a process
- panel message
- enabling n disabling of panels
- HttpWebRequest GetRequestStream - unable to connect
- Regarding Gravitybox scheduling
- encrypt password and credit card details
- technical help on how to Implement a Content Management System - ASP.NET - C#
- CLR Profiler Question
- How to get a updated custom SecurityToken from a WCF server?
- ArrayList to List<myType> and item disposal
- How do you pass values to a script file?
- Capture CTRL + C and CTRL + V event in aspx
- exception during Active Directory Login
- Help for Flicker free drawing in c#
- Alternate for Exit Sub in C#.net.
- Form Activated event in c#.net
- Calculating age in c#.
- document.getelementbyid -- error
- Extension
- how to copy a row and add it as a new row in the bindingsource?
- VB-WEB: The SqlCommand is currently busy Open, Fetching.
- datatable has the haschanges() method?
- Validation
- VB.NET; HDC difference since VB6?
- C# - Begin/EndReceive Loop Woes
- XML to a file problem
- Retrieving data using SQL stored procedure in C# application
- C# xmlDocument to a string
- (asp.net) usercontrol in a repeater
- Setup should remove existing version
- Access Web.config file usign System.xml
- How to get the process running on computer with the help of asp.net
- I am able to open .tif file And Free Handle Write
- Convert a console application to windows service in VS Exp Edition 08
- problem writing to an ini file
- difference between: Path.GetDirectoryName and Directory.GetCurrentDirectory
- Http 403 error
- static variables
- how to create static variables in c#
- ASP.NET 2.0 Publish Website generates no files
- Browser pop-ups
- microsoft baseline security analyzer 2.1
- ASP.NET (VB) Class syntax
- webmsgbox
- ASMX does not recognize System.Messaging namespace?
- Password encyption
- Problem Generating Proxy using WseWsdl3
- Keep class variable/value in events on page?
- c# app Need to store an escaped directory name into an xml config file
- Why my url does not match my page?
- Visual Studio 2005 installation
- Read image to get information from image
- Bring To Front
- difference between the database concepts in vb.net and vb
- What is the future of C++ MFC and CLR?
- About groupbox in c#.net
- .NET C# Windows Service stop() problem
- Get members from a Domain global group over LDAP
- C# Problems developing/installing Shared Add-in Word 2003
- VSTO - Word Document is Saved Empty
- WebService
- C#: IPGlobalStatistics Members
- C#: get current user's username and password
- C#-WEB: Cross page postback and HTML Hyperlink
- the last record cannot be deleted
- Retrieve Outlook messages in windows form
- uploading website
- Hindi Keyboard
- grid view problem
- Play button code
- C# program on handheld
- Building VC++.Net CF application in commandline
- Data not update after closing the showModalDialog Box
- Access, ASP and a remote server
- Problem with Typecast in Generic class "Cannot convert type 'string' to 'T' "
- CreatObject in C#
- String length in bytes
- Datagrid Search
- .NET windows service fails to start when digitally signed
- No.of Users Which Are Logged In
- Values in text area get cleared
- RDLC report
- Random failures of _wfopen
- sound out
- datagrid color change
- executing a script file
- keybd_event strange behaviour
- Screen Development
- Validating XPath
- problem in datagrid view
- Error in windows service: The service did not respond to the start or control request
- bindingsource positionchanged event bug?
- Scripting and vb.net
- VB-ACCESS How to copy a table from one to another DB
- Chinese character in MFC
- Problem with email sending in asp.net
- Cant Login on a sql2005 machine from VS2005 IDE debug mode
- developing a Content Management System using ASP.NET and C
- C#: DOS commands
- how to interchange rows in the excel file and how to display it in txt file using c#
- Database connection methods and their usablility
- combobox problem
- mapping a particular string between files
- C#: server client...
- error in vb.net
- how to display balloon tooltip in infragistics ultrawebgrid
- How to create ExcelAdd-in using(VSTO) VS2005
- Crystal Report
- How to show only one window on run vb.net exe file
- How to issues a build command to Visual Studio via C# code
- C# WebService: How to force a full open/close tags to an empty element using SOAP?
- WebRequest not showing everything....
- WINDOWS EXCEL 2007
- Connection Problem in .NET
- System.ArgumentException: Keyword not supported: 'datasource'
- C#-FORM: TableLayoutPanel Fixed Row
- C# Web Service - Removing <method + result> element in SOAP response
- When a clr has the input type of SqlString it causes nvarchar(4000) to be the parameter type of the function in sql server. how to modify my CLR function to use nvarchar(max) ?
- asp.net security
- Locale Formatting: Phone Numbers
- How to evaluate a string expression to a boolean...
- disable x button in c#
- Images not showing using localhost but are using IIS with Forms Authentication.
- Dispose Pattern -- why void Dispose() is nonvirtual?
- Spatial data types
- LNK4006 warning
- linkbutton inside datagrid not working C#
- FormsAuthentication and hiding username and password
- c# ListBox X2 DataTableX2 copy all between
- Hosting Web Services
- VB.NET Picture Box Image Pointer?
- Windows form - moving controls
- C# Password
- Regional Settings in IIS
- RecordInsertion
- Embed VB.NET usercontrol DLL in Website
- Using Delphi Dll in VB.Net 2005
- Getting VC++ applcation crashed after closing the COM window object
- How to make text in the text box non selectable??
- opener.location.href does not work in Firefox
- Row Filter...
- How to create table at run TIme???
- update rows
- Need Help in Reading Excel File in ASP.NEt
- got problem with exportGridview Data into Excel format
- problem with taking button in gridview and datalist
- Issue Passing string from C# to C++ dll method and manipulating it.
- progress bar
- error: Session state has created a session id--
- Thistall problem
- Threads and OutOfMemoryException
- How to increase the window form size according to adding controls dynmically
- How to check whether email id exists?
- How to setup vb.net data base application on client machine ?
- Help in updating a aspImage in Repeater with JS
- asp.net program is not executing
- Excel to DatagridView VB.net
- How to access dynimically added controls
- arrays
- Getting the Name property of a menuitem in Windows app using C#
- Bindingsource.Contains()Function is not clear to me.
- run crystal report without using DSN
- Best book to Learn Web designing - ASP.NET
- Best book to Learn Web designing
- datagridview
- How to determine if dataset is empty?
- Thread Deadlock
- Null Vs String.Empty
- how will i know when a user modifies a record?
- Retriving Value From different form
- Adding Database Source
- problem entering data using a web form
- .NET 2/Safari 5.0 compatibility with hidden fields
- Ongoing printer issues Spooler etc.
- How do I write an add-in for IE?
- Arrow keys quit working within Excel 2003
- C#.net object persistence
- Check for lost session
- Replicate multiple forms on a page
- C# WEB: Strange file download problem
- string not fully populated in sql
- C# Services/Hiding
- Delayed UI Code
- Any libraries for MPEG?
- Creating Directory on a Different Server
- Access 2000
- Docking a form to a panel
- How to proceed
- C#-Web: Cannot unregister updatepanel with ID XXX since it was not registered ....
- Jet 4.0 problem with office 2007
- Updating Resource file dynamically
- Autoroute 2007 motorway junctions
- C#: performance counters
- sql update problem with vb.net ?
- Validate, Search and Transform XML with XML Hammer 1.0 rc-3
- .net listbox control - remove border
- maxdb
- problem with user control inheritance
- Regualr expression for a string
- Upgrade Web Application from 2.0 to 3.5
- Manage PropertyGrid (property window)
- How to disable opener page
- webdatechooser
- footertemplate not showing up at all - ASP.NET
- datagridview
- want to access the folders in the remote PC ,C#.
- Trap 'Current item cannot be removed from the list because there is no current item.'
- Crystal Report With RADAR
- c# SortedList can't insert items, even not repeated
- 32bit app turns into 64bit in Windows Service
- Change forecolor of Disabled combobox
- source code for zoom window(c#.net)
- How Do I Avoid Code Duplication in Overloaded Functions?
- global.asax issue
- problem in javasrcipt function with asp.net
- Mutex not working under restricted User.
- thread starts python script
- How to get text from iFrame control?
- Check number of concurrent users
- drawing graph in vb.net
- Form Activated Event in c#.Net
- Team GFoundation/Team Explorer 2005 - Workspaces
- PC backup
- Control Moving
- Confusion about ADO.net
- Checking more than 1 radio button (c#)
- Populate web service response from xml file
- How to prevent Found New Hardware Wizard in C#
- Populate web service response from xml file
- vb: dataset.datatable.adddatatablerow("john", "rambo") does not update.
- C# Socket.Send removing carriage return
- VB.NET 2: GetAsyncKeyState on more than one desktop
- Raise an event from a DLL to an application
- override the cookie?
- xhtml and input tag outside form
- asp.net timeout when calling a web service
- client in 1.1, server in 3.5
- How to use cross language compatability ?
- Thread in a asmx page - Problem
- how to open MS Access database in VB 2005
- Click Once Issue
- Check Length of Readstring /reading XML file
- .Net IMAP client
- Trouble obtaining database table values...
- TextBox Array - ASP.NET
- is it possible to create a web appliction without installing iis in visual studio 200
- Creating DataSet in .NET2005 !
- C# Service, unable to write to files.
- How i can get a long web page in differnet a4 size pages using Asp.net2.0
- how to read pdf files vb.net ?
- how to get DateTime Picker control inC#.Net
- Difference !Page.IsPostBack and !IsPostBack
- How I can restart my catalog Programmatically in vb.net or C#.net
- How to bind Gridview with controls ?
- problem regarding detailsview
- what is the alternate to windows indexing service (asp.net 2.0) ?
- Ctrl-c is not working in embedded control in IE
- Return Decimal values problem !!!
- culture and datetime format
- Restart MSSQLSERVER service of MS SQL 2005 in XP (URGENT)
- Outlook Integration ( Creating Appointments )
- Thread Abort Problem (C#, Web Application)
- asp.net: changing Color Of row
- Linking error of CxxFrameHandler3
- How to get a ASCII value for a set of text?
- compare the texts with text Formats
- How to Send Outlook Exchange (2003) Calendar Invite using ASP.Net
- What is missing in my code to addrow in datatable then update?
- Starting a webservice from the command prompt.
- Good Reporting tool
- how to place Image in treeview at rootnode only
- To read an excel file with Excel Class in vb.net
- Modeless form open only on top of its application
- c# report making on Search Based.
- create dynamically uploadfile
- Transform Problem
- Managed c++
- script running error
- To read an excel file with Excel Class in vb.net
- Unchecking a check all checkbox.
- looping through the generic list using c#
- How I Can Get Which Program Runing In Memory Currently And What Is the Memort Address
- any one familiar with webclient class for FTP Upload
- How to monitor a SQL database and its table
- javascript object not found
- Getting the unicode range support by a font
- Https Webservice Client
- Underlying connection closed .net 3.5
- Vb .Net : Reading / Editing XML file using XPath
- Getting a C# program with a manifest file to run in the debugger
- Print multiple copies of crystal report in a page
- Displaying Data in Comboboxes (C#)
- End Process With Windows Service
- DataGrid Paging Question
- Microsoft JScript runtime error: Object expected
- Element Problems in XML FIle
- Create dll in vb.net
- Encryption in vb.net
- Problem with compiling managed c++ code with added c - library with variable named generic
- Problem with compiling managed c++ code with added c - library with variable named generic
- Database application in C# .net
- Restricting of characters
- Get FileUpload control value
- C# WEB: Identity Impersonation Issues
- how to convert microsoft office document,thx
- Is It Hard To Make A Web Site? (new member)
- CrystalDecisions.CrystalReports.Engine.LoadSaveRep ortException when Report is opened
- Regarding Ajax Collapsible panel......
- Desktop bar problem
- Brightness setting problem for 2495 and other wm5 devices
- Image insertion in SQL server !
- Creating a Chm build using Sandcastle in Visual Studio 2008
- Windows Mobile 2003
- Treeview C# winform
- Accessing a form control in a differnt class
- Enabling Gridlayout panel at runtime
- Merge TLB into DLL
- how to do auto generation of numbers in C#.Net
- Passing structure from Managed C++ to C#,
- Who wants to sell its Borland C++ Builder 6 Enterprise license?
- How to get dates from calendar in C#
- How can i redirect from a webmethod? - ASP.NET
- Inserting Nodes between Nodes
- What keys to enable for a number-only textbox aside from backspace
- Alem Free ASP.NET programming software
- Upload more than 2MB files into FTP server directory in ASP.NET(VB) WEb application
- Database Migration
- software document
- How to Transfer data from text file to .CSV format using C # code
- C# :Capturing a user control's graphics...
- How to draw on windows form by draging the mouse and save it as Jpeg file(urgrent)
- Check printer connection
- Revisit, Application takes up 1.5GB of memory
- C#: wmi CPU utilization
- How to get html color code from color pallette in c#.net?
- Log file accessed by an external application
- Display Image
- How to alert user when upload file size limit exceeds
- want to create an application like MS-outlook
- How to give email validation inside a textbox?
- Calling C# function inside VB.NET project
- How to Execute .NET programs into IIS server
- Block Socket Connection Receive Never Throws A Socketexception ????
- Is there a multiple strings.replace function in vb.net?
- How to know a Http request URL is a path or a file name?
- CryptoAPI and .net
- switching between windows display
- [C#] How can I check if an array index is valid?
- C#-Form: Databound controls repeatedly get values from datasource on control change
- Custom Generic List
- .net remoting - remote object
- How to start Web Application automatically afrer IIS is started?
- C#-FORM : Using GetHashCode() to compare files.
- How to use On Click Event of command button
- last website
- Ban Ctrl+V
- what is this error: Invalid number of arguments: function convert().
- how to handle keyboard events in user control .net C#
- Data Reader...
- Make change to combobox in Datagridview
- Sub Columns in Data Grid
- FileSystemWatcher OnChanged
- Would Like to Completely Uninstall .NET
- Passing Parameters For Reporting ASP.NET - C#
- Running an application inside .NET MDI application
- Entourage X Error -2003
- Window application with Internet
- Displaying a .jpg on an .aspx page
- .htaccess and soap
- There is already an open DataReader associated with this Command..
- newtwork printer
- VB C# .. On press key a OR b script..
- multithreading with listview control
- C#-WEB: PDF Reporting solution
- Wpf tooltip exist Direct3d memleak in native code in xp and vista os
- vb to asp
- send string from vb to aspx
- convert xl file to PDF
- Dynamic crystal report
- Fieldset
- split string problem
- c# : control font scaling
- VB.NET : Message Box
- webX Remote Desktop
- how to bind data to form view in asp.net 2.0(not through wizard)
- How to send fax from asp.net 1.1web application?
- XML2Excel: Import to different sheets
- Regular Expressions in C#
- VB.NET -WindowsApp Could not find file
- Sorting Data
- how to load iframe in page?
- linkedcombo box
- what all versions of .net framework support COM isolation (.manifest files)
- I search test user to a Wysiwyg Apache FOP editor
- HELP: Changing ListView ImageList background color for transparent images
- Consecutive Http posts..
- How to run a C# class Main from within a project?
- Disable Button once clicked
- How to read Excel file & save data in array...........
- Insert new XML data using an XPath
- WHERE IS *.ASPX.DESIGNER.VB FILES???????
- Bad Request (400) Error from POX WCF Service When Message Size > 6
- Capture Office Standard buttons Events in .NET
- outlook 2003 tasks list
- Can you create a VISIBLE System.Windows.Forms.Form in a webservice?
- Physical path issue... asp.NET 2.0 - VB
- [ASP.NET 1.1] Infinite Session?
- Sorting by Repeater control
- "The 'maxoccurs' attribute is not supported in this context"
- question about split function
- Resolving references used by a C# COM dll at runtime
- Resolving references used by a C# COM dll at runtime
- Detect process status
- executing Strerd procedures using .net
- executing Stroed procedures using .net
- What does "~" mean in C#?
- Reply me if Possible
- Creating new Office type projects
- C#:How to change the read only textbox's backcolor ?
- sql Database connection
- FTP Maximum Allowed Connections problem
- Incrementing numbers
- C# : How to adjust controls size to form size.
- Encrypting Connection String
- Exception in Crystal Reports
- Programmatically deploy SQL server 2005 reports and Datasource
- C#: how to remotely find if Pc is down
- Grid view sub heading
- how to judge a folder contains one file in C#.net environment ?
- Input Box
- How to use Table Control in C#
- How to use Date control in C#
- Pending Limit Exceeded when asking for Microsoft Learning informat
- use of JavaScript in ASP.NET issues and lessions
- Probelm with FCKEditor
- question on crystal report in Asp.net
- Online Word Document editing
- C# :How to change the focus of Textbox control for Enter key pressed?
- addstring in graphics path +C#...
- saving a voice file in .net
- Child Form Location
- .NET questiones
- how to reference COM dll from C# without the COM dll being registered in the registry?
- Report View C#
- Internet Access Authentication
- when i reference a com object by regsvr32 it then find it in the COM tab, it works ok. but when i reference a com object by referencing the .dll file i get this error in my C# application:
- How to interface scanner in .NET
- How "should" I remember and reuse form size and location?
- Displaying Help from Program
- C# WinApp: AppDomain Unhandled Exception, handler does not force to close app
- Reading IMSI from SimCard
- How to open a file
- Checking File avilabilty
- Setup and deployment project
- C# :changing the caption on a task bar for a form with no title bar
- asp.net w/ Oracle, web page connections not closing
- DNS error on ASPX page - help
- Access querry
- hdc background
- rtl100.pbl
- Manipulating the users "SharePoint permissions " with .NET 2005(C#)
- C# APP .Net 1.1 - Invalid Cast Error using Derived TcpClient with TCPListener
- How do I view/edit values of Macros for Build Commands and Propert
- Listbox is selected the name and corresponding phone number should be should
- How to get the command line of a process in C#
- Converting VB 6 to .net
- Could not find a non-generic method - VB - ASP.NET
- Fill a TreeView with Outlook 2002 folders
- Abstract and Schema
- Looping through a ListView
- How to adjust controls size to form size.
- Deploying .NET web service on Oracle application server
- C# Serialising an array of structs
- Liquid Technologies Announces Availability of Liquid XML 2008 (v6.1)
- Editting an XML file
- Liquid Technologies Announces Availability of Liquid XML 2008 (v6.1)
- Supporting XP Operating system
- sql Database connection
- Ati2evxx.exe - Application Error
- Tab control in web application based C#
- Pop up in web application
- ObjectDataSource and Custom objects
- random exception occurring with Bitmap..ctor() and invalid params
- C# windows datagrid change forecolor of cells in a row
- Line wont show up
- Opacity property for button?
- C#: wmi win32_operatingsystem
- Add column
- Precompiled headers
- WCF COM MSqueue
- CheckBox in Datagrid
- Hi guys, i need simple help for my Project in C++/CLR...
- How to pass parameter to sub report in crystal report run time?
- winapp: how to display the autocomplete source programmatically?
- ChessClock does not 'see' my .NET?
- Asp.Net - Persistent Checkbox in Datagridview
- how to read textfile on network , when i access throgh server it works but when i rea
- get a xml node as-it-is
- Grid computing in c#
- database update
- asp.net to pdf file
- to find coordinates
- How to get the Virtual Directory's physical path...
- Arabic Text. Point Problem
- Create Rows and Columns
- How to add image control at run time in ASP.NET
- log4net reverse pattern
- Managed or unmanaged?
- winapp: inserting a rowcount column in a datagridview
- Simple Help needed in C++ / CLR .. plz ...
- winapp: how to display rowcount in rowheader in datagridview
- How can I get a process id?
- Find the folder
- Store Reference to a Value Type
- so, what is nnext then | https://bytes.com/sitemap/f-312-p-26.html | CC-MAIN-2019-43 | refinedweb | 3,531 | 56.15 |
Another Example—Two-Dimensional Slices
The
bigdigits program described next reads a number entered on the command line (as a string), and outputs the same number onto the console using “big” digits. Back in the twentieth century, at sites where lots of users shared a high-speed line printer, it was common practice for each user's print job to be preceded by a cover page that showed some identifying details such as the username and the name of the file being printed, using this kind of technique.
I'll review the code in three parts: first the imports, then the static data, and then the processing. But right now, let's look at a sample run to get a feel for how it works:
$ ./bigdigits 290175493
Each digit is represented by a slice of strings, with all the digits together represented by a slice of slices of strings. Before looking at the data, here is how we could declare and initialize single-dimensional slices of strings and numbers:
longWeekend := []string{"Friday", "Saturday", "Sunday", "Monday"} var lowPrimes = []int{2, 3, 5, 7, 11, 13, 17, 19}
Slices have the
form []Type, and if we want to initialize them we can immediately follow with a brace-delimited, comma-separated list of elements of the corresponding type. We could have used the same variable declaration syntax for both, but have used a longer form for the
lowPrimes slice to show the syntactic difference and for a reason that will be explained in a moment. Since a slice's
Type can itself be a slice type we can easily create multidimensional collections (slices of slices, etc.).
The
bigdigits program needs to import only four packages.
import ( "fmt" "log" "os" "path/filepath" )
The
fmt package provides functions for formatting text and for reading formatted text. The
log package provides logging functions. The
os package provides platform-independent operating-system variables and functions including the
os.Args variable of type
[]string (slice of strings) that holds the command-line arguments. And the
path package's
filepath package provides functions for manipulating filenames and paths that work across platforms. Note that for packages that are logically inside other packages, we only specify the last component of their name (in this case
filepath) when accessing them in our code.
For the
bigdigits program we need two-dimensional data (a slice of slices of strings). Here is how we have created it, with the strings for digit 0 laid out to illustrate how a digit's strings correspond to rows in the output, and with the strings for digits 3 to 8 elided.
var bigDigits = [][]string{ {" 000 ", " 0 0 ", "0 0", "0 0", "0 0", "0 0", " 000 "}, {" 1 ", "11 ", " 1 ", " 1 ", " 1 ", " 1 ", "111"}, {"222","2 2"," 2"," 2 ","2 ","2 ","22222"}, // ... 3 to 8 ... {" 9999", "9 9", "9 9", " 9999", " 9", " 9", " 9"}, }
Variables declared outside of any function or method may not use the
:= operator, but we can get the same effect using the long declaration form (with keyword var) and the assignment operator
(=) as we have done here for the
bigDigits variable (and did earlier for the
lowPrimes variable). We still don't need to specify
bigDigits' type since Go can deduce that from the assignment.
We leave the bean counting to the Go compiler, so there is no need to specify the dimensions of the slice of slices. One of Go's many conveniences is its excellent support for composite literals using braces, so we don't have to declare a data variable in one place and populate it with data in another.
The
main() function that reads the command line and uses the data to produce the output is only 20 lines.
func main() { if len(os.Args) == 1 { fmt.Printf("usage: %s <whole-number>\n", filepath.Base(os.Args[0])) os.Exit(1) } stringOfDigits := os.Args[1] for row := range bigDigits[0] { line := "" for column := range stringOfDigits { digit := stringOfDigits[column] - '0' if 0 <= digit && digit <= 9 { line += bigDigits[digit][row] + " " } else { log.Fatal("invalid whole number") } } fmt.Println(line) } } | http://www.drdobbs.com/open-source/getting-going-with-go/240004971?pgno=3 | CC-MAIN-2013-48 | refinedweb | 678 | 58.52 |
Universal per portlet instance configuration for all usersM Chan Jul 1, 2008 9:10 AM
When portlet preference is enabled, difference preferences are stored for admin and normal user, for example, I have an attribute called "name" in portlet.xml, when changing the name as an admin user, the changes wouldn't be reflected to normal users. So basically normal user and admin user uses different set of preferences for each portlet instance. How do I overcome that? What I want to achieve is that by changing preferences in admin mode, normal users could also see those changes.
1. Re: Universal per portlet instance configuration for all useThomas Heute Jul 1, 2008 9:30 AM (in response to M Chan)
Portlet preferences at the instance level are shared by anyone, there is no scoping per role.
There are 3 scopes:
- portlet
- portlet instance
- user
2. Re: Universal per portlet instance configuration for all useM Chan Jul 1, 2008 11:08 AM (in response to M Chan)
How do I specify which preference to which scope? I just want to do portlet instance scope. I know for sure that different portlet instances of the same portlet definition can have different preferences. But those preferences varies for different users......thanks and please help
3. Re: Universal per portlet instance configuration for all useThomas Heute Jul 1, 2008 11:28 AM (in response to M Chan)
The doc is made to help you
You can also use the admin portlet.
4. Re: Universal per portlet instance configuration for all useM Chan Jul 1, 2008 3:20 PM (in response to M Chan)
I have already tried to do that, but nothing really work the way i wanted.
e.g. in the news portlet, i have set it to read rss feed from yahoo, in admin mode. But when i logout, the news portlet return to default. So i edit security settings..... but nothing helps....
5. Re: Universal per portlet instance configuration for all useThomas Heute Jul 1, 2008 3:44 PM (in response to M Chan)
If you go to the instances tab of the admin portlet change the preference, all users will see that new preference unless they overrided the value by a personal (user scope) preference.
6. Re: Universal per portlet instance configuration for all useM Chan Jul 1, 2008 4:42 PM (in response to M Chan)
Good, i finally get it now, I used edit mode before, and I think that's why. I did not use the instances tab. Thanks very much!!!!!
7. Re: Universal per portlet instance configuration for all useM Chan Jul 4, 2008 11:38 AM (in response to M Chan)
What if i need some logic to translate information from database into human readable form??
say if I want a portlet to display some user in a usergroup, I cannot ask the admin to input the groupID directly, a drop down box should be there to provide such selection. With edit and admin mode, the preference are not stored on a per / portlet instance basis. It only happens if i use the instance configuration in the admin portlet..... can anyone help??
8. Re: Universal per portlet instance configuration for all useMartin Rostan Dec 16, 2008 3:21 PM (in response to M Chan)
Hi, somebody knows a way to solve this ?
I mean, how can I customize the editor to set the preferences for all the users?
Thanks in advance
Martin
9. Re: Universal per portlet instance configuration for all useMartin Rostan Dec 19, 2008 12:44 PM (in response to M Chan)
Hi,
we have implemented a new CustomizationManager (by extending the default CustomizationManagerService), we have defined "scopes" associated to the instances, if the scope is 'window' the customization is associated to the window (and not the user), so you can edit the preferences (with the customized portlet editor) for a window, and everybody that access that window will see the portlet customized with the same preferences.
We need this feature to allow portal managers to define portlets (and portals) that will be accessed by anonymous users.
Here is our code:
public class ScopeCustomizationManager extends CustomizationManagerService { @Override public Instance getInstance(Window window, User user) throws IllegalArgumentException { if (window == null) { throw new IllegalArgumentException("No window provided"); } Content content = window.getContent(); String instanceId = ((PortletContent) content).getInstanceRef(); if (instanceId == null) { return null; } Instance instance = getInstanceContainer().getDefinition(instanceId); if (instance != null) { String scope = null; try { Object scopePropertyValue = instance.getProperties().get("sharingScope"); if (scopePropertyValue != null && (scopePropertyValue instanceof List)) { List valuesList = (List) scopePropertyValue; if (valuesList.size() > 0) { Object v = valuesList.get(0); if (v instanceof String) { scope = (String) v; } } } } catch (PortletInvokerException e) { e.printStackTrace(); } if (scope == null || scope.equals("user")) { return super.getInstance(window, user); } if (user != null && isDashboard(window, user)) { String dashboardId = window.getId().toString(); instance = instance.getCustomization(dashboardId); } else if (scope.equals("window")) { instance = instance.getCustomization(window.getId().toString()); } else if (scope.equals("portal")) { instance = instance.getCustomization(window.getPage().getPortal().getId().toString()); } else { instance = super.getInstance(window, user); } } return instance; } }
Maybe somebody on the JBoss Portal Team may give us an opinion about this feature, and our implementation, we think it's a very important feature for an enterprise portal.
Thanks in advance,
Martin | https://developer.jboss.org/thread/126269 | CC-MAIN-2017-47 | refinedweb | 867 | 55.95 |
I have an 'older' machine, a Toshiba M400. Vista isn't blazingly fast on this machine. Here are some things I've tried to squeeze more out of this laptop...
(Before you jump in, make sure you get the latest BIOS and Vista drivers for your model of laptop!)
1) Before you install Vista, replace the system drive with a new 7200 rpm drive suitable for your laptop. Make sure it has a decent disk cache (e.g. 8 MB+). Something like a Seagate Momentus 7200 rpm 2.5" laptop drive might be suitable, but check the dimensions and heat specs to be sure. This requires some simple screwdriver work to install. After installing Vista, defrag your drive twice.

2) Make sure you have at least 2 GB of system RAM.

3) Match the system RAM with the same amount of ReadyBoost memory. Make sure you get a compatible USB stick, because ReadyBoost is picky. Alternatively (and better), use a fast SD card, such as a 2 GB Sandisk Extreme 3. I've been told that ReadyBoost gets better under Vista SP1.
4) Switch off Aero glass if your machine doesn't have the latest graphics hardware. You can do this easily by selecting a non-glass theme.
5) Take control of indexing. If you have large folders of email in Outlook 2007 (or 2003) or many files on your hard drive, switching off Windows Desktop Search is a drastic option to try:
Start – Control Panel – Admin Tools – Services – Windows Search – Stop, and Properties, Disable
This is unworkable for me because I rely on Instant Search in Outlook 2007. Instead, I selectively removed file indexing so that only Outlook Instant Search is functioning (no file search). This seems to help performance:
Start – Control Panel – Indexing Options – Modify – uncheck everything except Microsoft Office Outlook
Indexing is CPU-intensive, so it might be better to put indexing on a periodic schedule instead of allowing it to run in the background.
It is difficult to pinpoint performance issues when Vista, Outlook and Windows Desktop Search are all involved. However, your Outlook mailbox size is worth examining. If you have a lot of mail, or you use Outlook for daily RSS feeds, these can swell the size of the Outlook OST (or PST) file. When this file exceeds 2 GB, the amount of disk activity by Outlook increases significantly, impacting performance on many common mail and folder operations. There is a fix that alters Outlook's disk behaviour to improve this.

Alternatively ... I suppose you could consider a different integrated search.
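As a quick check, a short script can flag oversized mail stores. Here is an illustrative Python sketch (the folder to scan and the exact 2 GB threshold are assumptions you should adjust for your own profile location):

```python
import os

def find_large_mail_stores(root, limit_bytes=2 * 1024**3):
    """Walk a folder tree and flag .pst/.ost files larger than limit_bytes."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith((".pst", ".ost")):
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                if size > limit_bytes:  # over the ~2 GB performance threshold
                    hits.append((path, size))
    return hits
```

Point it at the folder where your Outlook data files live; the exact path varies by Windows version and profile.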
6) To eliminate the “Connecting...” delay in IE7, switch off the Links toolbar. For some reason, loading these links takes ages while IE7 looks for their icons. This made a lot of difference to my IE7 startup performance, which was driving me crazy.
7) If you don't really use it, switch off the Windows Sidebar on the desktop. Some people claim these gadgets take memory and other system resources. I only ran two clocks as gadgets, which can also be achieved in the System Tray using the normal Windows Vista clock, so I switched off the Sidebar (see the Additional Clocks tab under Clock Properties).
8) Wait for Vista Service Pack 1; it's on the way. Meanwhile, be sure to allow Microsoft Update to run on your Vista machine: they are already delivering Vista performance fixes via this channel, so things should just magically get faster.
I hope this helps someone else; these are some of the things I've been looking at. It's a great pity to switch off the Vista cool stuff (like glass, search and sidebar), but on an older machine, some of the above might help Vista performance.
This posting is provided "AS IS" with no warranties, and confers no rights.
I have just installed Outlook 2007 beta 2, because I'm keen to see what new things have been added, to help my work.
So far I haven't found too many changes of significance. This is what seems to stand out:
1) The UI has been 'gilded' - general changes to fonts and the chrome of the application, plus new views in most places. In some cases this is confusing because you can't find familiar things like the Save button on an open message! I am sure I will get used to this. This is part of the new Office look and feel.
2) Tasks due today. In the calendar view, you can now see tasks due at the foot of each day. This is nice - you can drag tasks around and see what has to be done without leaving the calendar. This is closer to how you use a paper diary (wonder why we don't copy this more closely?)
3) RSS feeds. I will be using this. It doesn't have a blog posting mechanism, only reading.
4) Busy!!! It's an extremely busy interface. Ugh. Take a look at the screen shot below, grabbed from my 14" laptop screen. I have deliberately blurred this so you can get a feel for the "information confrontation" which is the new Outlook experience. It's all about information density, and apparently More Is Better! It's amazing the amount of information Outlook pushes at you. There appears to be very little differentiation: what is more important or less important in all this data?
This is very significant. In my opinion, Outlook is an irony - a productivity tool which distracts and overloads. Outlook knows so much about me - my contacts, my activities, my communications - and yet it doesn't use this knowledge to guide what information should be given or hidden in a given context. Instead it just presents as much as possible. I want Outlook to present me with clarity and focus, not information overload. All of this is doubly ironic since Microsoft Research is studying concepts like "continuous partial attention" and attention being the new commodity.
What's missing in Outlook 2007?
Personally I think the big missing ticket is personal project management. Outlook still doesn't understand the idea of hierarchical tasks that contribute to an overall objective I'm trying to reach. This idea is in keeping with David Allen and Sally McGhee's work on getting things done (Sally's MS-Press book is full of ideas on this subject, but the new Outlook doesn't support her recommendations particularly well). Elsewhere there's a very unconvincing blog post by Melissa Macbeth on why better task management and personal projects were omitted from Outlook 2007. She recommends using Microsoft Project instead! Seriously? (I don't want to be mean to Melissa's blog, she does have useful content on using Outlook - worth a visit).
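To make the idea concrete, a hierarchical task model is easy to sketch in code. The following Python construction is purely illustrative and my own (it is not anything Outlook exposes): each task may have subtasks, and a parent's progress rolls up from the leaves toward the overall objective.

```python
class Task:
    """A task with optional subtasks; progress rolls up from the leaves."""
    def __init__(self, title, subtasks=None, done=False):
        self.title = title
        self.subtasks = subtasks or []
        self.done = done

    def progress(self):
        # A leaf is 0.0 or 1.0; a parent averages its children's progress.
        if not self.subtasks:
            return 1.0 if self.done else 0.0
        return sum(t.progress() for t in self.subtasks) / len(self.subtasks)

project = Task("Ship v1.0", [
    Task("Write spec", done=True),
    Task("Implement", [Task("Core", done=True), Task("UI")]),
    Task("Test"),
])
print(project.progress())  # → 0.5
```

Even this much structure would let a task pane answer "how far along is this objective?", which a flat task list cannot.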
Overall first impressions of Outlook 2007 - not a huge amount of change, a busier UI with new chrome, and daily tasks are nice. The lack of personal project management features leaves an open door for 3rd parties to make clean, uncluttered personal productivity software as a much-needed add-on to Outlook, including hierarchical task lists. I'm in the market for such a product!
As I write this, I am reminded of that classic book by Alan Cooper (The Inmates Are Running The Asylum) which talks about how software UI design lost the plot about 10 years ago. A recommended read.
This drove me mad, but the answer was simple.
If you have a slide with an embedded piece of audio (such as a narrated voice-over), it's not obvious how to extract that to a file. You can try to copy the object, but it won't paste into Sound Recorder. And PowerPoint has no menu option to "Save media object."

The solution is to 'Save-As' the PowerPoint deck as a web page (you can choose to save a subset of the slides). Then you'll find the individual sound files in the sub-folder that contains the pieces (graphics, etc.) of the web page. You can fetch your sound file from there. You may find there are two versions of each sound file, one in Windows Media Audio (WMA) and the other a wave file (WAV).

(Note that videos aren't embedded into PowerPoint; they are linked files. Also, while narrated sounds are embedded within each PowerPoint slide, other embedded audio files are only embedded up to 99 KB - beyond this, they are linked files, just like video content. The idea is to prevent your PowerPoint getting too large. I think you can change this size limit under Options so that bigger sound files will embed directly into your slide deck.)
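A related aside: in the newer Office Open XML format, a .pptx file is a ZIP container, so embedded media can be pulled out directly with a short script. The sketch below assumes the media live under ppt/media/, as that format lays them out; it does not apply to the older binary .ppt files discussed above.

```python
import os
import shutil
import zipfile

def extract_media(pptx_path, out_dir):
    """Copy every file under ppt/media/ out of a .pptx (ZIP) container."""
    os.makedirs(out_dir, exist_ok=True)
    with zipfile.ZipFile(pptx_path) as z:
        for name in z.namelist():
            if name.startswith("ppt/media/") and not name.endswith("/"):
                target = os.path.join(out_dir, os.path.basename(name))
                with z.open(name) as src, open(target, "wb") as dst:
                    shutil.copyfileobj(src, dst)
```

Run it with the path to your deck and an output folder, and the sound (and image) files appear in that folder with their original names.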
Hope this helps someone. Sometimes you have to work around the product to get what you want.
I can't think of a more depressing thought than that software project management never becomes more productive nor predictable. Ever since I was first on a team that faced "big software failure" I have been keenly interested in this dance. It's something bigger than the coding and the design, invisible and yet tangible in its effects. This is why the Jim McCarthy videos are so funny and poignant (21 Rules of Thumb for Delivering Great Software on Time, 1995). This was why I became involved in MSF in 1998.
Thankfully, the face of software project management is changing because practical methodologies are being integrated into the dev tools. Effective methodologies are just collections of good practices, but when they are adopted by teams and enshrined in tools, they can make a difference to the software crisis that Brooks talks about ("The Mythical Man-Month").
While there will never be a world of extreme programmers and we still need far greater automation in software production, nonetheless tools like Visual Studio Team System (VSTS) are a step in the right direction.
This could have a positive impact on businesses that adopt it. Francis Delgado of Avanade agrees.
I recently presented MSF v4.0 (alpha) at TechReady 2, Microsoft's internal technical training event in Seattle. This is the first time I've shared a stage with Bill - but not at the same time ;o)
Microsoft has a long history with the Microsoft Solutions Framework (MSF), which started back in 1993 -- that's a 12 year heritage! There aren't many frameworks that can claim to have added value to the development process for over a decade. I've been involved with MSF since 1998.
There have been some comments on the newsgroups that Microsoft invented MSF in recent times to badge its version of Agile which is present in Visual Studio Team System (VSTS). In fact, the original MSF from 1993 contains many Agile-sounding principles.
My friend Clementino Mendonca located the original MSF v1.0 guidance and highlighted some statements it makes, for example about design documents, or the lack of them. If you read Agile authors you will hear refreshing statements like "Produce no document unless its need is immediate and significant". Elsewhere you can read pieces on Agile that ask questions like "Why do people document?" and suggest that teams should produce "barely enough" design documentation.
Back in 1993, MSF v1.0 was saying something very similar:
So it seems that "agile thinking" isn't new: it's a bunch of good ideas, expressed in MSF and elsewhere. It would be interesting to trace the origins of the daily build and other good dev team practices. Who did it first?
Programming is a bit like music - you can develop your skills while you're away from the keyboard. All you have to do is think! Recently I was wondering (in the shower, actually) how briefly I could write a wildcard function. So just as an exercise, I had a go... here's my attempt.
The aim isn't to implement grep or perl style globbing, but just a simple wildcard match on * and ? (something like SQL Server wildcards):
- * means match zero or more characters
- ? means match any one character
Because this is an exercise in brevity, not performance, my version is recursive. Can you write a shorter version in C++?
bool WildMatch(const char *patt, const char *str)
{
    while (*patt)
    {
        if (*patt == '*')
        {
            // '*' matches zero or more characters: try the rest of the
            // pattern at every remaining position of the string
            patt++;
            while (*str)
                if (WildMatch(patt, str))
                    return true;
                else
                    str++;
        }
        else if ((*str) && ((*patt == '?') || (*patt == *str)))
        {
            patt++;
            str++;
        }
        else
            return false;
    }
    return (!*patt) && (!*str);   // match only if both are fully consumed
}
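For anyone who wants to sanity-check the semantics rather than the brevity: the same recursive algorithm transliterates almost line for line into Python, where it can be cross-checked against the standard library's fnmatch (which applies the same * and ? rules to patterns this simple). This is just an illustrative sketch, not part of the original exercise:

```python
from fnmatch import fnmatchcase

def wild_match(patt: str, s: str) -> bool:
    """Recursive * / ? matcher, mirroring the C++ WildMatch above."""
    if not patt:
        return not s                      # both exhausted -> match
    if patt[0] == '*':
        # '*' matches zero or more characters: try every split point
        return any(wild_match(patt[1:], s[i:]) for i in range(len(s) + 1))
    if s and (patt[0] == '?' or patt[0] == s[0]):
        return wild_match(patt[1:], s[1:])
    return False

# Cross-check against the standard library on a few cases
for patt, s in [('*', 'abc'), ('a?c', 'abc'), ('a*d', 'abcd'), ('a*d', 'abce')]:
    assert wild_match(patt, s) == fnmatchcase(s, patt)
```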
I love the connection between abstract computing and the real world. I've had this fascination ever since 1986 when I built circuits to sample sound using an Atari 800. Plugging stuff together is what invention and integration are all about.
My buddy Matt and I are building some horn loudspeakers with a difference. These are based on the Cornu / Poiram design which mounts four concentric spiral logarithmic horns behind a single loudspeaker driver. This design means you get small, wall-hanging speakers with great sound. I first heard about this design from FullRangeDriver.com who have lots of committed speaker enthusiasts. We are using Fostex FE108 drivers, supplied from Madisound who were cheaper to buy direct from the US than in Australia.
This is what the Cornu speakers, designed by Daniel Ciesinger, look like (click the photo to visit his site):
In Feb 2004, there was a competition in Quebec to build the nicest sounding speakers around the Fostex 103 driver. There were some very fancy entries, but Poiram's simple back-loaded horns were a winner. His design essentially copies the Cornus above. This is what the Poiram speakers look like inside:
Our speakers will be similar to the above, ie 650mm square and 150mm deep, so they can be wall mounted. Our internal spirals will be 2mm polycarb (Poiram seems to use aluminium!) which fits into tracks cut into the wood. You can't just put any old horns together -- the length, mouth and throat size need to be calculated to match the loudspeaker. We used the excellent HORNRESP from David McBean to do this.
Initially we tried to create the speakers by hand, using a home-made metal arm to cut spirals into timber with a router. Matt devised a geometric way to plot spirals using a compass but with a router attached in place of a pencil! The compass was a smart idea but it proved very slow work and we broke some router bits in the process (cheap tools waste time). We didn't like the inaccuracy and realised we needed to cut 128 arcs for one pair of speakers - very tedious.
I wrote about our first attempts here. Matt and I both agreed this was too hard -- and being computer people, there had to be another way...
Enter Excel!
I spent many hours creating a spreadsheet and after four major revisions we had 7000 points of data describing the progression of a 2.4 metre spiral in millimetres.
The great thing about Excel was being able to verify the appearance of the spiral by simply charting it (see below). It was easy to check the length of the spiral and its ability to fit inside a 650mm square. Another reason we abandoned the compass method above was that it was only capable of producing a spiral with a single expansion coefficient, whereas using a spreadsheet allows us to produce spirals that increase in radius with any logarithmic coefficient we like. Excel's goal-seek and solver add-in tools were critical in managing several parameters at once: the growing spiral length, the moving angle, the number of turns of the spiral, the box size. Try doing this on your own and it does your head in!
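For anyone repeating the spreadsheet step without Excel: the data is just a logarithmic spiral, r = a * e^(b*theta), sampled finely and converted to x,y points. A small Python sketch of the idea; the coefficients below are illustrative placeholders, not our actual horn parameters (those came out of HORNRESP and the solver work described above):

```python
import math

def spiral_points(a=10.0, b=0.12, turns=4, steps_per_turn=1000):
    """Sample the logarithmic spiral r = a * e^(b * theta); returns (x, y) pairs in mm."""
    pts = []
    for i in range(turns * steps_per_turn + 1):
        theta = 2 * math.pi * i / steps_per_turn
        r = a * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def path_length(pts):
    """Total polyline length, for checking the horn length against the target."""
    return sum(math.hypot(q[0] - p[0], q[1] - p[1]) for p, q in zip(pts, pts[1:]))

pts = spiral_points()
print(f"{len(pts)} points, spiral length {path_length(pts):.0f} mm")
```

Charting the points, or writing them out as CSV for a CAD tool as we did, is then a one-liner.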
We then exported the Excel data into RibbonSoft's qCAD, a cheap and capable 2d CAD tool ($40 AUD). Matt exported our plot data as CSV and turned it into LINETO(x,y) instructions using a FOR statement in a cmd shell! These instructions were then consumed by qCAD and we exported the result into DXF format (AutoCAD R12) and sent it to a local laser cutting shop (ComputerCut).
They charged us about $100 to laser cut our spiral into 0.9mm stainless steel:
Now THAT'S what I call laser printing! I reckon you could get seriously addicted to doing this!
We made two of these templates, one for a 650mm box and another for an 814mm box (for deeper bass).
The next step of the project is to make a 20mm collar to run around the spiral track, which should allow us to guide the router around the timber and cut a very accurate spiral track for our polycarb strips to sit in. We will place the steel template on the timber and cut one spiral, then turn 90 degrees and cut another, until we have four concentric spiral tracks. This should make life much easier, and we can make several pairs of speakers. We have purchased a high quality router bit this time.
We will probably continue this project in 2006; I will blog more if there's interest. Any comments are welcome. Sorry there's no code, but I thought it was an interesting use of Excel.
Hi
I am new to Clojure and specifically the La Clojure plugin for IDEA. I have created some .clj files and have Clojure created as a facet for my IDEA module. I now need to call my Clojure code from Java. How can I get La Clojure to compile my Clojure code? I have tried making the project and individually compiling .clj files, but no related class files appear in my out/production directory.
Any help appreciated.
thanks
Paul C. Meehan
Hi
I'm having a similar issue. In my particular case I've got a gen-class declaration in one of my Clojure files, and I'm trying to refer to that class from another module; however, IntelliJ can't find it. If I produce a jar file and depend on that instead, it works, but I'd rather just declare a module dependency.
Just in case someone is searching for this:
go to
File -> Settings -> Build, Execution, Deployment -> Compiler -> Clojure Compiler
It's likely the option is set to "Don't compile Clojure namespaces".
All you need to do is change it; you should also select "Show reflection warnings".
It also works for Cursive Clojure.
Going Native with JCA Adapters
By Antony Reynolds-Oracle on Dec 30, 2013
Formatting JCA Adapter Binary Contents
Sometimes you just need to go native and play with binary data rather than XML. This occurs commonly when using JCA adapters: the file to be written may be in binary format, or the TCP messages written by the Socket Adapter may be in binary format. Although the adapter has no problem converting Base64 data into raw binary, it is a little tricky to get the data into Base64 format in the first place, so this blog entry will explain how.
Adapter Creation
When creating most adapters (application & DB being the exceptions) you have the option of choosing the message format. By making the message format “opaque” you are telling the adapter wizard that the message data will be provided as a base-64 encoded string and the adapter will convert this to binary and deliver it.
This results in a WSDL message defined as shown below:
<wsdl:types>
<schema targetNamespace=""
xmlns="" >
<element name="opaqueElement" type="base64Binary" />
</schema>
</wsdl:types>
<wsdl:message
<wsdl:part
</wsdl:message>
The Challenge
The challenge now is to convert our data into a base-64 encoded string. For this we have to turn to the service bus and MFL.
Within the service bus we use the MFL editor to define the format of the binary data. In our example we will have variable length strings that start with a 1 byte length field as well as 32-bit integers and 64-bit floating point numbers.
The example below shows a sample MFL file to describe the above data structure:
<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<!-- Enter description of the message format here. -->
<MessageFormat name='BinaryMessageFormat' version='2.02'>
<FieldFormat name='stringField1' type='String' delimOptional='y' codepage='UTF-8'>
<LenField type='UTinyInt'/>
</FieldFormat>
<FieldFormat name='intField' type='LittleEndian4' delimOptional='y'/>
<FieldFormat name='doubleField' type='LittleEndianDouble' delimOptional='y'/>
<FieldFormat name='stringField2' type='String' delimOptional='y' codepage='UTF-8'>
<LenField type='UTinyInt'/>
</FieldFormat>
</MessageFormat>
Note that we can define the endianness of the multi-byte numbers; in this case they are specified as little endian (Intel format).
I also created an XML version of the MFL that can be used in interfaces.
The XML version can then be imported into a WSDL document to create a web service.
Full Steam Ahead
We now have all the pieces we need to convert XML to binary and deliver it via an adapter using the process shown below:
- We receive the XML request; in the sample code, it is delivered as a web service.
- We then convert the request data into MFL format XML using an XQuery and store the result in a variable (mflVar).
- We then convert the MFL formatted XML into binary data (internally this is held as a java byte array) and store the result in a variable (binVar).
- We then convert the byte array to a base-64 string using javax.xml.bind.DatatypeConverter.printBase64Binary and store the result in a variable (base64Var).
- Finally we replace the original $body contents with the output of an XQuery that matches the adapter expected XML format.
The diagram below shows the OSB pipeline that implements the above.
A Wrinkle
Unfortunately we can only call static Java methods that reside in a jar file imported into service bus, so we have to provide a wrapper for the printBase64Binary call. The below Java code was used to provide this wrapper:
package antony.blog;
import javax.xml.bind.DatatypeConverter;
public class Base64Encoder {
public static String base64encode(byte[] content) {
return DatatypeConverter.printBase64Binary(content);
}
public static byte[] base64decode(String content) {
return DatatypeConverter.parseBase64Binary(content);
}
}
Wrapping Up
Sample code is available here and consists of the following projects:
- BinaryAdapter – JDeveloper SOA Project that defines the JCA File Adapter
- OSBUtils – JDeveloper Java Project that defines the Java wrapper for DataTypeConverter
- BinaryFileWriter – Eclipse OSB Project that includes everything needed to try out the steps in this blog.
The OSB project needs to be customized to have the logical directory name point to something sensible. The project can be tested using the normal OSB console test screen.
The following sample input (note 16909060 is 0x01020304)
<bin:OutputMessage xmlns:
<bin:stringField1>First String</bin:stringField1>
<bin:intField>16909060</bin:intField>
<bin:doubleField>1.5</bin:doubleField>
<bin:stringField2>Second String</bin:stringField2>
</bin:OutputMessage>
Generates the following binary data file, displayed using "hexdump -C". In the dump you can pick out the length-prefixed strings, the little-endian int (04 03 02 01), and the little-endian double (00 00 00 00 00 00 f8 3f).
$ hexdump -C 2.bin
00000000 0c 46 69 72 73 74 20 53 74 72 69 6e 67 04 03 02 |.First String...|
00000010 01 00 00 00 00 00 00 f8 3f 0d 53 65 63 6f 6e 64 |........?.Second|
00000020 20 53 74 72 69 6e 67 | String|
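As an independent cross-check of the layout (no OSB or MFL involved), the same record can be rebuilt with Python's struct module: one-byte length-prefixed UTF-8 strings plus a little-endian int32 and float64, then base64-encoded the way the opaque adapter schema expects. This is purely illustrative and not part of the sample code:

```python
import base64
import struct

def pstring(s: str) -> bytes:
    """UTF-8 string with a 1-byte length prefix (the MFL UTinyInt LenField)."""
    b = s.encode("utf-8")
    return bytes([len(b)]) + b

record = (
    pstring("First String")
    + struct.pack("<i", 16909060)   # LittleEndian4 -> 04 03 02 01
    + struct.pack("<d", 1.5)        # LittleEndianDouble -> 00 00 00 00 00 00 f8 3f
    + pstring("Second String")
)

print(record.hex())                          # compare with the hexdump above
print(base64.b64encode(record).decode())     # what goes into the opaqueElement
```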
Although we used a web service writing through to a file adapter, we could equally well have used the socket adapter to send the data to a TCP endpoint. Similarly, the source of the data could be anything. The same principle can be applied to decode binary data: just reverse the steps and use the Java method parseBase64Binary instead of printBase64Binary.
Pause Before Exiting a Console Application
A console application should always be designed to do whatever it is supposed to do and then exit without asking the user to press any key. The reason for this is that console applications are often used within scripts to automate several tasks. Obviously, you don't want one console application launched by the script to wait until the user presses any key because that would be against the whole purpose of writing that script in the first place. However, sometimes it might be useful to pause the console application right before exiting. This article will explain a way to add this support to your application elegantly without breaking support for scripting.
The easiest way but not recommended way to add a pause to your console application would be something as follows:
int main()
{
    // <Your application code goes here>

    // Wait until the user presses any key.
    system("pause.exe");
    return 0;
}
What's wrong with the implementation above?
- It will pause execution every time, right before exiting the console application; therefore, it cannot be used inside an automation script.
- pause.exe is Windows specific; the above code will only run on Windows platforms.
- A last but important reason is that you are trusting pause.exe to do exactly what you think it should do. However, some virus or prankster could have replaced pause.exe with a program that wipes out your entire hard drive.
To solve the first problem, support for a command line parameter will be added. The console application will pause only at the end when that parameter is given to it. To solve the second problem, you will implement a pause function yourself. It will run on all platforms that have a C++ compiler; this automatically solves the third problem above as well.
So, all this is implemented in the following example:
#include <conio.h>
#include <cstdlib>   // atexit
#include <cstring>   // strcmp
#include <iostream>
using namespace std;

static void pause()
{
    cout << "Press any key to continue..." << endl;
    // On Windows, use _getch() starting with VC++ 2005.
    // On *nix, you probably can use getch().
    _getch();
}

int main(int argc, char* argv[])
{
    // Loop over all supplied parameters and see if /P or /p was specified.
    // Start looping at 1 because argv[0] is the executable name itself.
    for (int i = 1; i < argc; ++i)
    {
        if (!strcmp(argv[i], "/p") || !strcmp(argv[i], "/P"))
        {
            // By using atexit, the pause() function will be called
            // automatically when the program exits.
            atexit(pause);
            break;
        }
    }

    // <Your application code goes here>
    // This example will just print something to the console
    cout << "This is the output of your application." << endl;
    return 0;
}
By using atexit(pause), there is no need to call the pause function manually right before the return statement of the main function. It will be called automatically. Also, note that the atexit call will only be done when the /p or /P command line parameter is given to the console application. For example, suppose the name of the executable is ConsoleWithPause; running it without /p or /P will result in:
c:\> ConsoleWithPause
This is the output of your application.
c:\>
As you can see, it is not waiting for a key press. Running it with /p or /P will result in:
c:\> ConsoleWithPause /p
This is the output of your application.
Press any key to continue...
c:\>
This console application can be used by scripts when you omit the /p argument. When running outside scripts, you can force it to wait before exiting with /p or /P.
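The pattern itself (scan the arguments, then register the pause with an exit hook so it runs on any return path) is not tied to C++ or Windows. Here is a rough Python analog; the names are my own, not from the article:

```python
import atexit

def pause():
    input("Press Enter to continue...")

def wants_pause(argv):
    # Mirror the C++ loop: skip argv[0], look for /p or /P
    return any(arg in ("/p", "/P") for arg in argv[1:])

def main(argv):
    if wants_pause(argv):
        atexit.register(pause)   # fires automatically at interpreter exit
    print("This is the output of your application.")
    return 0
```

Run without /p it exits immediately and stays script-friendly; with /p the registered hook pauses on the way out.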
The rest of the article is Windows specific.
When you have a console application on Windows and you launch it by double-clicking its icon in Windows Explorer, a new console will be created but will disappear at the end of the execution and you probably won't be able to read anything from the console. However, when you run your console application by typing the command in an existing console, your application will write all its output to that console and, when finished, you will be returned to the console prompt without the console being closed. There is a trick that can be used on Windows to automatically detect whether a new console window was opened or if your application was started from inside an existing console. The trick is to find out the cursor position in your console at the very beginning of your program. If the cursor position is (0,0), it is highly likely that a new console window was spawned. The following code shows how this can be accomplished:
#include <conio.h>
#include <cstdlib>   // atexit
#include <iostream>
#include <windows.h>
using namespace std;

static void pause()
{
    cout << "Press any key to continue..." << endl;
    // On Windows, use _getch() starting with VC++ 2005.
    // On *nix, you probably can use getch().
    _getch();
}

int main()
{
    CONSOLE_SCREEN_BUFFER_INFO csbi = {0};
    HANDLE hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    if (GetConsoleScreenBufferInfo(hStdOutput, &csbi))
    {
        // Check whether the cursor position is (0,0)
        if (csbi.dwCursorPosition.X == 0 && csbi.dwCursorPosition.Y == 0)
        {
            // By using atexit, the pause() function will be called
            // automatically when the program exits.
            atexit(pause);
        }
    }

    // <Your application code goes here>
    // This example will just print something to the console
    cout << "This is the output of your application." << endl;
    return 0;
}
Note that in some rare cases, this will not work correctly. For example, when you start the ConsoleWithPause application as follows:
c:\> cls & ConsoleWithPause
This command will first clear the console and then launch your application. In this case, the cursor will be at (0,0) and your application will think that a new console was spawned and will wait for a key press before exiting.
Finally a good approach. Posted by TheCPUWizard on 02/19/2009 07:20am
It is refreshing to see a solid approach to this issue. Far too many people take the "hard coded" approach, which renders their programs totally unusable in many environments.
Feedsearch Crawler
Feedsearch Crawler is a Python library for searching websites for RSS, Atom, and JSON feeds.
It is a continuation of my work on Feedsearch, which is itself a continuation of the work done by Dan Foreman-Mackey on Feedfinder2, which in turn is based on feedfinder - originally written by Mark Pilgrim and subsequently maintained by Aaron Swartz until his untimely death.
Feedsearch Crawler differs with all of the above in that it is now built as an asynchronous Web crawler for Python 3.7 and above, using asyncio and aiohttp, to allow much more rapid scanning of possible feed urls.
An implementation using this library to provide a public Feed Search API is available at
Installation
The library is available on PyPI:
pip install feedsearch-crawler
The library requires Python 3.7+.
Usage
Feedsearch Crawler is called with the single function search:
>>> from feedsearch_crawler import search
>>> feeds = search('xkcd.com')
>>> feeds
[FeedInfo(''), FeedInfo('')]
>>> feeds[0].url
URL('')
>>> str(feeds[0].url)
''
>>> feeds[0].serialize()
{'url': '', 'title': 'xkcd.com', 'version': 'rss20', 'score': 24, 'hubs': [], 'description': 'xkcd.com: A webcomic of romance and math humor.', 'is_push': False, 'self_url': '', 'favicon': '', 'content_type': 'text/xml; charset=UTF-8', 'bozo': 0, 'site_url': '', 'site_name': 'xkcd: Chernobyl', 'favicon_data_uri': '', 'content_length': 2847}
If you are already running in an asyncio event loop, then you can import and await search_async instead. The search function is only a wrapper that runs search_async in a new asyncio event loop.
from feedsearch_crawler import search_async

feeds = await search_async('xkcd.com')
A search will always return a list of FeedInfo objects, each of which will always have a url property, which is a URL object that can be decoded to a string with str(url).
The returned FeedInfo are sorted by the score value from highest to lowest, with a higher score theoretically indicating a more relevant feed compared to the original URL provided. A FeedInfo can also be serialized to a JSON-compatible dictionary by calling its .serialize() method.
The crawl logs can be accessed with:
import logging

logger = logging.getLogger("feedsearch_crawler")
Feedsearch Crawler also provides a handy function to output the returned feeds as an OPML subscription list, encoded as a UTF-8 bytestring.
from feedsearch_crawler import output_opml

output_opml(feeds).decode()
Search ArgumentsSearch Arguments
search and search_async take the following arguments:
search(
    url: str,
    try_urls: Union[List[str], bool] = False,
    concurrency: int = 10,
    total_timeout: Union[float, aiohttp.ClientTimeout] = 10,
    request_timeout: Union[float, aiohttp.ClientTimeout] = 3,
    user_agent: str = "Feedsearch Bot",
    max_content_length: int = 1024 * 1024 * 10,
    max_depth: int = 10,
    headers: dict = {"X-Custom-Header": "Custom Header"},
    favicon_data_uri: bool = True,
    delay: float = 0
)
- url: str: The initial URL at which to search for feeds.
- try_urls: Union[List[str], bool]: (default False): An optional list of URL paths to query for feeds. Takes the origin of the url parameter and appends the provided paths. If no list is provided but try_urls is True, then a list of common feed locations will be used.
- concurrency: int: (default 10): An optional argument to specify the maximum number of concurrent HTTP requests.
- total_timeout: float: (default 30.0): An optional argument to specify the time this function may run before timing out.
- request_timeout: float: (default 3.0): An optional argument that controls how long before each individual HTTP request times out.
- user_agent: str: An optional argument to override the default User-Agent header.
- max_content_length: int: (default 10Mb): An optional argument to specify the maximum size in bytes of each HTTP Response.
- max_depth: int: (default 10): An optional argument to limit the maximum depth of requests while following urls.
- headers: dict: An optional dictionary of headers to pass to each HTTP request.
- favicon_data_uri: bool: (default True): Optionally control whether to fetch found favicons and return them as a Data Uri.
- delay: float: (default 0.0): An optional argument to delay each HTTP request by the specified time in seconds. Used in conjunction with the concurrency setting to avoid overloading sites.
FeedInfo ValuesFeedInfo Values
In addition to the url, FeedInfo objects may have the following values:
- bozo: int: Set to 1 when feed data is not well formed or may not be a feed. Defaults 0.
- content_length: int: Current length of the feed in bytes.
- content_type: str: Content-Type value of the returned feed.
- description: str: Feed description.
- favicon: URL: URL of feed or site Favicon.
- favicon_data_uri: str: Data Uri of Favicon.
- hubs: List[str]: List of Websub hubs of feed if available.
- is_push: bool: True if feed contains valid Websub data.
- last_updated: datetime: Date of the latest published entry.
- score: int: Computed relevance of feed url value to provided URL. May be safely ignored.
- self_url: str: rel="self" value returned from feed links. In some cases may be different from feed url.
- site_name: str: Name of feed's website.
- site_url: URL: URL of feed's website.
- title: str: Feed Title.
- url: URL: URL location of feed.
- version: str: Feed version XML values, or JSON feed.
bluMarmalade (Jul 11, 2017 08:27 AM):
A viewmodel can have many properties to be used by the view. Ideally you want to include only those that will be used.
But sometimes you have many forms on one page. Let's say you have one form that sends only two properties, like a date and a string.
But if the receiving method for that form submit checks ModelState.IsValid, you will get false if the viewmodel has another property that carries a Required attribute. That required property is not supposed to be relevant here, because you only want to send the two other properties, which have nothing to do with it.
Is this a bug, and if not, why? What are possible solutions? I still want validation on my properties (the only thing that makes it work so far is removing the Required attribute).
Dmitry Sikorsky (Jul 11, 2017 09:59 AM):
I suggest the following approach.
For example, say you have a page that consists of 3 forms: A, B, and C.
Create a view model for it and name it something like CompositeViewModel (you can choose any name). It will contain 3 properties:
public class CompositeViewModel
{
    public AViewModel A { get; set; }
    public BViewModel B { get; set; }
    public CViewModel C { get; set; }
}
Each property will be defined as a separate view model class. Then in your controller you will have 3 actions, and each action will accept one view model:
public IActionResult A(AViewModel vm) { }
public IActionResult B(BViewModel vm) { }
public IActionResult C(CViewModel vm) { }
In this case you won't have any problems with validation and your code will remain clean and clear.
React component with a single onInternalNav callback for handling clicks on <a> elements local to the domain.
Using this means you don't need anything special to link within your app: you just render this component near the root of your app, and use a plain ol' <a href='/some-path'> within your other components.
The component will listen to clicks and determine whether the link clicked is internal to the app or not, using the excellent local-links module by @LukeKarrys.
The onInternalNav callback will be called with the pathname as the argument (including the beginning slash, i.e. /some-path) whenever a user navigates by clicking a link within the app, as long as...
- the link does not have target='_blank'
If all these conditions are met, then the default is prevented and the callback is called instead.
note: It still works for cases where the event.target wasn't the <a> itself, but something wrapped inside an <a>.
This makes it easy to use any JS router you want in a React app, or even better...
I'm of the opinion that URLs shouldn't really be special. They're just another piece of application state. Combining this component with a simple url setter:
function pushState(url) {
  // important to check to make sure we're
  // not already on it
  if (url !== window.location.pathname) {
    window.history.pushState({}, '', url)
  }
}
And something that sets the state on popstate events:
window.addEventListener('popstate', () => {
  // whatever you use to fire actions
  redux.dispatch(UrlActions.setUrl(window.location.pathname))
})

// running it once on load
redux.dispatch(UrlActions.setUrl(window.location.pathname))
Now your main render function can branch and render different things based on that url state, just like it would for any other state changes.
With very little code you get basic routing functionality without installing a big fancy router and dealing with the larger JS bundle that comes with it.
If the browser is old and doesn't support
pushState (though most browsers do these days) the app still works, the URL just won't update.
Also, by making the URL "just another piece of state", it still plays very nicely with doing Universal (a.k.a. Isomorphic) rendering.
React Router is very nice and lots of people absolutely love it. I'm just of the (probably unpopular) opinion that it's too big (in terms of file size) and tries to do a bit too much. Especially for small, simple, apps that don't have very many routes.
This component only defines two props:
- onInternalNav: the function that gets called with the internal pathname
- tagName: you can use this to change the type of HTML tag used. It defaults to 'div'.
Other props are just passed through so you can still set other things, like
className or whatnot and they'll be applied as well.
npm install react-internal-nav
var InternalNav = require('react-internal-nav');

var SomeComponent = React.createClass({
  onInternalNav (pathname) {
    // dispatch your URL change action or whatnot
  },
  render () {
    return (
      <InternalNav onInternalNav={this.onInternalNav}>
        <a href='/something'>i'm local</a>
      </InternalNav>
    )
  }
})
If you like this follow @HenrikJoreteg on twitter.
MIT
DFS Namespaces and DFS Replication Overview
Published: May 9, 2012
Updated: November 13, 2013
Applies To: Windows Server 2012, Windows Server 2012 R2…
- What's New in DFS Replication in Windows Server 2012 R2
- DFS Management (online Help)
- DFS Step-by-Step Guide for Windows Server 2008
- DFS Replication: Frequently Asked Questions
- DFS Namespaces: Frequently Asked Questions
DFS Namespaces and DFS Replication are role services in the File and Storage Services role.
- DFS Namespaces Enables you to group, into one or more logically structured namespaces, file shares that are located on different servers and in multiple sites.
- DFS Replication Enables you to efficiently replicate folders (including those referred to by a DFS namespace path) across multiple servers and sites. DFS Replication uses a compression algorithm known as remote differential compression (RDC). RDC detects changes to the data in a file, and it enables DFS Replication to replicate only the changed file blocks instead of the entire file.
For information about what's new, see What's New in DFS Replication in Windows Server 2012 R2; for Windows Server 2012, see What's New in DFS Namespaces and DFS Replication in Windows Server 2012.
Before you can deploy DFS Replication, you must configure your servers as follows:
- Update the Active Directory Domain Services (AD DS) schema to include Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, or Windows Server 2003 R2 schema additions. (If you install a domain controller running Windows Server 2012 the schema is automatically updated.) You cannot use read-only replicated folders with the Windows Server 2003 R2 or older schema additions.
- Ensure that all servers in a replication group are located in the same forest. You cannot enable replication across servers in different forests.
- Install DFS Replication on all servers that will act as members of a replication group.
- Contact your antivirus software vendor to check that your antivirus software is compatible with DFS Replication.
- Locate any folders that you want to replicate on volumes formatted with the NTFS file system. DFS Replication does not support the Resilient File System (ReFS) or the FAT file system. DFS Replication also does not support replicating content stored on Cluster Shared Volumes.
DFS Namespaces and DFS Replication are a part of the File and Storage Services role. The management tools for DFS (DFS Management, the DFS Namespaces module for Windows PowerShell, and command-line tools) are installed separately as part of the Remote Server Administration Tools.
To install the role services and the DFS Management Tools, use one of the following methods.
Open Server Manager, click Manage, and then click Add Roles and Features. The Add Roles and Features Wizard appears.
On the Server Selection page, select the server or virtual hard disk (VHD) of an offline virtual machine on which you want to install DFS.
Select the role services and features that you want to install.
- To install the DFS Namespaces and DFS Replication services, on the Server Roles page, select DFS Namespaces and DFS Replication.
- To install only the DFS Management Tools, on the Features page, expand Remote Server Administration Tools, expand Role Administration Tools, expand File Services Tools, and then select DFS Management Tools.
DFS Management Tools installs the DFS Management snap-in, the DFS Namespaces module for Windows PowerShell, and command-line tools, but it does not install any DFS services on the server.
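The same role services and tools can also be installed from Windows PowerShell instead of the Add Roles and Features Wizard. A sketch, to be run on the target server — the feature names below are the ones Windows Server 2012 exposes; confirm them on your build with Get-WindowsFeature before running:

```powershell
# Install the DFS Namespaces and DFS Replication role services,
# including the DFS Management Tools.
Install-WindowsFeature FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools

# Or install only the DFS Management Tools (no DFS services on the server).
Install-WindowsFeature RSAT-DFS-Mgmt-Con
```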
Suppose we are provided with a matrix of size n*n, containing integer numbers. We have to find out if all the rows of that matrix are circular rotations of the previous row. In the case of the first row, it should be a circular rotation of the n-th row.
So, if the input is like

B A D C
C B A D
D C B A
A D C B
then the output will be True.
To solve this, we will follow these steps −

- concat := blank string
- join the items of the first row of the matrix into concat, putting a delimiter between the items, then do concat := concat + concat
- for each remaining row of the matrix, do
  - join the items of that row into curr_row using the same delimiter
  - if curr_row is not a substring of concat, then return False
- at the end, return True (every row is then a rotation of the first row, and rotations of a common row are rotations of each other)
Let us see the following implementation to get better understanding −
def solve(matrix):
   concat = ""
   for i in range(len(matrix)):
      concat = concat + "-" + str(matrix[0][i])
   concat = concat + concat

   for i in range(1, len(matrix)):
      curr_row = ""
      for j in range(len(matrix[0])):
         curr_row = curr_row + "-" + str(matrix[i][j])
      # str.find() returns -1 when curr_row is not a substring of concat,
      # so compare against -1 explicitly instead of relying on truthiness
      if concat.find(curr_row) == -1:
         return False
   return True

matrix = [['B', 'A', 'D', 'C'], ['C', 'B', 'A', 'D'], ['D', 'C', 'B', 'A'], ['A', 'D', 'C', 'B']]
print(solve(matrix))
Input

[['B', 'A', 'D', 'C'], ['C', 'B', 'A', 'D'], ['D', 'C', 'B', 'A'], ['A', 'D', 'C', 'B']]

Output

True
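The same substring idea can be packaged more compactly with str.join — the helper name below is ours, not from the write-up above. A second matrix, whose last row is shuffled rather than rotated, exercises the False path:

```python
def is_rotation_table(matrix):
    # Double the delimited first row; every other row must then appear
    # in it as a contiguous substring (rotations of a common row are
    # exactly the rotations of each other).
    base = "-" + "-".join(map(str, matrix[0]))
    doubled = base + base
    for row in matrix[1:]:
        if ("-" + "-".join(map(str, row))) not in doubled:
            return False
    return True

rotated = [['B', 'A', 'D', 'C'], ['C', 'B', 'A', 'D'],
           ['D', 'C', 'B', 'A'], ['A', 'D', 'C', 'B']]
scrambled = [['B', 'A', 'D', 'C'], ['C', 'B', 'A', 'D'],
             ['D', 'C', 'B', 'A'], ['A', 'C', 'D', 'B']]

print(is_rotation_table(rotated))    # True
print(is_rotation_table(scrambled))  # False
```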
Rich media extensibility for Windows Phone 8
Applies to: Windows Phone 8 and Windows Phone Silverlight 8.1 only
Rich media apps incorporate data from the local folder or the web to provide a unique experience for viewing or editing the images they have captured. This topic describes how your app can incorporate rich media extensibility and extend the built-in photo viewer experience. Although a lens app can use rich media extensibility, rich media apps are not required to use lens extensibility. For more info about lenses, see Lenses for Windows Phone 8.
To help make sure your app can be certified for the Windows Phone Store, review the rich media app certification requirements. For more info, see Additional requirements for specific app types for Windows Phone.
A rich media app links photos stored in the media library to data stored in the local folder to create a unique in-app experience. For example, a social media app might combine the photo it captured and saved to the media library with data that it retrieves from a web service to create an experience that allows a user to view comments that their friends have posted about the photo. A slideshow app might allow the user to view a slideshow that the current photo is part of.
Although you’re limited to storing only JPG files in the media library, you can store additional information about the photo in your app’s local folder. To link the information, you can identify the photo by its path as returned by the GetPath extension method of the Picture class. To use GetPath, add a using directive for Microsoft.Xna.Framework.Media.PhoneExtensions.
Don’t use the rich media open link for traditional photo editing, which is limited to only bit manipulations on the JPG file that’s in the camera roll. Rich media extensibility is intended for apps that do “rich editing” using additional information that it has about the photo. For example, allowing the user to choose a different photo from a “time shifted” capture, when additional photos are stored in the app’s local folder, would be considered rich editing. Use the open link only when the user can increase their experience of the photo by viewing it inside your app. For traditional photo editing, such as bit manipulations, cropping, brightening, rotating, etc., use the edit or apps link. The edit link is for Windows Phone 8 photo editing apps, and the apps link is for Windows Phone OS 7.1 photo editing apps.
When your rich media app saves a photo to the media library, the operating system “remembers” that your app captured it. That way, when you view the photo with the photo viewer, the photo viewer displays a captured by subtitle with the photo. In the app bar, the photo viewer presents a special open link specifically for launching your app. The following image shows the open link and the captured by caption.
The following steps describe how to integrate your rich media app into the photos experience.
To declare that your app is a rich media app, register for the Photos_Rich_Media_Edit extension. Extensions are specified in the WMAppManifest.xml file. Just after the Tokens element, inside the Extensions element, the rich media extension is specified with an Extension element whose ExtensionName attribute is Photos_Rich_Media_Edit.
The Windows Phone Manifest Designer does not support extensions; use the XML (Text) Editor to edit them. For more info, see How to modify the app manifest file for Windows Phone 8.
When the user taps your app with the open link, a deep link URI is launched to open your app and send a token to the photo that the user was viewing. Your app can detect a rich media launch when the URI contains the strings RichMediaEdit and token. The following example is a launch URI for a rich media app (as seen from a URI mapper when the default navigation page is MainPage.xaml):

/MainPage.xaml?Action=RichMediaEdit&token=%7Bed8b7de8-6cf9-454e-afe4-abb60ef75160%7D
In this example, the token parameter value is the token. That string can be used to retrieve the photo from the media library. This is described later in the topic.
Because the open link targets the default navigation page (in this case, MainPage.xaml), that page will be launched if you don’t implement any URI mapping. If your app’s default navigation page can handle the photo referenced by the token, URI mapping may not be necessary.
However, if you want to launch a different page for a rich media experience, you’ll need to map the URI to that page. The following example shows how this is done with a custom URI mapper class.
using System; using System.Windows.Navigation; namespace CustomMapping { class CustomUriMapper : UriMapperBase { public override Uri MapUri(Uri uri) { string tempUri = uri.ToString(); string mappedUri; // Launch from the rich media Open link. // Incoming URI example: /MainPage.xaml?Action=RichMediaEdit&token=%7Bed8b7de8-6cf9-454e-afe4-abb60ef75160%7D if ((tempUri.Contains("RichMediaEdit")) && (tempUri.Contains("token"))) { // Redirect to RichMediaPage.xaml. mappedUri = tempUri.Replace("MainPage", "RichMediaPage"); return new Uri(mappedUri, UriKind.Relative); } // Otherwise perform normal launch. return uri; } } }
In this example, the URI mapper maps the incoming URI to a page named RichMediaPage.xaml by replacing the page name within the URI. The URI returned by that method is then used by the root frame of the app to launch the first page when the app starts. The root frame uses the custom URI mapper because it is assigned as the app initializes. The following code shows how the URI mapper is assigned in the InitializePhoneApplication method of the App.xaml.cs file.

// Assign the custom URI mapper class to the application frame.
RootFrame.UriMapper = new CustomMapping.CustomUriMapper();

// Ensure we don't initialize again
phoneApplicationInitialized = true;
}
When the page is launched, the page can access all of the parameters in the URI (that launched the page) using the QueryString property of the page’s NavigationContext object. The following example shows how the value of the token parameter is used with the MediaLibrary.GetPictureFromToken(String) method to extract the photo from the media library. This code is from the RichMediaEdit.xaml.cs file of the Photo Extensibility Sample.
protected override void OnNavigatedTo(NavigationEventArgs e) { // Get a dictionary of query string keys and values. IDictionary<string, string> queryStrings = this.NavigationContext.QueryString; // Ensure that there is at least one key in the query string, and check whether the "token" key is present. if (queryStrings.ContainsKey("token")) { // Retrieve the photo from the media library using the token passed to the app. MediaLibrary library = new MediaLibrary(); Picture photoFromLibrary = library.GetPictureFromToken(queryStrings["token"]); // Create a BitmapImage object and set it as the image control source. // To retrieve a full-resolution image, use the GetImage() method instead. BitmapImage bitmapFromPhoto = new BitmapImage(); bitmapFromPhoto.SetSource(photoFromLibrary.GetPreviewImage()); image1.Source = bitmapFromPhoto; } }
This example uses an extension method of the Picture class, GetPreviewImage(). This method returns a large thumbnail in a resolution that has been optimized to fill the screen of the phone (WVGA, WXGA, or 720p). If you want access to the full-resolution image, use the GetImage() method of the Picture class.
This procedure describes how you can launch the Photo Extensibility Sample from the open link. This sample does not show an example of a rich media app, but it does show how to wire-up the extensibility for one.
To test rich media extensibility with the sample
Download the Photo Extensibility Sample and open it in the Windows Phone SDK.
Select Debug from the menu, and select Start Debugging.
On the main page of the app, tap the link: tap to prep for rich media testing.
On the photo save page, tap the capture and save button.
From the built-in photo app, take a photo by tapping on the screen, pressing the hardware shutter button, or if you are using the Windows Phone 8 Emulator, press F7.
Press accept after you capture the photo. When you do this, the built-in app automatically saves a photo to the media library. However, it is only the photo that your app saves to the library that will be shown with the captured by label. In the Photo Extensibility Sample, the photo-saving code is in the file PhotoSave.xaml.cs. The sample app saves a photo to the media library as soon as the built-in app returns.
After you capture the photo, tap Start to navigate to the Start screen.
On the Start screen, tap the Photos app and then select a photo that has been captured by the Photo Extensibility Sample and view it in the photo viewer.
Tap the three dots at the bottom of the page on the app bar. When the app bar expands, tap the open link. This will launch a URI containing a token to the photo you were just viewing. That URI will ultimately launch the RichMediaEdit.xaml page and display the photo you were just looking at. | https://msdn.microsoft.com/en-us/library/jj662942.aspx | CC-MAIN-2015-32 | refinedweb | 1,489 | 55.84 |
Hello folks,
Today, I want to make an "Admin" section for my web site, in order to provide an easy interface for me to manage my database. I already have the web site running as an ASP.NET 4.0 Web Site project. So what I will do is, create a Dynamic Data Web Application and convert the project from Web Application to Web Site so I can add it into the Admin folder on my existing web site.
This post is not going into the details of the differences between Web Site and Web Application, so here is the official reference: Comparing Web Site Projects and Web Application Projects
Microsoft’s walkthrough demonstrates how to convert a web site to a web application, but what I want to do is the other way around. First, let’s create the Dynamic Data project:
- Open Visual Studio 2010 and choose to create a new Dynamic Data Web Application project. Name it "Admin".
- Delete *.designer.cs on every folder from all ascx and aspx pages including the master page.
- For every page, in the first line, replace "CodeBehind" with "CodeFile". You can use the Replace in Files tool for that.
- Close the project.
- Via Windows Explorer, delete:
- The project files (.csproj and csproj.user)
- The config files from the root folder only (web.config, web.release.config and web.debug.config)
- App_data, Bin, Properties and obj folders.
- Select File > Open Web Site and select the web site or create a new one.
- Create a new folder: "Admin". Right-click and select Add Existing Item… Add all files from the original project (excluding global.asax and global.asax.cs).
- Now we need to write code onto Global.asax based on the original Global.asax.cs file:
- First, select the web site root and right-click to add an App_Code ASP.NET folder
- Create a static class into App_Code, name it Global:
using System.Web.DynamicData;

public static class Global
{
    private static MetaModel s_defaultModel = new MetaModel();

    public static MetaModel DefaultModel
    {
        get
        {
            s_defaultModel.DynamicDataFolderVirtualPath = "~/Admin/DynamicData/";
            return s_defaultModel;
        }
    }
}
- As you can see, there’s a slight difference, which is the addition of the Admin folder to the DynamicDataFolderVirtualPath. Do the same when adding the route in the Global.asax:
- Add the respective references to the web site project: System.Web.Routing, System.Web.DynamicData
- Replace MyContext with your own context. To know more about this, read the Walkthrough: Creating a New ASP.NET Dynamic Data Web Site Using Scaffolding
- Pay attention to the absolute paths (~/): change them to ../ or ../../ respectively, including in the Site.master
- Try to compile now; you will see that it won’t work because we need to change the Admin folder’s Default.aspx.cs to point to the DefaultModel on the Global class we just created.
<%@ Page Language="C#" MasterPageFile="~/Site.master" CodeFile="Default.aspx.cs" Inherits="Admin.Default" %>
<%@ Application Language="C#" %>
<script runat="server">
    public static void RegisterRoutes(RouteCollection routes)
    {
        Global.DefaultModel.RegisterContext(typeof(MyContext), new ContextConfiguration() { ScaffoldAllTables = true });

        routes.Add(new DynamicDataRoute("Admin/{table}/{action}.aspx")
        {
            Constraints = new RouteValueDictionary(new { action = "List|Details|Edit|Insert" }),
            Model = Global.DefaultModel
        });
    }

    void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }
</script>
4 thoughts on “Converting ASP.NET Dynamic Data Web Application to Web Site Project”
Great tutorial..
Saved me a lot of time..
Thanks
thanks for this detailed tutorial.
This post couldn't be more correct!!
Thank god some bloggers can still write. Thanks for this blog. | https://fabiocosta.ca/2009/12/16/converting-asp-net-dynamic-data-to-web-site-project/ | CC-MAIN-2018-13 | refinedweb | 564 | 51.95 |
Naming issue with CallList __call__ emulation
The call operation in question is here:
{{{
from dingus import DingusTestCase, Dingus
import nose
import sys


class MockedClass(object):
    pass


class Class(object):

    def __init__(self):
        self.mocked_class = MockedClass(name='foo')


class ClassTest(DingusTestCase(Class)):

    def setup(self):
        super(ClassTest, self).setup()
        self.object = Class()


class WhenInstantiatingClass(ClassTest):

    def should_be_a_class(self):
        assert isinstance(self.object, Class)

    def should_create_mocked_class(self):
        assert MockedClass.calls('()', name='foo').once()


if __name__ == '__main__':
    nose_args = sys.argv + [r'-vsx', r'-m',
                            r'((?:^|[b_.-])(?:[Tt]est|When|should|[Dd]escribe))']
    nose.runmodule(argv=nose_args)
}}}
As the test illustrates, the way __call__ is defined, the name arg will conflict with any keyword args that may be spelled 'name'. A simple solution for my use was to use a double underscore in front of the name in the definition.
{{{
def __call__(self, __name=NoArgument, *args, **kwargs):
    return CallList([call for call in self
                     if (__name is NoArgument or __name == call.name)
                     and self._match_args(call, args)
                     and self._match_kwargs(call, kwargs)])
}}}
I've fixed this in 28db58980063. I didn't bother adding a test to the suite, though, as it's only a naming change. I'd like to remove the name argument altogether, switching the syntax from
to
(This makes the unfortunate name "calls" stick out a bit more, but that's another problem and I don't know what name would be better. I think that the mock.py library uses called_with, which is nice but wordy.)
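For anyone skimming: the collision is easy to reproduce outside dingus. A minimal standalone sketch — plain functions standing in for CallList's method, with NoArgument as a stand-in sentinel rather than dingus's actual object:

```python
NoArgument = object()  # stand-in sentinel, like dingus's NoArgument

def calls_broken(records, name=NoArgument, **kwargs):
    # A positional '()' binds to the *parameter* name, so a genuine
    # name='foo' keyword then collides with it and raises TypeError.
    return [r for r in records
            if (name is NoArgument or r[0] == name) and r[1] == kwargs]

def calls_fixed(records, __name=NoArgument, **kwargs):
    # The double underscore keeps the parameter out of the way of a
    # real name='...' keyword argument, which now lands in kwargs.
    return [r for r in records
            if (__name is NoArgument or r[0] == __name) and r[1] == kwargs]

records = [('()', {'name': 'foo'})]

try:
    calls_broken(records, '()', name='foo')
except TypeError as e:
    print('collision:', e)  # TypeError: multiple values for argument 'name'

print(calls_fixed(records, '()', name='foo'))  # [('()', {'name': 'foo'})]
```

(In Python 3.8+, a positional-only marker — def __call__(self, name=NoArgument, /, *args, **kwargs) — achieves the same thing without the underscore convention.)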
(That was me up there!) | https://bitbucket.org/garybernhardt/dingus/issue/13/naming-issue-with-calllist-__call__ | CC-MAIN-2015-22 | refinedweb | 249 | 59.5 |