text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
A commenter asked, "As an application programmer, can I really ignore DDE if I need to interact with explorer/shell?"
The answer is, "Yes, please!"
While it was a reasonable solution back in the cooperatively-multitasked world of 16-bit Windows where it was invented, the transition to 32-bit Windows was not a nice one for DDE. Specifically, the reliance on broadcasts to establish the initial DDE conversation means that unresponsive programs can jam up the entire DDE initiation process. The last shell interface to employ DDE was the communication with Program Manager to create program groups and items inside those groups. This was replaced with Explorer and the Start menu back in Windows 95. DDE has been dead as a shell interface for over ten years.
Of course, for backwards compatibility, the shell still supports DDE for older programs that choose to use it. You can still create icons on the Start menu via DDE and you can still register your documents to launch via DDE if you really want to, but if you take a pass on DDE you won't be missing anything.
On the other hand, even though there is no technological reason for you to use DDE, you still have to be mindful of whether your actions will interfere with other people who choose to: If you stop processing messages, you will clog up DDE initiation, among other things. It's like driving an automatic transmission instead of a manual transmission. There is no requirement (in the United States, at least) that you own a manual transmission or even know how to operate one. But you still have to know to ensure that your actions do not interfere with people who do have manual transmissions, such as watching out for cars waiting for the traffic light to change while pointed uphill.
Does DDE have legitimate uses that rely on the implementation actually being DDE? If not then can’t you just hijack the API calls to do whatever it is supposed to do in a more sensible way? (Caveat: I don’t recall ever using DDE so I could by posting out of my rear end due to not knowing DDE and its uses intimately.)
Ah, let’s hope someone from the Acrobat team reads"?
Not so much "as the driver of an automatic car," but "as a driver." Americans, at least, tend to roll backwards about 5-10 feet as they switch feet from brake to clutch to gas.
If the manual-transmission vehicle in question is an SUV or minivan, watch."
Seriously? Is that how you think you drive a manual? You release the clutch until it is at biting point, at which point you move your right foot from brake to accelerator. Or if the hill is very steep, you use the accelerator to build up revs, again at biting point on the clutch and then release handbrake. You should never roll back at all and you don’t need split second timing – you just need a basic understanding of how a clutch works and a feel for the car.
Explorer still uses DDE all over the place. Most times, when you ShellExecute or double-click something that doesn’t end in “.exe”, it initiates a DDE broadcast looking for an existing server to handle the request.
5-10 FEET!?! Are you kidding?
Yeah, a lot of drivers in England will slip 6" – 1′ on a steep hill, but you’re never that close anyway. But 5-10 FEET?!? You can’t be serious.
(And it doesn’t take split-second timing. You can hold the car on the biting point for a few seconds while slowly lowering the handbrake/increasing the gas.)
"There is no requirement (in the United States, at least) that you own a manual transmission…"
I think it’s a requirement in Italy.
-Wang-Lo.
All right, DDE is bad
But then… what do you suggest as a replacement ??
I don’t see any that fits my needs :
quit the current instance of an application
have the existing instance open a document
These are useful requests during installation ( see )
Can you post a review of the alternative methods that could be used ?
(Sockets communications seems wrong to me for such needs, plus reserving a specific port for each apps seems also wrong)
OLE? FindWindowEx() plus SendMessage()?
When a debugger (e.g., Visual Studio) is stopped sitting at a breakpoint, I assume this also stops DDE because of the mechanism you describe. Specifically, the application being debugged is obviously not responsive, and so this hoses up DDE for the entire machine?
Also, in Vista, wouldn’t this be considered a security problem, since a rogue application could easily stop all DDE and effectively execute a DOS attack against other apps that rely on DDE?
Finally, I will state my experience, which is that Outlook is the program that I notice as the biggest offender when it comes to stopping DDE. This was especially true in Outlook 2003, but I also see it sometimes in Outlook 2007. Typically if I have an app hosed due to DDE, I’ll first check my debuggers, and then close Outlook.
Adam –
Yeah, it’s kind of weird. In the US, people don’t use their handbrakes on hills. I’ve gotten weird looks in the past because (being British), I do :) But hey, the hills are really steep in Seattle. Rather that than roll back into someone, and it’s better than burning the clutch.
Using your handbrake is not only not done in the U.S., it’s actively discouraged. I’ve heard {anecdotally} of people failing their U.S. driving test for using their handbrake at stop lights etc. Besides, many vehicles don’t even have a handbrake, only a foot-operated parking brake. Completely bizarre in my opinion, but there you go …
@Dave Wood – Don’t all U.S. cars that have manual transmissions also have a handbrake, for the reason that you mentioned? I thought the other forms of parking brakes were only on cars with automatic transmissions.
Right up until Vista Explorer still used DDE to launch folders (ie, even in XP the default action for Folders was specified via DDE). It’s only now that Vista has added the super secret undocumented DelegateExecute stuff that DDE seems to be no longer used for this.
Does anyone out there have a feel for when the Vista API documentation will be published?
"Don’t all U.S. cars that have manual transmissions also have a handbrake, for the reason that you mentioned? I thought the other forms of parking brakes were only on cars with automatic transmissions."
No, they don’t. I used to drive a 1995 GMC Sierra pickup truck with manual transmission. The break was to the left of the clutch and had to be operated with the foot. Seeing that I only have two of those, you couldn’t use that brake to start from a steep hill.
And that should have been brake, not break…
I &^&%^%@ hate the Windows “We send you everything and you’re required to listen and acknowledge them all whether you were interested or not” event model. If I don’t want WM_TIMER, why should I be force-fed it a dozen times a second and be forced to repeat “no thank you, no thank you, no thank you”. It makes debugging window events a real pain, since if you breakpoint something upstream of the WM_message switch, you immediately start hitting it.
I know there are several ex-Amiga programmers who read this — I much preferred the IDCMP event model it and probably countless other windowing systems) used, where one has to specifically request event types before they are sent. It also means the apps spend a LOT less time wasting CPU cycles handling messages they didn’t care about anyway.
Perhaps you could write an article about the motivations behind THAT decision. I know it doesn’t fit the “subclass” design paradigm as well, but it sure seems like it would have worked better.
David Hefferman: it’s been available since Vista went RTM in November:
I’ve never seen a US’ian fall backwards 5-10 feet while starting off in a manual.
Unfortunately, US cars have occasionally liked having pedals for parking brakes, and therefore have not had a handbrake. I believe that all currently built manuals have handbrakes, but that could also be mistaken.
US teaching on driving a manual is often designed to instill the idea that using the handbrake is a bad thing. Yes, US teaching it this way is stupid. Oh well.
On my home machine, Windows Explorer seems to lock up semi-regularly for some number of seconds, and I’m guessing that the problem is that one of the processes on my machine is not responding to the DDE broadcast. Is there any tool or solution for either eliminating this problem, or else for locating the offending program or service?
“[Dude, message-passing is the wave of the future! -Raymond]”
Some masking would be nice, though; RISC OS had that many years ago, with a lot more control than the min/max values you can pass to GetMessage. (You’d pass a bitmask identifying the message types you’re interested in, then you could pass a list of the user message numbers you’ll understand.) Message filters might be able to achieve a similar effect, but probably without the efficiency/performance gains…
"Yeah, a lot of drivers in England will slip 6" – 1′ on a steep hill, but you’re never that close anyway."
Ah, spoken like someone who doesn’t drive in the US. I regularly see traffic at stoplights separated by 1′ or less. That’s why there’s so many crunched rear bumpers here – every fender bender turns into a chain reaction.
Right, and to say that I was going to implement some DDE code tomorrow. I don’t really expect you to know this of hand, but if you could point me in the right direction, places to search etc, I might be saved ;-)
My reason for doing this is with IE7: I’d like to start a browser with a file url with params (…htm?a=b&c=d). IE6 and firefox: no problem just Createprocess with "app.exe…htm?a=b&c=d". But IE7 cuts of the params. Not sure why (some form of attack protection/mitigation?).
So looking at the registry, it also has DDE keys so this was where I was going to go next. Any better ideas/avenues I might explore? Knowing nothing about this (I just want to show my apps help), I never thought DDE wasn’t recommended anymore. What then, OLE Automation? Not sure how that works though.
jeff –
I’ve found this to be tremendously helpful. It helped me find out that Adobe’s PDF IFilter was causing the Indexing Service to do this to me.
"If I don’t want WM_TIMER, why should I be force-fed it a dozen times a second and be forced to repeat "no thank you, no thank you, no thank you"."
So, like Windows does it then? You won’t get WM_TIMER messages unless you ask for them, will you? I mean, if you haven’t asked for them, how often would they arrive?! How does Windows know when to send them? Does it use Raymond’s psychic debugging techniques?
WM_TIMER is a virtual message anyway, iirc – it never goes in the queue, Windows just injects it at the appropriate point as required, doesn’t it? (Or that’s how it was explained to me by a Windows dev at an MS DirectX conference once.)
<sarcasm>
"To demonstrate our superior intellect, we will now regale you with off-topic untruths you have no way of disputing."
In Germany at least, cars with manual-transmission tend to roll backwards a bit when starting up-hill. The matchbox test is a myth. I know this because I worked hard to afford the thousands of Marks for my pink "rag," as a driver’s license is collquially known. Oh, and it’s pink. Just like my greencard. Go figure.
</sarcasm>
Raymond, I am so sorry. The temptation was too great. But, while I’m here, can you sign my copy of your book?
@Legolas
The Win32 function ShellExecute() may be your friend. Call it with the ‘open’ verb and pass it your properly formed URI (with params) as the lpFile parameter. I do this in one of my own programs to let the user check a page on my website to see if there is an upgrade available – I pass the current version as a CGI var, so I guess it should work with file:// URIs too (although I haven’t actually tried it).
Caveat: This will of course open the default web browser, which may or may not be IE – I can’t tell from your question if you’re specifically calling particular browsers, or just want to open the user’s browser of choice.
After reading the blog posting here, and also
I was wondering how it’s possible to use ShellExecuteEx and stop using DDE at the same time. But then:
> Explorer uses DDE when the registry says to
> use DDE.
What are the other methods?
I’ve just taken a glance at the documentation of ShellExecuteEx and a few neighbouring sections of the helpfile that comes with the Vista SDK. I’m not sure yet what else to try searching for.
Off-topic: In the table of contents in the helpfile that comes with the Vista SDK, at first it looks like ShellExecuteEx isn’t documented. The table of contents jumps from SHDoDragDrop to Shell_GetCachedsomething, bypassing ShellE-anything. I thought I read somewhere that Microsoft had developed some corporate standards on sorting?
On the topic of manual transmission – I too owned a manual transmission vehicle with a foot-controlled parking brake (actually, 2, only one of which was a US brand, and that was a rebadged Mazda). I learnt to drive partially on a vehicle with a handbrake, and very quickly realized that if you are on a hill, about to make a turn, and the light goes from red to green, you have to co-ordinate all 4 limbs, and your arms are doing 2 entirely disconnected things (the legs are balancing the clutch and gas, at least). My answer to that was to always own a vehicle with a certain amount of torque. I’ve never actually used a parking brake in traffic in any vehicle I’ve owned, at least as a general rule. (Come to think, my current automatic-transmission vehicle doesn’t have a handbrake).
"DDE is one of those bits of Windows has that warning feeling of "You’re returning to 16-bit land!", and that’s enough to make me avoid it."
Except that the demise of DDE only came after Windows NT 3.1, the first implementation of Win32, was released.
Folks, when you are talking about the history of Windows, remember that Windows NT 3.1 ported a lot of the user interface, API, and applets from Windows 3.1, complete with message broadcasts, DDE, OLE 1.0 (32-bit OLE 2.0 did not come until the Windows NT 3.5 timeframe, I think), IsBadReadPtr and IsBadWritePtr, and a lot of other technologies and APIs that are not optimal for a prementive multitasking environment. Susquent NT 3.x releases use less and less of Windows 3.1, esp. Windows NT 3.51, but the transition to the Windows 95 UI was not complete until NT 4.0.
Ah, Win16 + DDE. Fond memories. Back when a UAE was a real UAE, software came on floppies, and Microsoft C++ For Windows was hopelessly outclassed by Borland C++.
And floppies. The joy of re-installing a corrupt Windows 3.1x installation several times a week.
Has anybody tried Windows 3.1x on a modern machine? It must boot in about a nanosecond.
Raymond is OLE1.0 next?
I never really thought about the foot-handbrake thing in American cars (I guess because I had an auto when I was there – I still thought it was a bit weird, though). But now that it is pointed out, I can’t believe how you could sell a manual without a HAND-brake!
The thing is, once you know how to use the hand brake correctly, there’s really no reason why you should roll back at ALL, even on a really steep hill – it just becomes second nature (like all aspects of driving, really)
I thought Raymond’s initial point about taking manual drivers into account was simply that they might take a bit longer to get started (especially if they’ve taken it out of gear while waiting). I never even thought rolling back would be an issue.
What were you expecting? Everywhere I’ve seen, "_" sorts before letters…
The topic of this post made me smile.
As for broadcasts, I’d always wondered why my only experience of DDE (the dreaded ‘Program groups will now be created’ message displayed by installers that *really* should have been updated by now) is so damn slow, so it’s nice to have some clue.
I only tried to use DDE once, and that was a year or so ago, to see if it was the easiest method of communicating between processes. It wasn’t :-).
DDE is one of those bits of Windows has that warning feeling of "You’re returning to 16-bit land!", and that’s enough to make me avoid it.
I tend to mark .pdf files as "download only" so that the plugin doesn’t get a chance to run and annoy me. Especially handy when I didn’t realise that I was clicking on a .pdf in the first place.
Mir.
Jeremy –
Agreed! Adobe Reader is often a colossal pain. And I eagerly upgraded to 8 hoping it would be fixed, but never got to checking because I reverted immediately when I saw they crippled the UI. I’ve heard good things about Foxit reader or something.
"[The Vista documentation] has been available since Vista went RTM in November."
The documentation seems to be lacking at the moment. For example, the documentation for GetTokenInformation is missing information about TokenIntegrityLevel, TOKEN_MANDATORY_LABEL and the rest of the Vista token stuff, the SystemParametersInfo documentation has several items marked as "TBD" and that documentation you mentioned promises that LoadIconWithScaleDown will become available when Vista RC1 is released.
XP actually seems to *want* to make you use DDE — if you go into the File Types->Advanced section and look at almost any verb, you’ll see the Use DDE checkbox is ticked. And if you uncheck it, click OK, and go back in, it’ll be checked again.
Other than that, I agree with Wizou. Ok, DDE may be “bad”, but there really isn’t a shell-supported replacement technology to avoid starting up multiple instances of a program. So it’s up to the application developer to roll their own means of finding an existing instance of their application and passing command line arguments over to it, which in turn means that some people have just given up (leading to bad UI experiences) and others have implemented all sorts of weird designs, some of which may be buggy or have unintended consequences.
It’s even worse over in the .NET world, since you can’t get at DDE, nor can you use the most common trick (FindWindowByClassName) since you have no idea what your window class name is. Finally in .NET 2.0 something was added in the VB namespace that will handle this sort of thing fairly automatically (which can be used from other languages, albeit somewhat clumsily), but some people have reported that even this doesn’t scale well to opening lots of files simultaneously from the shell.
So, where’s the replacement for DDE?
+1 for DDE :-)
Despite of its drawbacks DDE remains for me, by far the easiest way of implementing most of the time trouble free, inter-process communication and I am really glad that Windows still provides support for."
That won’t work if your application is a client app required to be runnable on a Terminal Server. In this case you can’t use Remoting, at least not TCP/IP.
+10 for DDE and most people would be surprised what it can do, another one in a grave of .NET, Vista, no GDI acceleration and other pretty ‘oh I’m modern all the way to making my machine and x-RAM cry’.
Beside that Toolbar right/hold click + left click bug, do you guys mind fixing bugs or filtering (when appropriate) some messages sent to the explorer.exe as it does like to eat a hell of a lot messages as well as page faults.
M _
@Tim, that was exactly what I was doing, and that is what is not working anymore for IE7, it cute of the params… On to the workarounds!
Many times a day, I kill AcroRd32.exe because it blocks DDE messages and *doesn’t quit when you stop looking at a PDF in IE, nor even when you close IE*. This has been a problem for at least 3 major versions now. Sorry, just had to vent…
"But now that it is pointed out, I can’t believe how you could sell a manual without a HAND-brake!"
Even if you have a foot operated e-brake, you might be able to heal-and-toe your way to a nice, roll-back-less start.
Personally, when I drove manual tranmission cars, the key to starting on a hill was just to be quick about getting moving. If you’re in first with the clutch disengaged, it doesn’t take that much time to get enough engine speed and clutch engagement to avoid both a stall and a significant roll back. Then again, in Seattle or San Fran, where the hills are steeper than in Austin, it might not have worked so well.
@Yuhong Bao
Er, isn’t that pretty much what I implied?
NT 3.1 did have lots of stuff from Win16 (I know, I was there, too, even did some Win32s work for my sins), but it had to, otherwise a load of apps wouldn’t have worked. And Raymond knows a song about that.
Anyone who thinks that the official Vista documentation actually documents Vista is seriously misguided.
As I commented in an earlier post, for example, the only known documentation of the manifest route to Vista DPI awareness is in Raymond’s book (page 467)!
I asked what Raymond’s source for this nugget was and his response was: “I think somebody mentioned it to me, I forget exactly.” which is clearly nonsense – nobody remembers “<asmv3:application xmlns:asmv3=”urn:schemas-microsoft-com:asm.v3”>
<asmv3:windowsSettings
xmlns=”‘>“>
<dpiAware>true</dpiAware>
</asmv3:windowsSettings>
</asmv3:application>” if it was “mentioned to you”.
Clearly this was cut and paste from internal MS documentation. Anti-trust settlement anyone?
I know Raymond isn’t responsible for this, but this is what happens when a corporation starts blogging. You get feedback from your customers which you can then listen to and improve yourselves.
;-)
P.S. Are you suggesting that I should stop telling stories that involve information not otherwise publically available?]
> The whole point of ShellExecuteEx is to
> execute the document according to its
> registration. If the registration says “use
> DDE” then it will use DDE. If you don’t like
> that, then don’t use ShellExecuteEx.
OK. I don’t know how to interpret what part of the registry to figure out if the registration says “use DDE” or not. I see things that resemble command lines for verbs like “open” or “print”, but I thought they always used DDE. Actually
also gives the impression that DDE is a standard part of ShellExecuteEx’s operation. If I was planning to code a call to ShellExecuteEx, but now want to code a workaround when the filetype’s registration tells ShellExecuteEx to use DDE, what should the code look for in the filetype’s registration?
> I also like how you interpret even the
> simplest behavior change as the sign of some
> sort of vast conspiracy.
I like how you interpret something, that I still can’t see in my comment, as an interpretation of vast conspiracy, which I also don’t see in my comment. Where did this come from?
Monday, February 26, 2007 8:21 PM by steveg
> Has anybody tried Windows 3.1x on a modern
> machine?
Can you get a modern hard disk small enough so that the FORMAT command won’t divide by zero? In year Y2K I found a workaround for a friend’s machine, but have forgotten it by now.
> There is no “ShellExecuteEx without DDE”;
> […] It’s like asking for a CreateWindow that
> doesn’t send a WM_CREATE message.
Then in order to feel free to stop using DDE, we have to feel free to stop using ShellExecuteEx, 100%? Or, umm…
>>> If the registration says “use DDE” then it
>>> will use DDE.
If the registration doesn’t say to use DDE, then will ShellExecuteEx
perform without DDE? And if so, then is there a way for an
intending invoker to inspect the registry and figure out if
ShellExecuteEx will use a safer method?
> Perhaps you were instead merely expressing in
> your own way your pleasure with the collation
> rules?)
I merely expressed approximately two observations, one of which I’m
sure of (the effect of the helpfile’s collation) and one which is a
vague recollection (that Microsoft had set a corporate standard for
collation). I neglected to state that the two seem to be
incompatible with each other.
confused or you are just nitpicking for nitpicking’s sake. By “You are
free to stop using DDE” I mean “You are free to stop writing code that
participates in DDE conversations.” As I note in the final
paragraph, you still have to be aware of others that continue
to use DDE. I don’t see why you find it so surprising that there are
multiple conflicting collation algorithms. Isn’t that the whole point
of Michael Kaplan’s blog? (I mean, look at all those flags to the
CompareString function, and that’s just within one language!) -Raymond]
Regarding DDE in a file’s registration:
On 2K and XP at least, ShellExecuteEx first looks up the file’s extension under HKCR. The default value in that key is the name of another key under HKCR, and that key will have a shell<verb>ddeexec subkey if <verb> on the file should use DDE. This ddeexec key will have a few values and subkeys of its own which affect the DDE conversation, but I’m not quite sure how most of that works.
I do not believe ShellExecuteEx will use DDE if you give it an executable, only a "document". But I’m not sure on that.
And I bet that many of these details are subject to change in the future.
In the financial world DDE is still alive and kicking for IPC. If you need data from Bloomberg, Reuters, Telerate etc. then Excel + DDE is often a winning combination.
Raymond,
Many thanks for taking the time to respond to my posts – this is very much appreciated.
I’m surprised that you are as much in the dark as the rest of us. That you have to reverse engineer the Vista binaries yourself is mildly amusing. I don’t think I would be very happy to release software based on information gathered in this way. I’ll wait until I can read the official manifest documentation before I rely on that.
As for "telling stories that involve information not otherwise publically available" please do continue. It’s not the telling of the stories that bothers me, it’s the fact that the documentation has not been published. I can now see that you share our pain and have to dig it out yourself just like us plebs!!
Cheers, David.
> By “You are free to stop using DDE” I mean
> “You are free to stop writing code that
> participates in DDE conversations.”
I guess you mean that we should avoid writing code that acts like a server side of DDE conversations. OK, I’m not sure if I’ve done that or not (I’m not sure if any Visual Studio wizards that I’ve used might have done that). The word “using” made me think of the client side of DDE conversations, i.e. callers of ShellExecuteEx.
> I don’t see why you find it so surprising
> that there are multiple conflicting collation
> algorithms.
Of course there are, because different customers have different needs, some of which are met by some vendors and some of which have to be developed specially. But I thought I’d read somewhere that Microsoft had set a corporate standard for the order in which its own products would display information.
Wednesday, February 28, 2007 8:21 AM by BryanK
Regarding DDE in a file’s registration:
> that key will have a shell<verb>ddeexec
> subkey if <verb> on the file should use DDE.
Thank you. So if a client should be sure to avoid DDE then the client can look for that key and decide to call CreateProcess instead of ShellExecuteEx. (I wonder if the resulting process will try to detect multiple instances of itself and hang itself, but at least it will only hang itself and not the caller.)
Wednesday, February 28, 2007 2:22 PM by David Heffernan
> It’s not the telling of the stories that
> bothers me, it’s the fact that the
> documentation has not been published.
I second that.
I’m still not sure where you’ve seen ‘_’ sort AFTER a letter, though… Unless you’re thinking of the hyphen ‘-‘ and "word sorting"?
Oh the power! What an amazing feeling, to be able to influence Microsoft!
Now, I just have to find it on MSDN…..
Thanks Raymond, don’t feel too guilty.
:-)
[By “You are free to stop using DDE”]
I just ahd this discussion with someone the other day… Its great to say “don’t use DDE”, but what is the alternative? Even the version of Office on my machine (2003) uses DDE, how else do you get Explerer to open a second Word document without creating a whole new process and loading a second copy of Word into it?
To stay on topic: In South Africa if you get your license on an automatic you can only drive an automatic and not a stick shift.
—
Marc
> By “You are free to stop using DDE” I mean
> “You are free to stop writing code that
> participates in DDE conversations.” This
> covers both server and client side.
Then you really do mean I should stop calling ShellExecuteEx? Um, no, because BryanK posted a way to determine if it might be safe to call ShellExecuteEx?
> I think you once again are willfully ignoring
> my last paragraph.
Well I already understood that I have to process messages in order to avoid clogging up innocent bystanders. So now there’s an additional effect, when I process messages I also avoid clogging up guilty bystanders (users of DDE). So what? Since I already process messages, I’m ignoring your last paragraph?
> There is no requirement (in the United States,
> at least) that you own a manual transmission
> or even know how to operate one.
There used to be a requirement (in the United States, at least part thereof) that you know how to operate a manual transmission even if you never own one. Maybe it’s gone now though.
Wednesday, February 28, 2007 10:17 PM by Dean Harding
> I’m still not sure where you’ve seen ‘_’ sort
> AFTER a letter, though… Unless you’re
> thinking of the hyphen ‘-‘ and “word sorting”?
I did think I’d read that punctuation marks were supposed to be either ignored or very low ranked in their effect on sort orders. I didn’t recall reading that the hyphen would be treated differently from the underscore (and why not the apostrophe?).
Come to think of it, if a corporate policy calls for ignoring punctuation marks correctly, then two strings that differ only in punctuation could be sorted in different orders for no reason at all, and Outlook Express’s sorting isn’t as funny as I thought it was. But the MSDN library’s sorting still looks odd.
Hyphen and apostrophe are the only ones that are affected by word sorting – they’re basically ignored. Other punctuation is handled the same whether you’re doing word sort or string sort (in fact, string sort just makes hyphen and apostrophe work like other punctuation).
The only exception that I know of to this rule is how the period is treated in StrCmpLogicalW in Vista, to make filenames that end in numbers sort more "naturally" (whether you think StrCmpLogicalW is more "natural" or not is a matter of preference)
Hi…
in the file-association-properties:
Example: C:Program FilesTesttest.exe "%1" "%2"
this would only open the first and second selected file…
is it posible to set for "%1" another command, to get all files??
We had an issue recently where DDE broadcasts were being blocked on a system. The customer noticed that | https://blogs.msdn.microsoft.com/oldnewthing/20070226-00/?p=27863 | CC-MAIN-2019-09 | refinedweb | 5,612 | 70.43 |
I know that there are questions similar to this, but a lot of them are very open and don't help me too much...
I need to recursively list all directories and files in C programming. I have looked into FTW but that is not included with the 2 operating systems that I am using (Fedora and Minix). I am starting to get a big headache from all the different things that I have read over the past few hours.
If somebody knows of a code snippet I could look at that would be amazing or if anyone can give me good direction on this I would be very grateful.
Thanks
Here is a recursive version:
#include <unistd.h> #include <sys/types.h> #include <dirent.h> #include <stdio.h> void listdir(const char *name, int level) { DIR *dir; struct dirent *entry; if (!(dir = opendir(name))) return; if (!(entry = readdir(dir))) return; do { if (entry->d_type == DT_DIR) { char path[1024]; int len = snprintf(path, sizeof(path)-1, "%s/%s", name, entry->d_name); path[len] = 0; if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) continue; printf("%*s[%s]\n", level*2, "", entry->d_name); listdir(path, level + 1); } else printf("%*s- %s\n", level*2, "", entry->d_name); } while (entry = readdir(dir)); closedir(dir); } int main(void) { listdir(".", 0); return 0; } | https://codedump.io/share/NF8KqBzW76cn/1/how-to-recursively-list-directories-in-c-on-linux | CC-MAIN-2018-05 | refinedweb | 218 | 66.64 |
#include <fbxblendshapechannel.h>
Class for blend shape channels.
A blend shape channel is a sub-deformer to help blend shape deformer to organize the target shapes. One blend shape deformer can have multiple blend shape channels in parallel, and each of them can control one or multiple target shapes. If there are multiple target shapes connected to one channel, each target shape could have its own full deformation percentage. For example, given a channel that has 3 target shapes, whose full deform percentage are 30, to 80 to 100 separately, then when the percent changes from 0 to 100, the base geometry will deform from the first target shape to the last one. This is called in-between blend shapes or progressive morph. The property DeformPercent of blend shape channel will control the deform level of each target shape or in-between blend shape on it.
Definition at line 37 of file fbxblendshapechannel.h.
Definition at line 39 of file fbxblendshapechannel.h.
Definition at line 39 of file fbxblendshapechannel.h.
Reimplemented from FbxSubDeformer.
Definition at line 39 of file fbxblendshapechannel.h.
Set the blend shape deformer that contains this blend shape channel.
trueon success,
falseotherwise.
Get the blend shape deformer that contains this blend shape channel.
Add a target shape.
trueon success,
falseotherwise.
Remove the given target shape.
NULLif pShape is not owned by this blend shape channel.
Get the number of target shapes.
Get the target shape at given index.
NULLif index is out of range.
Get the target shape at given index.
NULLif index is out of range.
Get the index of the given target shape.
Get the full weight values of target shape.
To access each value iterate in the array up to GetTargetShapeCount().
Set the array size for the fully deform weights.
This functions pre-allocate the array to pCount size.
Get the type of the sub deformer.
Reimplemented from FbxSubDeformer.
Definition at line 117 of file fbxblendshapechannel.h.
Restore the blend shape channel to the initial state.
Calling this function will do the following:
This property stores deform percent of this channel.
The default value of this property is 0.0.
Definition at line 39 of file fbxblendshapechannel.h. | https://help.autodesk.com/cloudhelp/2018/ENU/FBX-Developer-Help/cpp_ref/class_fbx_blend_shape_channel.html | CC-MAIN-2022-21 | refinedweb | 364 | 69.99 |
#include "device_properties.h"
This class keeps track of the device properties of the client, which are for the most part learned from the UserAgent string.
Set device-based properties that are capture in the request headers (eg. the Accept: header).
SupportsWebpInPlace indicates we saw an Accept: image/webp header, and can rewrite the request in place (using Vary: accept in the result headers, etc.).
SupportsWebpRewrittenUrls indicates that the device can handle webp so long as the url changes - either we know this based on user agent, or we got an Accept header. We can't tell a proxy cache to distinguish this case using Vary: accept in the result headers, as we can't guarantee we'll see such a header, ever. So we need to Vary: user-agent or cache-control: private, and thus restrict it to rewritten urls. | http://modpagespeed.com/psol/classnet__instaweb_1_1DeviceProperties.html | CC-MAIN-2017-30 | refinedweb | 140 | 69.21 |
#include <string>
std::string func();
int main() {
auto str = func();
}
In this example, the type of "
str" is deduced based on the type of the expression "
func()" to be
std::string. You might wonder why this language feature is all that interesting, but"if you recall the post about lambdas, it was noted that lambdas have a type but that type is unique and cannot be spelled by the user. For lambdas that are stored as local variables, use of automatic type deduction is obligatory. For instance:
#include <iostream>
int main() {
auto fn = [](const char *msg) { std::cout << msg << std::endl; };
fn("hello");
fn("world");
}
So why is there a second mechanism for deducing types if auto works so splendidly? Because sometimes the type cannot be fathomed through initialization because there's no place for that initialization to take place. For instance, imagine a generic function that adds two values together and returns the result.
template <typename T1, typename T2>
??? add(T1 lhs, T2 rhs) { return lhs + rhs; }
It's impossible to tell for an arbitrary T1 and T2 what the return type should be. If T1 and T2 are both int, then it's pretty easy. But what if T1 is a class type and T2 is an arithmetic type, and T1 overloads operator+()? You can see how this quickly becomes an unsolvable problem without type inference. To solve this problem, starting in C++11 you can use a trailing return type to signify that the type will be automatically deduced, and this trailing return type is exactly where
decltype() shines.
template <typename T1, typename T2>
auto add(T1 lhs, T2 rhs) -> decltype(lhs + rhs) { return lhs + rhs; }
The auto type specifier signals that the return type is automatically deduced, the -> after the parameter list starts the declaration of the actual return type (hence, "trailing return type"), and decltype is used to automatically deduce what the return type should be. Note that the expression argument to decltype() is not evaluated (much like the expression argument to
sizeof() isn't evaluated).
However, automatic type deduction is useful beyond inferring lambda type information and writing really ugly trailing return types. In fact, this language feature has proven so popular that it was extended in C++14 to allow for deducing the return types of functions without a trailing return type so long as all return statements in the function agree on the type of the returned values, which makes our previous example considerably less redundant:
template <typename T1, typename T2>
auto add(T1 lhs, T2 rhs) { return lhs + rhs; }
As you can imagine, such a powerful change to the type system has lead to three different strategies for when and how to use this construct: always use auto, never use auto, use auto "when appropriate". My personal preference for when to use type deduction falls into the "when appropriate" category because understanding C++ often requires knowledge of the types involved, and type deduction can hide type information. For this reason, I only use type deduction when required (lambdas, generic programming) or when it obviates the need to redundantly spell out the type, e.g.,
auto i = static_cast<int>(blah()); However, there are other reasonable uses of type deduction that may not match my personal preferences. As with other style-based guidance, the important thing is to pick a coding guideline and apply it uniformly.
Automatic type inference is a powerful new feature that changes the way
we write modern C++ code. It allows developers to focus less on the spelling of type names and instead focus on expressions and side effects in the code. As with many powerful tools, it's good to be aware it exists and use it when it's the right tool for the job, but it can also be overused to the possible detriment of code readability and maintenance. In the next post I will discuss some smaller yet useful new features of C++.
| https://resources.grammatech.com/blog-3/new-features-of-c-automatic-type-inference | CC-MAIN-2019-09 | refinedweb | 658 | 53.04 |
Microsoft Corporation
July 13, 1998
With Microsoft® Internet Explorer 4.0, Microsoft introduced its implementation of a revolutionary HTML object model that content providers can use to effectively manipulate HTML properties on the fly. Until now, this object model has primarily been accessed using script technology. The com.ms.wfc.html package of the Windows Foundation Classes for Java (WFC) framework now lets you access the power of Dynamic HTML (DHTML) on a Web page directly from a Java class.
The following topics are covered in this section:
To help you get up and running using the com.ms.wfc.html package to implement Java and DHTML, here are the basic steps you can perform to create a simple DHTML project and add your own dynamic behavior to it. While this is by no means the entire story, it sets the stage for the rest of this topic and for the samples. There are five basic steps when using the com.ms.wfc.html package:
This generates a project containing a class called Class1, which extends DhDocument. This class represents the dynamic HTML document. You add initialization code to its initForm method to control the document's contents and behavior.
Your document class will look something like this:
import com.ms.wfc.html.*;
import com.ms.wfc.core.*;
import com.ms.wfc.ui.*;
public class Class1 extends DhDocument
{
public Class1()
{
initForm();
}
// Step 2: create objects to represent a new elements…
DhButton newElem = new DhButton();
// … as well as elements that already exist in the HTML page.
DhText existElem = new DhText();
private void initForm( )
{
// Set properties to existing elements and newly added elements.
newElem.setText("hello world");
existElem.setBackColor(Color.BLUE);
// Step 3: hook up an event handler to your object.
newElem.addOnClick(new EventHandler(this.onClickButton));
// Step 2: create an object to represent an existing element.
existElem = new DhText();
// Step 4: call setNewElements with an array of new elements.
setNewElements(new Component[] { newElem });
// Step 5: call bindNewElements with an array of existing elements.
setBoundElements(new DhElement[]{ existElem.setBindID("Sample") });
}
// Step 6: implement your event handler
private void onClickButton(Object sender, Event e) {
existElem.setText("Hello, world");
}
}
The Java portion of the exercise is complete. The other part is the HTML code. The following example shows a simplified version of the HTML document generated by the Code-behind HTML project template. There are two HTML elements that connect this HTML to the code in your project:
<HTML>
<BODY>
<OBJECT classid="java:com.ms.wfc.html.DhModule"
height=0 width=0 ... VIEWASTEXT>
<PARAM NAME=CABBASE VALUE=MyProject>
<PARAM NAME=CODECLASS VALUE=Class1>
</OBJECT>
<span id=Sample></span>
<!-- Insert your own HTML here -->
</BODY>
</HTML>
Open Internet Explorer 4.0, point it at your HTML file, and you can see your application run.
The initForm method plays a central role in the programming model for all user interface programming in WFC. When using the Visual J++™ Forms Designer for Win32-based applications, initForm is found in the Form-derived class that represents your main form. In the com.ms.wfc.html package, this method is found in your DhDocument-derived class (for example, Class1 in the code-behind HTML template provided by Visual J++) and is called from the constructor of the class.
You should use the initForm method to initialize the Java components that represent the HTML elements you want to access and code to. As with the initForm method in Form-derived classes, there are certain restrictions when calling WFC methods from initForm in DhDocument. As a rule, you should call only methods in initForm that set properties. Moreover, you should bind only to elements on the HTML page using the setBoundElements method.
Specifically, this means that calling any method that resets or removes a property or element is strictly not supported in initForm. This also applies to any methods that attempt to locate elements on the existing HTML page (such as DhDocument.findElement).
The reason for this is that the document on the existing HTML page is not merged with your DhDocument-derived class until the DhDocument.onDocumentLoad method is called. You can use the onDocumentLoad method to retrieve properties and manipulate or locate elements in the existing HTML document. For information on using the initForm and onDocumentLoad methods on server-side classes, see the section below, "Using the com.ms.wfc.html Package on a Server."
Elements are objects derived from DhElement, which is the superclass of all user interface elements in the com.ms.wfc.html package. There is a certain consistency you can count on when using any object derived from DhElement:
If an element is already on the page when the DhDocument.onDocumentLoad method is called, you can call the document's findElement method and start programming to that element. You can also call setBoundElements from initForm to merge known elements on the page with elements in your DhDocument-derived class. (The findElement method has better performance but specifically requires that onDocumentLoad is called first.)
The searching routine used by findElement and setBoundElements assumes that the element you want to bind to has an ID attribute set to a particular name. Using findElement, you can also enumerate all the elements in the document until you find the one you are interested in.
Containers are elements that can hold other elements. A basic example is the <DIV> element, which can contain any other HTML item. More complex examples include table cells and, of course, the document itself. In most cases, containers can be arbitrarily nested, such as having a table inside a cell of another table.
Containers are like other elements. They are created with a new statement, and many can be positioned and sized on the page. You can position and size elements within a container and set up their z-order relationships. One of the powerful features of DHTML is that you can then change any of these attributes in your code.
Of course, you can also allow elements within a container to be positioned using the normal HTML layout rules. Call either the setLocation or setBounds method of an element to set its absolute position, or call resetLocation to let the HTML layout engine position it (immediately after the last element in the HTML flow layout).
Once you have created a container element, you can add elements to it using either the setNewElements or add method. This mechanism follows the regular pattern of parent-child relationships: the elements, which can also be other containers, added to the container become its children. None of the elements is actually attached to the document until the topmost container, which is not a part of any other container, is added to the document.
You can position and size a container using its setBounds method. For example, to create a container, type:
DhForm myForm = new DhForm();
You can then set various attributes on the container, including the ToolTip that is shown when the mouse hovers over the panel:
myForm.setToolTip("This text appears when the mouse hovers");
myForm.setFont("Arial", 10);
myForm.setBackColor(Color.RED);
myForm.setBounds(5, 5, 100, 100);
Finally, you can add the container you've just created to the document in your DhDocument-derived class (such as Class1.java):
this.add(myForm);
When adding elements to the container, you can specify where they go in the z-order using one of a set of constants provided by the com.ms.wfc.html package. Elements are added with a default size and position. You can call setBounds on the elements to specify a different size.
DhForm myOverLay1 = new DhForm();
DhForm myOverLay2 = new DhForm();
myOverLay1.setBackColor(Color.BLACK);
myOverLay1.setBounds(10, 10, 50, 50);
myOverLay2.setBackColor(Color.BLUE);
myOverLay2.setBounds(20,25, 50, 50);
myForm.add(myOverLay1, null, DhInsertOptions.BEGINNING);
// Black on top of blue.
myForm.add(myOverLay2, myOverLay1, DhInsertOptions.BEFORE);
// Blue on top of black (uncomment below and comment above ).
// myForm.add(myOverLay2, myOverLay1, DhInsertOptions.AFTER);
You can also use the setZIndex method after the elements are added to move the elements around in the z-order. For example, the following syntax does not explicitly set a z-order on the added element but uses the default z-order (that is, on top of all other elements):
myForm.add(myText);
You can set this explicitly as follows, where num is an integer representing the relative z-order of the element within its container:
myText.setZIndex(num);
The element with the lowest number is at the bottom of the z-order (that is, everything else covers it). The element with the highest number is at the top (that is, it covers everything else).
Many elements in a DHTML program can trigger events. The com.ms.wfc.html package uses the same event model as the com.ms.wfc.ui package. If you are familiar with that mechanism, you'll find little difference between the two. A button is a good example. Suppose you want to handle the event that occurs when a user clicks a button on a page. Here's how:
public class Class1 extends DhDocument
{
Class1() { initForm();}
DhButton myButton = new DhButton();
private void initForm()
{
add(myButton);
myButton.addOnClick(new EventHandler(this.myButtonClick));
}
void myButtonClick(Object sender, Event e)
{
((DhButton) sender).setText("I've been clicked");
}
}
In this code, whenever the button triggers the onClick event (that is, when it is clicked), the myButtonClick event handler is called. The code inside the myButtonClick event handler does very little in this example. It just sets the caption on the button to new text.
Most events propagate all the way up a containment tree; this means that the click event is seen by the button's container and by the button itself. Although typically programmers handle events in the container closest to the event, this event bubbling model can be useful in special cases. It provides the programmer with the flexibility to decide the best place to code the event handlers.
Many different events can be triggered by elements in DHTML, and you can catch them all in the same way. For example, to determine when the mouse is over a button, try the following code, which catches mouseEnter and mouseLeave events for the button:
public class Class1 extends DhDocument
{
DhButton button = new DhButton();
private void initForm()
{
button.addOnMouseEnter(new MouseEventHandler(this.buttonEnter));
button.addOnMouseLeave(new MouseEventHandler(this.buttonExit);
setNewElements( new DhElement[] { button } );
}
void buttonEnter(Object sender, MouseEvent e)
{
button.setText("I can feel that mouse");
}
void buttonExit(Object sender, MouseEvent e)
{
button.setText("button");
}
}
All events that can be triggered (and caught) are defined in the event classes, based on com.ms.wfc.core.Event.
You can think of a Style object as a freestanding collection of properties. The term style is borrowed from the word processing world where the editing of a style sheet is independent of the documents to which you apply it. The same is true for using and applying Style objects in this library.
As an example, your boss tells you that the new corporate color is red and you need to change the color of elements in your HTML pages. You can, of course, set properties directly on elements, which is the traditional model for GUI framework programming:
// old way of doing things...
DhText t1 = new DhText();
DhText t2 = new DhText();
t1.setColor( Color.RED );
t1.setFont( "arial");
t2.setColor( Color.RED );
t2.setFont( "arial");
You could, of course, use derivation to save yourself time. For example, you might consider improving this with the following code:
// old way of doing things a little better...
public class MyText extends DhText
{
public MyText()
{
setColor( Color.RED );
setFont( "arial" );
}
This works fine until you decide you also want those settings for buttons, labels, tabs, documents, and so on. And you'll find yourself with even more work when you apply these to another part of your program or to another program.
The answer to this problem is a Style object. While using this library, you can instantiate a Style object and set its properties at any point:
// STEP 1: Create style objects.
DhStyle myStyle = new DhStyle();
// STEP 2: Set properties on style objects.
myStyle.setColor( Color.RED );
myStyle.setFont( "arial" );
Then at any other time in the code, you can apply that style to any number of elements:
DhText t1 = new DhText();
DhText t2 = new DhText();
// STEP 3: Apply styles using the setStyle method.
t1.setStyle( myStyle );
t2.setStyle( myStyle );
When it's time to keep up with the dynamic nature of high-level policy setting at your corporation, the following line sets all instances of all elements with myStyle set on them to change color:
myStyle.setColor( Color.BLUE );
Here is the really powerful part: All this is available during run time. Every time you make a change to the Style object, the DHTML run time dynamically reaches back and updates all elements to which that Style object is applied.
For more information, see the section below, "Understanding Style Inheritance."
The HTML rendering engine can determine the style to use if conflicting styles are set on an element. For example, if an element has the color property set directly on it (DhElement.setColor), the color defined by the color property is used. However, if an element has a Style object on it (DhElement.setStyle) and that object has the color property set, that value is used. Failing to find a color or a style, the same process is used with the element's container (DhElement.getParent), and failing that, with the container of that container and so on.
The process continues up to the document. If the document doesn't have a color property set on it, the environment (either browser settings or some other environment settings) determines the color to use.
This process is called cascading styles because the properties cascade down the containment hierarchy. The underlying mechanism for DhStyle objects is called Cascading Style Sheets (CSS) by the W3C.
Working with tables is actually no different from any other part of the library; the principles and programming model apply to tables as they do to any other type of element. A table, however, is such a powerful and popular element that it is worth discussing.
To use a table, you create a DhTable object, add DhRow objects to that, and then add DhCell objects to the rows. The following are the rules for table usage:
While this may seem restrictive, you can easily create a simple container that emulates a GridBag with the following code:
import com.ms.wfc.html.*;
public class GridBag extends DhTable
{
int cols;
int currCol;
DhRow currRow;
public GridBag(int cols)
{
this.cols = cols;
this.currCol = cols;
}
public void add(DhElement e)
{
if( ++this.currCol >= cols )
{
this.currRow = new DhRow();
super.add(currRow);
this.currCol = 0;
}
DhCell c = new DhCell();
c.add(e);
this.currRow.add( c );
}
}
To use this GridBag class, you just set the number of rows and columns (they must be the same with this implementation) and then assign elements to cells. The following is an example of the code in your DhDocument-derived class that uses this GridBag:
protected void initForm()
{
GridBag myTable = new GridBag(5);
for (int i = 0; i < 25; ++i){
myTable.add(new DhText("" + i));
setNewElements( new DhElement[] { myTable } );
}
}
One of the most powerful uses of the library is the combination of tables and Style objects. This combination enables you to create custom report generators that are powerful, professional looking, and easy to code.
Tables also have data binding capabilities. Using a com.ms.wfc.data.ui.DataSource object, you can bind data to your table, as shown in the following sample code.
import com.ms.wfc.data.*;
import com.ms.wfc.data.ui.*;
.
.
.
void private initForm( ){
DhTable dataTable = new DhTable();
dataTable.setBorder( 1 );
dataTable.setAutoHeader( true );
DataSource dataSource = new DataSource();
dataSource.setConnectionString("DSN=Northwind");
dataSource.setCommandText("SELECT * FROM Products" );
// If you would like to use the table on the server,
// call dataSource.getRecordset() to force the DataSource
// to synchronously create the recordset; otherwise,
// call dataSource.begin(), and the table will be populated
// when the recordset is ready, asynchronously.
if ( !getServerMode() ){
dataSource.begin();
dataTable.setDataSource( dataSource );
} else
dataTable.setDataSource( dataSource.getRecordset() );
setNewElements( new DhElement[] { dataTable } );
}
If you know the format of the data that is going to be returned, you can also specify a template (repeater) row that the table will use to format the data that is returned. The steps to do this are as follows:
DhTable dataTable = new DhTable();
DhRow repeaterRow = new DhRow();
RepeaterRow.setBackColor( Color.LIGHTGRAY );
RepeaterRow.setForeColor( Color.BLACK );
DataBinding[] bindings = new DataBinding[3];
DhCell cell = new DhCell();
DataBinding[0] = new DataBinding( cell, "text", "ProductID" );
repeaterRow.add( cell );
cell = new DhCell();.
DataBinding[1] = new DataBinding( cell, "text", "ProductName" );
cell = new DhCell();.
cell.setForeColor( Color.RED );
cell.add( new DhText( "$" ) );
DhText price = new DhText();
price.setFont( Font.ANSI_FIXED );
DataBinding[2] = new DataBinding( price, "text", "UnitPrice" );
cell.add( price );
repeaterRow.add( cell );
// Set up the table repeater row and bindings.
table.setRepeaterRow( repeaterRow );
table.setDataBindings( bindings );
// Create and set the header row.
DhRow headerRow = new DhRow();
headerRow.add( new DhCell( "ProductID" ) );
headerRow.add( new DhCell( "Product Name" ) );
headerRow.add( new DhCell( "Unit Price" ) );
table.setHeaderRow( headerRow );
DataSource ds = new DataSource();
ds.setConnectionString("DSN=Northwind");
ds.setCommandText("SELECT ProductID, ProductName, UnitPrice FROM Products WHERE UnitPrice < 10" );
table.setDataSource( ds );
ds.begin();
setNewElements( new DhElement[] { table } );
// alternately: add( table );
Your table is now populated with the data from the recordset and formatted like the template row.
The com.ms.wfc.html package can also be used on the server to provide a programmatic model for generating HTML and sending it to the client page. Unlike the client-side Dynamic HTML model, the server-side model is static because the server Java class has no interaction with the client document. Instead, the server composes HTML elements and sends them off sequentially to the client as they are encountered in the HTML template if one is specified.
Although not fully dynamic, this is still a powerful server feature. For example, you can apply DhStyle attributes to all parts of some template HTML code and then generate vastly different looking pages by just changing the DhStyle attributes. You do not have to programmatically generate all the individual style changes. Another advantage is that you can use the same model for generating dynamic HTML for both client and server applications, thereby making the HTML generation easier to learn and remember.
There are currently two modes of generating HTML on the server. Both use Active Server Pages (ASP) scripting and a class based on the com.ms.wfc.html classes. The first is the "bare-bones" approach that relies more on the ASP script. The second uses a class derived from DhDocument and is very similar to the model that you use on the client because it places more control inside the class than in the script.
This approach uses two ASP methods on the server page: getObject and Response.Write. The getObject method is used to instantiate a class based on the WFC com.ms.wfc.html classes; the Response.Write method writes the generated HTML string to the client. The com.ms.wfc.html.DhElement class provides a getHTML method that creates the HTML string; this string is then sent to the client page using the ASP Response.Write method.
For example, you have a class called MyServer that extends DhForm and incorporates some HTML elements. In your ASP script, you first call getObject("java:MyServer") to create a DHTML object. You can then perform whatever actions you want on the object from your ASP script, such as setting properties on the object. When you have finished, you call the object's getHTML method to generate the string and pass that result to the ASP Response.Write method, which sends the HTML to the client. The following code fragments show the relevant ASP script and Java code for creating a DhEdit control in HTML and sending it to the client.
ASP SCRIPT
Dim f,x
set f = getObject( "java:dhFactory" )
set x= f.createEdit
x.setText( "I'm an edit!" )
Response.Write( x.getHTML() )
Response.Write( f.createBreak().getHTML() )
.
.
.
JAVA CODE
public class dhFactory {
public dhFactory(){ }
public DhBreak createBreak() {
return new DhBreak();
}
public DhEdit createEdit(){
return new DhEdit();
}
}
This approach is slightly more sophisticated and closer to the client model. It still uses an ASP script to site the DhDocument class, but the rest of the operational code is in Java. As in the client model, the DhModule class is instantiated as the Java component on the Web page and automatically calls the initForm method in your project class that derives from DhDocument.
As in the client model, you can do all your binding setup in your initForm call. The onDocumentLoad function is also called for your server-side class. In this method, you can access the IIS Response and Request objects (using the DhModule getResponse and getRequest methods) and also append new DhElement items to your document stream. However, it is important to understand that you cannot use document-level functions, such as findElement, or use enumeration operations on a server-side document, except on items that you have explicitly added to your DhDocument-derived class.
To use the HTML-based approach, follow these steps:
The following sample shows an ASP page that uses a template:
<% Set mod = Server.CreateObject( "com.ms.wfc.html.DhModule" )
mod.setCodeClass( "Class1" )
mod.setHTMLDocument( "c:\inetpub\wwwroot\Page1.htm" )
%>
At run time, the framework recognizes that your class is running on a server and acts accordingly.
Once instantiated, you can add elements or text to your DhDocument-derived class. Those items will be appended to any template specified just before the </BODY> tag.
The following sample demonstrates a class that works on either the client or the server.
import com.ms.wfc.ui.*;
import com.ms.wfc.html.*;
public class Class1 extends DhDocument {
public Class1(){
initForm();
}
DhText txt1 = new DhText();
DhForm sect = new DhForm();
private void initForm() {
// Call getServerMode() to check
// if this object is running on the server.
if ( getServerMode() ){
txt1.setText( "Hello from the server!" );
}else{
txt1.setText( "Hello from the client!" );
}
// Size the section, set its background color
// and add the txt1 element to it.
sect.setSize( 100, 100 );
sect.setBackColor( Color.RED );
sect.add( txt1 );
add( sect );
setNewElements( new DhElement[] { sect } );
}
}
If you want to bind to an existing HTML document on the page, use the DhDocument.setBoundElements method, just as you would on the client. For example, if your HTML template contains the following HTML:
<P>
The time is:<SPAN id=txt1></SPAN><BR>
<INPUT type=text id=edit1
</P>
Your initForm method looks like this:
DhText txt1 = new DhText();
DhEdit edit = new DhEdit();
DhComboBox cb = new DhComboBox();
private void initForm(){
txt1.setText(com.ms.wfc.app.Time().formatShortTime());
edit.setText("Hello, world!");
edit.setBackColor( Color.RED );
setBoundElements( new DhElement[]{ txt1.setBindID( "txt1" )
edit.setBindID( "edit1" ) } );
// Create a combo box to be added after the bound items.
cb.addItem( "One" );
cb.addItem( "Two" );
// Add the items to the end of document.
setNewElements( new DhElement[]{ cb });
}
There are very few differences between the interpretation of server and client HTML classes. However, there is one important difference. Once elements are written (sent to the client), they cannot be modified as they can on a client document. The DhCantModifyElement exception, which is relevant only for server applications, is thrown after a write has been performed on an element if an attempt is made to modify that element again. (This underscores the fact that there is no real interoperation between the server Java class and the client document as there is on the client between the Java class and the document: From the server's standpoint, once written, the element is essentially gone.)
One advantage of using the DhDocument-derived method is that you can implement an HTML template that is embedded with attributes recognized by the com.ms.wfc.html classes. By first decorating the HTML elements in the file with ID attributes and then setting the corresponding IDs in the source code using the DhElement.setBindID method, you can bind to these HTML elements, set properties on the elements, add your own inline HTML code, and so forth. This essentially allows you to code and design separate templates ahead of time and populate the template with dynamic data when the document is requested from the server. | http://msdn.microsoft.com/en-us/library/aa260513(VS.60).aspx | crawl-002 | refinedweb | 4,061 | 55.64 |
undraw 1.0.2
undraw: ^1.0.2 copied to clipboard
A new Flutter package for open source illustrations. These illustrations are designed by Katerina Limpitsouni and the application is developed by westdabestdb.
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub pub add undraw
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: undraw: ^1.0.2
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:undraw/undraw.dart'; | https://pub.dev/packages/undraw/install | CC-MAIN-2021-17 | refinedweb | 109 | 58.99 |
This is a mess. What I want to do is make 1 program with the 3 loops in it do something different and produce an output of what each loop does.
Do loop: print out every other character
For loop: print out the string reversed
While loop: print out in reverse every other character
Example:
I enter something stupid like: now is the time
I get as a result: Do - nwi h ie
For - emit eht si won
While - ei h iwn
Now, I have no problem doing this in their own separate programs and using an array, but I am not allowed to do that this time. Also, I cannot use any reverse methods or arrays.
This is a mess, and as of posting this, I am still working with my partner trying to figure this out. I do realize I am missing a substantiate amount of code, but we are at a road block.
/* January 19, 2012 Lab 2 Part B Part B Program: Loops.java */ import java.util.Scanner; public class Loops { public static void main(String[] args) { String input, output; int i = 0; Scanner scan = new Scanner(System.in); System.out.println("Enter something: "); input = scan.nextLine(); output = ""; do { i = input.length() - 1; output = output + input.charAt(i); i--; } while (i <= 0); System.out.println("Every other character: " + output); for (i = input.length() - 1; i != -1; i--); { output += input.charAt(i); } while (i >= 0) { output = output + input.charAt(i); i--; } System.out.println("Reversed every other character: " + output); } } | https://www.daniweb.com/programming/software-development/threads/407665/3-loops-in-1-program | CC-MAIN-2017-09 | refinedweb | 252 | 67.76 |
Pick whatever name the client wants to use for the local domain, example SBSCLIENT. In the DNS section, append their name with ".local." The name in DNS will read SBSCLIENT.LOCAL.
With the .local appended on the DNS namespace, it won't be published to the Internet root servers.
Hi
BFilmFan is right. We use Windows 2000 Server with SBS 2000 installed over the top. We set up a domain which is not accessible over the internet, and it has a .local name: for example our server is servername.domainname.local and the pc's are pcname.domainname.local
SBS 2000 Without Domain
In my Client office 39Pc with Windows2000 server. Now i want to install SBS2000 Server; but there is a problem beacuse my clinet didnt register Domain Name & he don't want register the Domain Name. So how can i install the SBS2000? If SBS2000 work without the Domain then plz guide me.
Thanks
This conversation is currently closed to new comments. | https://www.techrepublic.com/forums/discussions/sbs-2000-without-domain/ | CC-MAIN-2021-43 | refinedweb | 164 | 68.67 |
Overview of user controls vs. custom controls
ASP.NET Support Voice column
Overview of user controls vs. custom control..
Introduction
OverviewIn this month's column, I'll discuss the following topics:
- What are user controls?
- What are custom controls?
- What are the basic differences between user controls and custom controls?
What are user. For more details about the Web Forms programming model, visit the following Microsoft Developer Network (MSDN) Web sites:
Introduction to Web Forms pages
Web Forms code model
Web Forms code model:
- Open a text or HTML editor, and create a server-side code block exposing all the properties, methods, and events.
<script language="C#" runat="server"> public void button1_Click(object sender, EventArgs e) { label1.Text = "Hello World!!!"; }</script>
- Create a user interface for the user control.
<asp:Label <br><br><asp:button
How to use a user control in a Web Forms page
- Create a new Web Forms page (.aspx) in Microsoft Visual Studio .NET 2002, Microsoft Visual Studio .NET 2003, Microsoft Visual Studio 2005, or any text editor.
- Declare the @ Register directive. For example, use the following code.Note Assume that the user control and the Web Forms page are in the same location.
<%@ Register TagPrefix="UC" TagName="TestControl" Src="test.ascx" %>
- To use the user control in the Web Forms page, use the following code after the @ Register directive.
<html> <body> <form runat="server"> <UC:TestControl </form> </body> </html>
How to create an instance of a user control programmatically in the code behind file of a Web Forms pageThe previous example instantiated a user control declaratively in a Web Forms page using the @ Register directive. However, you can instantiate a user control dynamically and add it to the page. Here are the steps for doing that:
- Create a new Web Forms page in Visual Studio.
- Navigate to the code behind file generated for this Web Forms page.
- In the Page_Load event of the Page class, write the following code.Note You can add a user control dynamically at certain events of the page life cycle.
// Load the control by calling LoadControl on the page class.Control c1 = LoadControl("test.ascx"); // Add the loaded control in the page controls collection. Page.Controls.Add(c1);
For more information, visit the following Web sites:Adding controls to a Web Forms page programmatically
Control execution lifecycle
Dynamic Web controls, postbacks, and view state, by Scott Mitchell
How a user control is processedWhen a page with a user control is requested, the following occurs:
- The page parser parses the .ascx file specified in the Src attribute in the @ Register directive and generates a class that derives from the System.Web.UI.UserControl class.
- The parser then dynamically compiles the class into an assembly.
- If you are using Visual Studio, then at design time only, Visual Studio creates a code behind file for the user control, and the file is precompiled by the designer itself.
- Finally, the class for the user control, which is generated through the process of dynamic code generation and compilation, includes the code for the code behind file (.ascx.cs) as well as the code written inside the .ascx file.
What are custom controls?Custom controls are compiled code components that execute on the server, expose the object model, and render markup text, such as HTML or XML, as a normal Web Form or user control does.
How to choose the base class for your custom controlTo write a custom control, you should directly or indirectly derive the new class from the System.Web.UI.Control class or from the System.Web.UI.WebControls.WebControl class:
- You should derive from System.Web.UI.Control if you want the control to render nonvisual elements. For example, <meta> and <head> are examples of nonvisual rendering.
- You should derive from System.Web.UI.WebControls.WebControl if you want the control to render HTML that generates a visual interface on the client computer.
In brief, the Control class provides the basic functionality by which you can place it in the control tree for a Page class. The WebControl class adds the functionality to the base Control class for displaying visual content on the client computer. For example, you can use the WebControl class to control the look and styles through properties like font, color, and height.
How to create and use a simple custom control that extends from System.Web.UI.Control using Visual Studio
- Start Visual Studio.
- Create a class library project, and give it a name, for example, CustomServerControlsLib.
- Add a source file to the project, for example, SimpleServerControl.cs.
- Include the reference of the System.Web namespace in the references section.
- Check whether the following namespaces are included in the SimpleServerControl.cs file.
SystemSystem.CollectionsSystem.ComponentModelSystem.DataSystem.WebSystem.Web.SessionStateSystem.Web.UISystem.Web.UI.WebControls
- Inherit the SimpleServerControls class with the Control base class.
public class SimpleServerControl : Control
- Override the Render method to write the output to the output stream.Note The HtmlTextWriter class has the functionality of writing HTML to a text stream. The Write method of the HtmlTextWriter class outputs the specified text to the HTTP response stream and is the same as the Response.Write method.
protected override void Render(HtmlTextWriter writer) { writer.Write("Hello World from custom control");}
- Compile the class library project. It will generate the DLL output.
- Open an existing or create a new ASP.NET Web application project.
- Add a Web Forms page where the custom control can be used.
- Add a reference to the class library in the references section of the ASP.NET project.
- Register the custom control on the Web Forms page.
<%@ Register TagPrefix="CC " Namespace=" CustomServerControlsLib " Assembly="CustomServerControlsLib " %>
- To instantiate or use the custom control on the Web Forms page, add the following line of code in the <form> tags.Note In this code, SimpleServerControl is the control class name inside the class library.
<form id="Form1" method="post" runat="server"> <CC:SimpleServerControl </CC:SimpleServerControl ></form>
- Run the Web Forms page, and you will see the output from the custom control.
- Open any text editor.
- Create a file named SimpleServerControl.cs, and write the code as given in steps 1 through 14.
- In the PATH variable, add the following path:c:\windows (winnt)\Microsoft.Net\Framework\v1.1.4322
- Start a command prompt, and go to the location where SimpleServerControl.cs is present.
- Run the following command:csc /t:library /out: CustomServerControlsLib. SimpleServerControl.dll /r:System.dll /r:System.Web.dll SimpleServerControl.csFor more information about the C# compiler (csc.exe), visit the following MSDN Web site:
- To run the custom control on the Web Forms page, do the following:
- Create a directory under the wwwroot folder.
- Start Microsoft Internet Information Services (IIS) Manager, and mark the new directory as the virtual root directory.
- Create a Bin folder under the new directory.
- Copy the custom control DLL into the Bin folder.
- Place the sample Web Forms page that you created in the previous steps inside the new directory.
- Run the sample page from IIS Manager.
How to expose properties on the custom controlI will build on the previous example and introduce one or more properties that can be configured while using the custom control on the Web Forms page.
The following example shows how to define a property that will display a message from the control a certain number of times, as specified in the property of the control:
- Open SimpleServerControl.cs in a text editor.
- Add a property in the SimpleServerControl class.
public class SimpleServerControl : Control{ private int noOfTimes; public int NoOfTimes { get { return this.noOfTimes; } set { this.noOfTimes = value; } } protected override void Render (HtmlTextWriter writer) { for (int i=0; i< NoOfTimes; i++) { write.Write("Hello World.."+"<BR>"); } }}
- Compile the custom control.
- To use the custom control on the Web Forms page, add the new property to the control declaration.
<CC:SimpleServerControl</CC:SimpleServerControl>
- Running the page will display the message "Hello world" from the custom control as many times as specified in the property of the control.
How to apply design-time attributes on the custom control
Why design-time attributes are neededThe custom control that you built in the previous example works as expected. However, if you want to use that control in Visual Studio, you may want the NoOfTimes property to be automatically highlighted in the Properties window whenever the custom control is selected at design time.
To make this happen, you need to provide the metadata information to Visual Studio, which you can do by using a feature in Visual Studio called attributes. Attributes can define a class, a method, a property, or a field. When Visual Studio loads the custom control's class, it checks for any attributes defined at the class, method, property, or field level and changes the behavior of the custom control at design time accordingly.
To find more information about attributes, visit the following MSDN Web site:
- Open SimpleServerControl.cs in a text editor.
- Introduce some basic attributes at the class level, for example, DefaultProperty, ToolboxData, and TagPrefixAttrbute. We'll build our sample on these three attributes.
[ // Specify the default property for the control. DefaultProperty("DefaultProperty"), // Specify the tag that is written to the aspx page when the // control is dragged from the Toolbox to the Design view. // However this tag is optional since the designer automatically // generates the default tag if it is not specified. ToolboxData("<{0}:ControlWithAttributes runat=\"server\">" + "</{0}:ControlWithAttributes>") ] public class ControlWithAttributes : Control { private string _defaultProperty; public string DefaultProperty { get { return "This is a default property value"; } set { this._defaultProperty = value; } } protected override void Render(HtmlTextWriter writer) { writer.Write("Default Property --> <B>" + DefaultProperty + "</B>"); } }
- There is one more tag called TagPrefixAttrbute. It is an assembly-level attribute that provides a prefix to a tag when you drag the control from the Toolbox to the designer. Otherwise, the designer generates a prefix such as "cc1" by default. TagPrefixAttrbute is not directly applied to the control class. To apply TagPrefixAttrbute, open AssemblyInfo.cs, include the following line of code, and then rebuild the project.Note If you want to build the source using the command line, you need to create the AssemblyInfo.cs file, place the file in the directory that contains all the source files, and run the following command to build the control:
[assembly:TagPrefix("ServerControlsLib ", "MyControl")]>:
Control.Render method
Control.RenderControl method
Control.RenderChildren method
Control.RenderControl method
Control.RenderChildren method:
WebControl.RenderBeginTag method
WebControl.RenderContents method
WebControl.RenderEndTag method
WebControl.RenderContents method
WebControl.RenderEndTag method.
Properties
Article ID: 893667 - Last Review: 11/14/2007 09:20:09 - Revision: 1.8
Microsoft ASP.NET 1.1, Microsoft ASP.NET 1.0
- kbhowto kbasp KB893667 | https://support.microsoft.com/en-us/kb/893667 | CC-MAIN-2017-04 | refinedweb | 1,772 | 50.12 |
Write components that are easy to test
Vue Test Utils helps you write tests for Vue components. However, there's only so much VTU can do.
Following is a list of suggestions to write code that is easier to test, and to write tests that are meaningful and simple to maintain.
The following list provide general guidance and it might come in handy in common scenarios.
Do not test implementation details
Think in terms of inputs and outputs from a user perspective. Roughly, this is everything you should take into account when writing a test for a Vue component:
Everything else is implementation details.
Notice how this list does not include elements such as internal methods, intermediate states or even data.
The rule of thumb is that a test should not break on a refactor, that is, when we change its internal implementation without changing its behavior. If that happens, the test might rely on implementation details.
For example, let's assume a basic Counter component that features a button to increment a counter. We could write the following test:
<template> <p class="paragraph">Times clicked: {{ count }}</p> <button @increment</button> </template> <script> export default { data() { return { count: 0 } }, methods: { increment() { this.count++ } } } </script>
We could write the following test:
import { mount } from '@vue/test-utils' import Counter from './Counter.vue' test('counter text updates', async () => { const wrapper = mount(Counter) const paragraph = wrapper.find('.paragraph') expect(paragraph.text()).toBe('Times clicked: 0') await wrapper.setData({ count: 2 }) expect(paragraph.text()).toBe('Times clicked: 2') })
Notice how here we're updating its internal data, and we also rely on details (from a user perspective) such as CSS classes.
TIP
Notice that changing either the data or the CSS class name would make the test fail. The component would still work as expected, though. This is known as a false positive.
Instead, the following test tries to stick with the inputs and outputs listed above:
import { mount } from '@vue/test-utils' test('text updates on clicking', async () => { const wrapper = mount(Counter) expect(wrapper.text()).toContain('Times clicked: 0') const button = wrapper.find('button') await button.trigger('click') await button.trigger('click') expect(wrapper.text()).toContain('Times clicked: 2') })
Libraries such as Vue Testing Library are build upon these principles. If you are interested in this approach, make sure you check it out.
Build smaller, simpler components
A general rule of thumb is that if a component does less, then it will be easier to test.
Making smaller components will make them more composable and easier to understand. Following is a list of suggestions to make components simpler.
Extract API calls
Usually, you will perform several HTTP requests throughout your application. From a testing perspective, HTTP requests provide inputs to the component, and a component can also send HTTP requests.
TIP
Check out the Making HTTP requests guide if you are unfamiliar with testing API calls.
Extract complex methods
Sometimes a component might feature a complex method, perform heavy calculations, or use several dependencies.
The suggestion here is to extract this method and import it to the component. This way, you can test the method in isolation using Jest or any other test runner.
This has the additional benefit of ending up with a component that's easier to understand because complex logic is encapsulated in another file.
Also, if the complex method is hard to set up or slow, you might want to mock it to make the test simpler and faster. Examples on making HTTP requests is a good example – axios is quite a complex library!
Write tests before writing the component
You can't write untestable code if you write tests beforehand!
Our Crash Course offers an example of how writing tests before code leads to testable components. It also helps you detect and test edge cases. | https://test-utils.vuejs.org/guide/essentials/easy-to-test | CC-MAIN-2022-40 | refinedweb | 637 | 56.35 |
Have you purposefully missed showing how to create a new Qualifier for @Fancy or is it not needed?
Posted by Sreekanth on April 24, 2011 at 10:30 PM PDT #
Sreekanth,
That is pretty straight forward and boiler-plate code so left it out. Moreover NetBeans let you generate it with a single click anyway :-)
Posted by Arun Gupta on April 24, 2011 at 11:07 PM PDT #
Just to make it complete for a newbie :-)
Posted by Sreekanth on April 24, 2011 at 11:13 PM PDT #
Posted by guest on May 30, 2011 at 06:00 PM PDT #
Thanks for the post. It covers exactly what I was looking for. However, I'm not getting the expected results when using @Inject @Any with Instance.
When I try to access the Instance, it is always null. Any suggestions? Does it matter if the other implementations are in a separate Java EE Module? I'm using Glassfish 3.1 and NetBeans 7.0.1 JDK 1.6.0_27
e.g.
Module A
@Stateless
@LocalBean
@Interceptors({LogInterceptor.class})
@TransactionAttribute(TransactionAttributeType.MANDATORY)
public class GalleryService {
@EJB private CRUDService crudService;
@Inject @Any private Instance<OperationService> providers;
...
public void invoke(String galleryId, OperationDescriptor descriptor) {
for (OperationService provider : providers) {
...
}
}
}
Module B
@Named
@Singleton
@Startup
@Local(OperationService.class)
@Interceptors({LogInterceptor.class})
@TransactionAttribute(TransactionAttributeType.SUPPORTS)
public class BacktestService extends ABOperationService<BacktestDescriptor> {
...
}
Posted by guest on May 21, 2012 at 12:21 AM PDT #
Should you be injecting ABOperationService instead ? You can try asking at.
CDI recommends specifying interceptors in "beans.xml" as that is more loosely coupled so you may like to fix that as well.
Posted by Arun Gupta on May 21, 2012 at 09:45 AM PDT #
The issue was a missing beans.xml, thus CDI was not working in that module. Now that I've got my beans.xml (Doh!), I'll look into moving the interceptor there. Thanks for the tip.
Again, great post. There are not a lot of places where all the combinations are pulled together into a single explanation like this.
Posted by Mike on May 21, 2012 at 03:04 PM PDT #
Arun,
"The @Default qualifier... exists on all beans without an explicit qualifer, except @Named."
"Similarly
@Named public class SimpleGreeting implements Greeting { . . . }
is equivalent to:
@Named @Default public class SimpleGreeting implements Greeting..."
Isn't this a contradiction?
Posted by guest on December 29, 2012 at 03:26 AM PST # | https://blogs.oracle.com/arungupta/entry/totd_161_java_ee_6 | CC-MAIN-2015-40 | refinedweb | 401 | 51.44 |
pathex alternatives and similar packages
Based on the "Macros" category.
Alternatively, view pathex alternatives based on common mentions on social networks and blogs.
OK8.9 0.0 pathex VS OKElegant error/exception handling in Elixir, with result monads.
pipes8.4 0.0 pathex VS pipesMacros for more flexible composition with the Elixir Pipe operator
shorter_maps7.5 0.0 pathex VS shorter_mapsElixir ~M sigil for map shorthand. `~M{id, name} ~> %{id: id, name: name}`
expat7.2 0.0 pathex VS expatReusable, composable patterns across Elixir libraries
eventsourced6.8 0.0 pathex VS eventsourcedFunctional domain models with event sourcing in Elixir
ok_jose6.2 0.0 pathex VS ok_josePipe elixir functions that match ok/error tuples or custom patterns.
crudry5.8 2.8 pathex VS crudryCrudry is an Elixir library for DRYing CRUD in Phoenix Contexts and Absinthe Resolvers.
pipe_to5.7 0.0 pathex VS pipe_toThe enhanced elixir pipe operator which can specify the target position
pattern_tap5.1 0.0 pathex VS pattern_tapMacro for tapping into a pattern match while using the pipe operator
mdef5.0 0.0 pathex VS mdefEasily define multiple function heads in elixir
happy4.6 0.0 pathex VS happythe alchemist's happy path with elixir
pipe_here4.5 0.0 pathex VS pipe_hereAn Elixir macro for easily piping arguments at any position.
named_args3.7 0.0 pathex VS named_argsAllows named arg style arguments in Elixir
pit3.2 0.0 pathex VS pitElixir macro for extracting or transforming values inside a pipe flow.
rulex2.9 0.0 pathex VS rulexThis tiny library (2 macros only) allows you to define very simple rule handler using Elixir pattern matching.
guardsafe2.9 0.0 pathex VS guardsafeMacros expanding into code that can be safely used in guard clauses.
anaphora2.6 0.0 pathex VS anaphoraThe anaphoric macro collection for Elixir
apix1.8 0.0 pathex VS apixSimple convention and DSL for transformation of elixir functions to an API for later documentation and or validation.
unsafe1.6 0.0 pathex VS unsafeGenerate unsafe (!) bindings for Elixir functions
Bang1.1 0.0 pathex VS BangBang simply adds dynamic bang! functions to your existing module functions with after-callback.
kwfuns0.8 0.0 pathex VS kwfunsFunctions with syntax for keyword arguments and defaults
backports0.6 0.0 pathex VS backportsEnsure backwards compatibility even if newer functions are used
lineo0.3 0.0 pathex VS lineoparse transform for accurate line numbers
rebind0.3 0.0 pathex VS rebindrebind parse transform for erlang
Scout APM: A developer's best friend. Try free for 14-days
Do you think we are missing an alternative of pathex or a related project?
Popular Comparisons
README
Pathex
Fast. Really fast.
What is Pathex?
Pathex is a library for performing fast actions with nested data structures in Elixir. With pathex you can trivially set, get and update values in structures. It provides all necessary logic to manipulate data structures in different ways
Why another library?
Existing methods of accesssing data in nested structures are either slow (
Focus for example)
or do not provide much functionality (
put_in or
get_in for example).
For example setting the value in structure with Pathex is
70-160x faster than
Focus or
2x faster than
put_in and
get_in
You can checkout benchmarks at
You can find complete documentation with examples, howto's, guides at
Installation
def deps do [ {:pathex, "~> 1.0.0"} ] end
Usage
You need to import and require Pathex since it mainly operates macros
require Pathex import Pathex, only: [path: 1, path: 2, "~>": 2]
Or you can just
usePathex!
# This will require Pathex and import all operators and path/2 macro use Pathex
You need to create the path which defines the path to the item in elixir structure you want to get:
path_to_strees = path :user / :private / :addresses / 0 / :street path_in_json = ~P"users/1/street"json
This creates closure with
fnwhich can get, set and update values in this path
Use the path!
{:ok, "6th avenue" = street} = %{ user: %{ id: 1, name: "hissssst", private: %{ phone: "123-456-789", addresses: [ [city: "City", street: "6th avenue", mail_index: 123456] ] } } } |> Pathex.view(path_to_streets) %{ "users" => %{ "1" => %{"street" => "6th avenue"} } } = Pathex.force_set!(%{}, path_in_json, street)
Features
- Paths are really a set of pattern-matching cases. This is done to extract maximum efficency from BEAM's pattern-matching compiler
elixir # code for viewing variables for path iex> path 1 / "y" # almost equals to case do %{1 => x} -> case x do %{"y" => res} -> {:ok, res} _ -> :error end [_, x | _] -> case x do %{"y" => res} -> {:ok, res} _ -> :error end t when is_tuple(t) and tuple_size(t) > 1 -> case x do %{"y" => res} -> {:ok, res} _ -> :error end end
- Paths for special specifications can be created with sigils
elixir iex> mypath = ~P[user/name/firstname]json iex> Pathex.over(%{"user" => %{"name" => %{"firstname" => "hissssst"}}}, mypath, &String.capitalize/1) {:ok, %{"user" => %{"name" => %{"firstname" => "Hissssst"}}}}
elixir iex> mypath = ~P[:hey/"hey"]naive iex> Pathex.set([hey: %{"hey" => 1}], mypath, 2) {:ok, [hey: %{"hey" => 2}]}
- You can use variables inside paths
elixir iex> index = 1 iex> mypath = path :name / index iex> Pathex.view %{name: {"Linus", "Torvalds"}}, mypath {:ok, "Torvalds"} iex> index = 0 # Note that captured variables can not be overriden iex> Pathex.view %{name: {"Linus", "Torvalds"}}, mypath {:ok, "Torvalds"}
- You can create composition of lenses
elixir iex> path1 = path :user iex> path2 = path :phones / 1 iex> composed_path = path1 ~> path2 iex> Pathex.view %{user: %{phones: ["123-456-789", "987-654-321", "000-111-222"]}}, composed_path {:ok, "987-654-321"}
- Paths can be applied to different types of structures
elixir iex> user_path = path :user iex> Pathex.view %{user: "hissssst"}, user_path {:ok, "hissssst"} iex> Pathex.view [user: "hissssst"], user_path {:ok, "hissssst"}
No Magic
Pathex paths are just closures created with
fn.
Any
path or
~P is a macro for creating a closure.
Pathex.view/2,
Pathex.set/3,
Pathex.over/3 and etc are just macros for calling these closures.
Pathex.~>/2 is a simple macro which creates composition of two closures
Contributions
Welcome! You can check existing
TODO's
- If you have any suggestions or wan't to change something in this library don't hesitate to open an issue
- If you have any whitepapers about functional lenses, you can add them in a PR to the bottom of this readme | https://elixir.libhunt.com/pathex-alternatives | CC-MAIN-2021-43 | refinedweb | 1,029 | 62.98 |
Created on 2009-02-10 23:19 by mark.dickinson, last changed 2010-11-28 22:40 by gumtree. This issue is now closed.
In the 'coercion rules' section of the reference manual, at:
it says:
"""Over time, the type complex may be fixed to avoid coercion."""
In 3.x, the complex type has (necessarily) been fixed to avoid coercion,
and it ought to be a fairly easy task to backport that fix to 2.7, for
someone who wants to get his or her feet wet with some CPython hacking.
As far as I can see, there's no great benefit in such a change, except
that the presence of coercion for the complex type causes confusion
occasionally: see issue 3734 for an example of this.
Comment by gumtree copied from issue3734 discussion:
> While Mark Dickinson's patch fixes the documentation, it does not
offer
> a solution to the original problem, which was rooted in a need to
> provide special behaviour based on the numeric types. I made the
> original posting because I hoped that this problem could be resolved.
gumtree, would you be interested in working on a patch for this feature-
request? As mentioned above, the necessary changes are already present
in 3.x, so all that's entailed is figuring out which bits of the
Object/complexobject.c in the py3k source need to be transferred to the
trunk, and making sure that everything's properly tested.
Alternatively, could you explain in a little more detail why this change
is important to you? Subclassing complex doesn't seem like a very
common thing to want to do, so I'm curious about your use case.
I am happy to collaborate in finding a solution, but I do not have
enough knowledge or skill to work with the code alone.
Simply documenting it does not remove the frustration that a few people
will enounter. The key point being that you can subclass the other
numeric types, but not complex. Worse, you may think that you have
succeeded in subclassing complex too, because it is only when the
__ropn__ binary operators are used that the type reverts to the base
class type.
Would you want to subclass a numeric type? I agree, it is a bit obsure,
but I did a search on this before making the post and there have been
others who found the problem too.
In my case, I think that the motivation may seem a bit obscure. I had
developed some numerical-like types (from scratch -- no subclassing)
and I wanted to be able to write functions taking as possible arguments
these types and Python numerical types indifferently. I realised that I
could not achieve exactly what I wanted, however, by subclassing float,
int, etc I could add a few methods that would allow my generic
functions to work with either my types or the subclassed Python types.
At the same time, the subclassed numerical types could still be used as
numerical quantities (float, int,...). It seemed like a pretty elegant
solution.
If that explanation does not make sense, then I suppose other simpler
motivations could be, eg, to subclass float so that only positive
values are acceptable; to subclass complex so that only values lying
within the unit circle are acceptable, etc. That is, one might like to
define a type that can only take on physically meaningful values (mass
cannot be negative, a complex reflection coeffcieint cannot have a
magnitude greater than unity, ..)
So, my feeling is that this is worth fixing because the work done on
float, int etc, is clearly useful and it appears (to me) that the
complex case is an oversight.
Mark,
Is this still of interest?
I found the relevant changes in py3k, but I am not sure it is the behavior that gumtree is expecting. Since py3k removes coercion completely, the test case from issue 3734 would just issue:
Traceback (most recent call last):
File "test-5211.py", line 34, in <module>
print type(z + xz)
File "test-5211.py", line 5, in __coerce__
t = complex.__coerce__(self,other)
AttributeError: type object 'complex' has no attribute '__coerce__'
Where as I think gumtree wants the xcomplex case to behave as the xfloat case, e.g.
<class '__main__.xfloat'>
<class '__main__.xcomplex'>
Since coercion is getting axed in py3k, I don't think it makes sense to provide this behavior.
On the other hand, as you mentioned, the removal of coercion from complex could be backported. However, if we are going to do that then we might as well just backport the whole removal of coercion instead of just the bits from 'complexobject.c'. Bascially checkins r51431 and r58226.
Yes, I'd certainly be interested in reviewing a patch. Though the current behaviour is at most a minor wart, and since it's gone in Python 3.x the motivation to fix it isn't huge. :)
> Where as I think gumtree wants the xcomplex case to behave as
> the xfloat case, e.g. ...
Yes, that was what I was proposing. But as you point out, the new behaviour wouldn't even match the behaviour of Python 3.x, so it really wouldn't be a terribly useful change.
> However, if we are going to do that then we might as well just
> backport the whole removal of coercion.
That's not really an option: it has the potential to break existing code that uses coercion. Removing coercion in Python 2.x would require at least a PEP plus one version's worth of DeprecationWarning. And given that it currently looks as though Python 2.8 isn't likely to happen at all, that would likely be a wasted effort.
I.
> Mark
>
>.
OK. I have gone back to the beginning to refresh my memory and I see a possible point of misunderstanding. I am not sure that we are really talking about the problem that prompted my initial report (msg72169, issue 3734).
Immediately following my message, Daniel Diniz confirmed the bug and expanded on my code with an xfloat class of his own that uses __coerce__.
In fact, if I had submitted an xfloat class it would have been the following
class xfloat( float ):
def __new__(cls,*args,**kwargs):
return float.__new__(cls,*args,**kwargs)
def __add__(self,x):
return xfloat( float.__add__(self,x) )
def __radd__(self,x):
return xfloat( float.__radd__(self,x) )
My xfloat works fine in 2.6.4 and it was my wish, at the time, to write a class for xcomplex that behaved in a similar way. According to the Python manual, that should have been possible, but it wasn't.
So, I guess coercion is not really the problem.
However, there does seem to be something wrong with the complex type.
I have looked at the manual for Python 3 and see that the same rules apply for classes that emulate numeric types, namely:
."
The question I have then is will the following work in Python 3 (it doesn't in 2.6.4)?
class xcomplex( complex ):
def __new__(cls,*args,**kwargs):
return complex.__new__(cls,*args,**kwargs)
## def __coerce__(self,other):
## t = complex.__coerce__(self,other)
## try:
## return (self,xcomplex(t[1]))
## except TypeError:
## return t
def __add__(self,x):
return xcomplex( complex.__add__(self,x) )
def __radd__(self,x):
return xcomplex( complex.__radd__(self,x) )
xz = xcomplex(1+2j)
xy = float(10.0)
z = complex(10+1j)
print "would like xcomplex type each time"
print type(xz + z)
print type(xz + xy)
print type(xz + 10)
print type(xy + xz)
print type(10 + xz)
print type(z + xz)
Blair: I don't think you'll have any problems getting the behaviour you in Python 3. For example:
Python 3.2a0 (py3k:77952, Feb 4 2010, 10:56:12)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> class xcomplex(complex):
... def __add__(self, other): return xcomplex(complex(self) + other)
... __radd__ = __add__
...
>>> xz = xcomplex(1+2j)
>>> all(type(xz + y) is type(y + xz) is xcomplex for y in (1, 10.0, 10+1j, xz))
True
So I don't consider that the removal of coerce and the __coerce__ magic method is a step backward: it still allows mixed-type operations, but without coerce the rules for those operations are significantly cleaner.
The real problem case in 2.6.4 seems to be when doing <instance of complex> + <instance of xcomplex>: in this case, Python first calls complex.__coerce__ (which returns its arguments unchanged), then complex.__add__. None of the xcomplex methods even gets a look in, so there's no opportunity to force the return type to be xcomplex.
If you look at the source (see particularly the binary_op1 function in Objects/abstract.c ) you can see where this behaviour is coming from. The complex type (along with its subclasses) is classified as an 'old-style' number, while ints, longs and floats are 'new-style' (note that this has nothing to do with the distinction between old-style and new-style classes). Operations between new-style numbers use the scheme described in the documentation, but where old-style numbers are involved there's an extra coercion step. In particular, when adding a complex to an xcomplex, the rule you quoted (about the case when the right operand is an instance of a subclass of the class of the left operand) isn't applied.
It's too risky to change the general behaviour for old-style numbers: I don't think there are any old-style numbers besides complex in the Python code, but there may well be some in third party extensions. Certainly documenting it better would be an option, and making complex a new-style number for Python 2.7 seems like a reasonable thing to consider.
>.)
> Yes,.
> if complex were made 'new-style' in 2.7, then it *would* be possible to > write fairly obvious code (not using coerce or __coerce__) that operated > in the same way in both 2.7 and 3.2. So I still think it's worth
> considering.
Agreed. I have attached a patch with src, test, and doc changes.
Thanks. I'll try to find time to look at this tomorrow.
Apologies for the delay; tomorrow was a long time coming...
The patch looks great---thank you! I added a ".. versionchanged" note to the documentation, and fixed a couple of whitespace issues; apart from that I didn't change anything. Applied in r78280.
I must have been operating on autopilot; not only did I forget to put the issue number in the checkin message, but I forgot to acknowledge Meador Inge for the patch. Fixed now, with apologies to Meador.
> I added a ".. versionchanged" note to the documentation, and fixed a
> couple of whitespace issues;
Thanks. I checked out the changes you made so that I will know what to do next time :).
> Fixed now, with apologies to Meador.
No worries. Thanks for applying the patch!
r78280 didn't remove the implicit coercion for rich comparisons; that's now been done in r81606.
Hi Mark,
I thought that this had all been fixed, but it seems not.
Consider the following:
class xcomplex( complex ):
def __new__(cls,*args,**kwargs):
return complex.__new__(cls,*args,**kwargs)
def __add__(self,x):
return xcomplex( complex.__add__(self,x) )
def __radd__(self,x):
print "larg: ", type(x),"returning: ",
return xcomplex( complex.__radd__(self,x) )
class xfloat(float):
def __new__(cls,*args,**kwargs):
return float.__new__(cls,*args,**kwargs)
def __add__(self,x):
return xfloat( float.__add__(self,x) )
def __radd__(self,x):
print "larg: ", type(x),"returning: ",
return xfloat( float.__radd__(self,x) )
z = 1j
xz = xcomplex(1j)
f = 1.0
xf = xfloat(1.0)
print
print "-----------"
print "expect xcomplex:", type(z + xz)
print "expect xcomplex:", type(f + xz)
print "expect xfloat:", type(f + xf)
print "expect ???:", type(z + xf)
When this runs, the first three conversions are fine, the last is not: there
is no call to xfloat.__radd__. It seems that the builtin complex type simply
thinks it is dealing with a float. Here is the output
-----------
expect xcomplex: larg: <type 'complex'> returning: <class
'__main__.xcomplex'>
expect xcomplex: larg: <type 'float'> returning: <class
'__main__.xcomplex'>
expect xfloat: larg: <type 'float'> returning: <class '__main__.xfloat'>
expect ???: <type 'complex'>
The last line shows that no call to __radd__ occurred.
Is there anything that can be done now about this now, or is it just too
late?
Regards
Blair
At 01:13 a.m. 31/05/2010, you wrote:
Mark Dickinson <dickinsm@gmail.com> added the comment:
r78280 didn't remove the implicit coercion for rich comparisons; that's now
been done in r81606.
----------
_______________________________________
Python tracker <report@bugs.python.org>
<>
_______________________________________Python tracker <
report@bugs.python.org>
I think that's expected behaviour. Note that int vs float behaves in the same way as float vs complex:
>>> class xint(int):
....
I see your point Mark, however it does not seem to be the right way to do
this.
Are you aware that Python has formally specified this behaviour somewhere? I
could not find an explicit reference in the documentation.
The problem that has been fixed is covered in the documentation:
(3.4.8. Emulating numeric types:.)
This rule is needed so that mixed-type arithmetic operations do not revert
to the ancestor's type. However, one would expect that different numeric
types (int float complex) would all behave in a similar way. For example,
xi = xint(3)
3 + xi # is an xint(6)
3.0 + xi # is float(6)
This is the same problem as the one that has been fixed from a practical
point of view. Such behaviour is not going to be useful (IMO).
It seems to me that xint.__radd__ would need to be called if the left
operand is a subclass of any of the number types (in this case,
isinstance(left_op,numbers.Complex) == True).
Am I missing something?
Mark Dickinson <dickinsm@gmail.com> added the comment:
I think that's expected behaviour. Note that int vs float behaves in the
same way as float vs complex:
....
----------
_______________________________________
Python tracker <report@bugs.python.org>
<>
_______________________________________
I'd like to add a few more observations to the mix.
I have run the following in both 2.6.6 and in 2.7
class xfloat(float):
def __new__(cls,x):
return float.__new__(cls,x)
def __radd__(self,lhs):
print "__radd__ got: %s" % type(lhs)
if isinstance(lhs,(int,float)):
return xfloat( float(self) + lhs )
else:
return NotImplemented
xf = xfloat(9.0)
cases = dict(int=1,float=1.0,complex=1.0+1j)
for k,v in cases.items():
y = v + xf
print "%s + xfloat" % k
print type(y)
print y
In 2.7 this gives:
__radd__ got: <type 'int'>
int + xfloat
<class '__main__.xfloat'>
10.0
__radd__ got: <type 'float'>
float + xfloat
<class '__main__.xfloat'>
10.0
complex + xfloat
<type 'complex'>
(10+1j)
In 2.6.6 I get:
__radd__ got: <type 'int'>
int + xfloat
<class '__main__.xfloat'>
10.0
__radd__ got: <type 'float'>
float + xfloat
<class '__main__.xfloat'>
10.0
__radd__ got: <type 'complex'>
complex + xfloat
<type 'complex'>
(10+1j)
They are the same except for the last case.
My feeling is that the behaviour of 2.6.6 (for subclassing float) is
correct.
The behaviour of 2.6.6 is needed to enable you to implement the commutative
property of addition (ie, you expect to get the same outcome from x+y or
y+x), which I would say is a pretty fundamental requirement.
I have also tried the following
class xint(int):
def __new__(cls,x):
return int.__new__(cls,x)
def __radd__(self,lhs):
print "__radd__ got: %s" % type(lhs)
if isinstance(lhs,(int,)):
return xint( float(self) + lhs )
else:
return NotImplemented
print
print "-------------------"
xf = xint(9)
cases = dict(int=1,float=1.0,complex=1.0+1j)
for k,v in cases.items():
y = v + xf
print "%s + xint" % k
print type(y)
print y
In 2.6.6 I get
__radd__ got: <type 'int'>
int + xint
<class '__main__.xint'>
10
float + xint
<type 'float'>
10.0
__radd__ got: <type 'complex'>
complex + xint
<type 'complex'>
(10+1j)
and in 2.7
-------------------
__radd__ got: <type 'int'>
int + xint
<class '__main__.xint'>
10
float + xint
<type 'float'>
10.0
complex + xint
<type 'complex'>
(10+1j)
In my opinion, 2.6.6 was faulty in the float + xint case, for the same
reasons as above, and 2.7 is faulty in both float + xint and complex + xint.
> In my opinion, 2.6.6 was faulty in the float + xint case, for the same
> reasons as above, and 2.7 is faulty in both float + xint and complex +
> xint.
Well, I disagree: Python is behaving as designed and documented in these cases. If you want to argue that the *design* decisions are the wrong ones, then I'd suggest opening a discussion on the python-ideas mailing list, where more people are likely to get involved---this tracker isn't really the right place for that sort of discussion.
Leaving complex out of the mix for the moment, it sounds to me as though you'd like, e.g.,
<float> + <subclass of int>
to call the int subclass's __radd__ method (if it exists) before calling the float's __add__ method. Is that correct?
Or are you suggesting that float's __add__ method shouldn't accept instances of subclasses of int at all? (i.e., that float.__add__ should return NotImplemented when given an instance of xint).
In the first case, you need to come up with general semantics that would give you the behaviour you want for float and xint---e.g., when given numeric objects x and y, what general rule should Python apply to decide whether to call x.__add__ or y.__radd__ first?
In the second case, I'd argue that you're going against the whole idea of object-oriented programming; by making xint a subclass of int, you're declaring that its instances *are* 'ints' in a very real sense, so it's entirely reasonable for float's __add__ method to accept them.
In either case, note that Python 2.x is not open for behaviour changes, only for bugfixes. Since this isn't a bug (IMO), such changes could only happen in 3.x.
Please take further discussion to the python-ideas mailing list.
Just>
> <>
> _______________________________________
>
> Just to keep this discussion as clear as possible Mark, it was your
> first option that I suggest is needed.
Okay, so you want <float instance> + <xint instance> to try xint.__radd__ before float.__add__.
How do you propose this be achieved? Can you explain exactly what semantics you'd like to see? You've indicated what you want to happen for float and xint, but how should Python behave in general here?
In particular, when evaluating 'x + y' for general Python objects x and y, what rule or rules should Python use to decide whether to try x.__add__(y) or y.__radd__(x) first?
It seems you want some general mechanism that results in xint being 'preferred' to float, in the sense that xint.__radd__ and xint.__add__ will be tried in preference to float.__add__ and float.__radd__ (respectively). But it's not clear to me what criterion Python would use to determine which out of two types (neither one inheriting from the other) should be 'preferred' in this sense.
I am not really the person (I don't know how Python is implemented) to
explain how the correct behaviour should be achieved (sorry). I do
appreciate that this may seem like exceptional behaviour. Numbers are a bit
different.
However, for what its worth, I think that the 'correct behaviour' was
implemented for subclasses of float and was working in Python 2.6, but not
now in 2.7. I don't know how the earlier implementation was done, but it
does work (I have used it to develop a nice little math library). Would
there be any documentation about the implementation?
I would say that the semantics do not need to apply to arbitrary Python
objects. The problem arises for numeric type subclasses when they are mixed
with non-subclassed numeric types.
In that case:
For 'x opn y' any binary operator (like +,*, etc), if (and only if) 'x' is
a built-in numeric type (int, long, float, complex) and 'y' is a subclass of
a built-in numeric type, then y.__ropn__(x) (if it exists) should be called
before x.__opn__(y).
If that were done, then subclasses of number types can implement commutative
operator properties. Otherwise, I don't think it works properly.
I see this as 'special' behaviour required of the int, long, float and
complex classes, rather than special behaviour for all Python objects.
If both 'x' and 'y' are subclasses of built in number types the required
behaviour seems too complicated to specify. I would be inclined to do
nothing special. That is, if both 'x' and 'y' are derived from built in
numeric classes (possibly different types) then x.__opn__(y) would be
called.
And should this apply to non-number types? I think not. Numbers deserve
special treatment.
I hope this helps.
On Mon, Nov 29, 2010 at 9:48 AM, Mark Dickinson <report@bugs.python.org>wrote:
>
> Changes by Mark Dickinson <dickinsm@gmail.com>:
>
>
> Removed file:
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
> | http://bugs.python.org/issue5211 | CC-MAIN-2017-09 | refinedweb | 3,577 | 74.79 |
There are a lot of articles and blog posts out there about how to get Unity working with ASP.NET. After all of those, I found the best reference for doing this was the MSDN Unity documentation. However, there are a few things that you have to change in order to get it working with ASP.NET 4.5.1.
Here is a page on MSDN that describes what to do:
However, in order to make this work, you will need to change the following lines in Web.config (in the
To use the following:
This will load the correct HttpModule that will perform the dependency injection. This HttpModule is defined on this page:
Don’t forget to follow all of the steps in the above document. This includes a link to the code for extending HttpApplication to allow you to retrieve the container.
I am also using Unity dependency injection with a WebAPI that is also in this project. I am using the same custom resolver with this, so I have created a Module that contains a public method for setting up the dependencies:
Module UnityRegistrationModule Public Sub RegisterTypes(ByRef container As IUnityContainer) container.RegisterType(Of ITestDtoRepository, TestRepository)(New HierarchicalLifetimeManager()) End Sub End Module
I can then call this method from the two places where dependencies need to be set up. That gives me a single place for setting up dependencies. In VB.NET, this is a module, this could be done in C# as a static method on a public Utility class.
Here is my WebAPIConfig that calls this method…
Public Module WebApiConfig Public Sub Register(ByVal config As HttpConfiguration) ' Web API configuration and services ' Set up dependency injection with Unity Dim container As New UnityContainer() RegisterTypes(container) config.DependencyResolver = New UnityResolver(container) ' Web API routes config.MapHttpAttributeRoutes() config.Routes.MapHttpRoute( name:="DefaultApi", routeTemplate:="api/{controller}/{id}", defaults:=New With {.id = RouteParameter.Optional} ) End Sub End Module
I had a little trouble getting all of this to work because my custom HttpModule was not executing. This is because I did not have it configured in the Web.Config correctly. After some messing around with the namespaces and project name, I was finally able to get it working.
Here is some code that uses a dependency from within a web page (the page has a label on it named “lblTest1”):
Public Class Default Inherits Page
As you can see, once you get the configuration correct, using dependencies from within an ASP.NET page is easy. Now, I can simply add a public property to the page of the desired interface and the DI resolver will inject an instance of it for me at runtime. All I have to do is code to the interface. If the required class changes in the future, all I have to do is associate the new class with the interface in the RegisterTypes() method. This is the beauty of dependency injection. I’m completely sold on this idea.
It took me a while to fully grasp the importance of DI, but now that I have, I can’t imagine life without it. | https://cerkit.com/2015/01/26/getting-unity-dependency-injection-working-with-asp-net-4-5-1/ | CC-MAIN-2017-26 | refinedweb | 518 | 52.39 |
Re: confused about extern use
- From: "Nick Keighley" <nick_keighley_nospam@xxxxxxxxxxx>
- Date: 21 Feb 2007 01:46:48 -0800
On 21 Feb, 08:36, "Lalatendu Das" <lalat...@xxxxxxxxx> wrote:
On Feb 21, 12:22 pm, Ian Collins <ian-n...@xxxxxxxxxxx> wrote:>Lalatendu Das wrote:
Here in the above example I am confused about, what extra the coder
going to achieve by declaring it as extern in the header file a.h.
it would have helped if you'd left the example in...
So it can be used in another compilation unit.
Actually what u mean by another compilation unit.
abbreviations like "u" don't add to the clarity.
"another compilation" unit is basically another C file (actually it's
the C file and its associated includes). A complete program is
composed on one or more compilation units.
If my assumption is
not wrong do u mean If I will try to compile another C-program no need
to include this header file or what ?
you'll need the header file to included in each C file that
uses the shared variable.
I don't think so it will work in that case. And in any case if I have
to include this header file to define one variable of structure type
"abc" then why to declare it extern there.
Actually I might have some wrong notion, please explain through
example if u want to?
thanks for ur replay.
each external object may have many declaration but only one
definition. Essentially you *declare* the type wherever it is
needed (often in an H file) but *define* the storage used in
only one place (usually a C file).
a.h
-------
/* declare struct abc type */
struct abc {
unsigned long a;
unsigned long b;
};
/* declare abc_var to be type struct abc */
extern struct abc abc_var;
file.c
-------
/* pull in declarations */
#include <a.h>
/* define abc_var in one place */
struct abc abc_var;
void f()
{
/* use abc_var */
abc_var.a = 1;
}
/* another compilation unit */
file2.c
-------
/* pull in declarations */
#include <a.h>
/* DON'T re-define abc_var */
void g()
{
/* use abc_var again */
abc_var.a = 0;
}
the program consists of two compilation units that both
access abc_var. How they are joined together ("linked")
is implementaion dependent (see your compiler
documentation).
--
Nick Keighley
.
- Follow-Ups:
- Re: confused about extern use
- From: Lalatendu Das
- References:
- confused about extern use
- From: Lalatendu Das
- Re: confused about extern use
- From: Ian Collins
- Re: confused about extern use
- From: Lalatendu Das
- Prev by Date: Re: Pointers in C
- Next by Date: Re: What is the use of Function Pointers in C language?
- Previous by thread: Re: confused about extern use
- Next by thread: Re: confused about extern use
- Index(es): | http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2007-02/msg03244.html | crawl-002 | refinedweb | 446 | 64.2 |
How to Build a Webstore Using Modern Stack (Nest.js, GraphQL, Apollo) Part 2
How to Build a Webstore Using Modern Stack (Nest.js, GraphQL, Apollo) Part 2
Zebra: an open source webstore.
Join the DZone community and get the full member experience.Join For Free
This is the second article in this series. The first can be found here. Before moving on to build new functionality for the store, we still need to cover several topics that must be added in order to create a good product:
Descriptive and useful documentation.
Unit tests.
Integration tests.
That's of course not an ultimate list of missing things, but they are very crucial and I will focus on adding them in this section. As this functionality is missing in part one and has to be as part of the source code, I'll continue using the 0.1.x-maintenance branch of the main project.
You may also like: Nest.js Brings TypeScript to Node.js and Express.js, Part 1.
Documentation
For keeping the documentation a bit more fancy than the usual Markdown in Readme.md I decided to use some open source project that:
Can do the job pretty well.
Are easy to spin up.
Have minimum hassle in customization.
Support markdown files as a source for generating the content.
After some investigation, I found a nice project loved by thousands of developers (at the moment of creating this article, it has slightly more than 13,000 stars and is used across well known open source projects. It was enough to convince myself to stop my choice at.
I have quite broad experience in software development, as it comes to almost 14 years. I can say that documentation is a weak point for developers; it's always done in the last moment and mostly forced to be done. And that's not about user-facing documentation on how to use a product or service, but how to:
Work with the code.
Configure the development environment.
Align with the best practices chosen exactly in that project.
Troubleshoot known issues.
And many more related exclusively to developers. So, why is creating and maintaining solid documentation so important?
First of all — the bus factor. If only one person has certain knowledge about a key part of a project and that person gets hit by a bus, the whole team be in jeopardy, as nobody knows how that part of the project works. In the best-case scenario, another team member will spend significant time to understand how it really works. In the worst, the code has to be rewritten or duplicated.
Second — Help your team be as efficient as you are. If you create a script/command that simplifies development, it most likely will be hidden from others, especially if it resides in a huge codebase that's already difficult to comb through. Creating documentation for the snippet would allow for other team members to make use of it in the same way you do.
Third — Keep onboarding easier, especially for junior-level developers. If you don't want to spend most of your time answering the same questions again and again, it can be a good motivator for you to invest your time once in creating documentation so that you can hand it to each new developer on day one and spend your time focusing on larger issues.
Having all these reasons in mind let's configure Docusaurus.
How I did this in the project:
I ran
yarn global add docusaurus-initto install a project generation command.
I executed it
docusaurus-initand as a result created docs and website folders.
- You can then run this with
yarn start.
As you can see, it's super easy to run a default setup.
Let's have a look at the generated code. You'll find out that it's React, that's fantastic, isn't it? :) You can literally update everything that you can see on the site. As the beginning, I started by adding my content in the Docs section. For that, you need to create your *.md file in the Docs folder and have a special header inside.
--- id: frontend title: Frontend ---
id is used for linking it with layout, and
title will be visisble on side panel.
To make that item appear in sidebar, you have to update
website/sidebars.json and specify it in a proper structure, depending on where you want to see it. It will look something like this:
{ "docs": { "Getting started": [ "introduction", { "type": "subcategory", "label": "Technology stack", "ids": [ "circle-ci", "gulp", "lerna", "webpack", "nest", "typeorm", "graphql", "frontend", "react-graphql" ] } ] } }
So, as you can see, you can create subcategories too by creating an item, which has its own "type," "label," and "ids." These will contain the link to the article that will be included in that subcategory.
To control header links visible on the right side of the top panel, you need to check
siteConfig.js :
headerLinks: [ {doc: 'introduction', label: 'Docs'}, {doc: 'api', label: 'API'}, {page: 'help', label: 'Help'} ],
Note: Bear in mind that after changing the configuration which modifies the layout, you need to restart Docusaurus to apply the changes!
Unit Testing
As there is a lot of functionality to unit test, let's break it up into parts:
Nest.js Controller Unit Testing
To run tests on that level, you can find a configuration in the
src/server/package.json line.
... "test:e2e": "jest --config ./test/jest-e2e.json" ...
You can run it with
yarn test:e2e in the
server folder.
If you are using any IDE tool, you need to provide that config file to jest command to be able to run it inside of it. For example, in IntellijIDEA it will look like this
Testing Nest.js controller
import * as request from 'supertest'; import {Test} from '@nestjs/testing'; import {AppModule} from '../src/app.module'; import {INestApplication} from '@nestjs/common'; describe('Status Controller', () => { let app: INestApplication; beforeAll(async () => { const module = await Test.createTestingModule({ imports: [AppModule], }).compile(); app = module.createNestApplication(); await app.init(); }); it(`/GET ping`, () => { return request(app.getHttpServer()) .get('/ping') .expect(200) .expect('pong'); }); afterAll(async () => { await app.close(); }); });
In
beforeAll, we first initialize the entire Nest application with the modules we are interested in. Afterward, we can send HTTP calls to verify the output. You can pay attention that there are two expects (lines 21 and 22), which are testing different things.
Supertest is matching the result based on type. If it is an integer, it will check on the return HTTP status code, and if it is anything else, it will match the body.
After the test is finished, don't forget to release resources (lines 26-28).
Testing Nest.js GraphQL endpoint
Here, we'll have a look on the concept of testing any GraphQL endpoint. The initialization phase is absolutely the same as in the previous test and the body of test case is
it('Get all products', () => { return request(app.getHttpServer()) .post('/graphql') .send({ operationName: null, variables: {}, query: GET_ALL_PRODUCTS.loc.source.body }) .expect(200) .expect({data: {products: []}}); });
As you can see, we can send requests just to the
/graphql endpoint with interested for us GQL query. Line seven expects a string for the query. To not manually type a string and not change all the time test code when you change a query in source code, it's better to point to a query constant.
As in our case
GET_ALL_PRODUCTS. And as it is not a query but GQL, we need to get a query in a string representation. By calling
x.loc.source.body, it will fetch what we need.
Integration Testing
For intergration testing, I decided to use Cypress. It's a quite popular testing framework, as it supposed to be easy to use and handy in debugging. I didn't have any experience with it before, as Cypress doesn't support Firefox or Internet Explorer and at my work, we have to support it. For that we are using Protractor, as I'm already familiar with the issues present in Protractor and how to handle them.
Before we dive in, that let's have a look at what was done to configure Cypress for this project.
First that's of course dependencies on it in package.json:
{ ... "@cypress/webpack-preprocessor": "4.1.0", "cypress": "3.3.1", ... }
Webpack preprocessor is needed for Cypress to run on its own version of Node.
Second is the configuration of Cypress by creating a cypress.json file in the root of the project:
{ "baseUrl": "", "defaultCommandTimeout": 5000, "fixturesFolder": "src/client/test/e2e/cypress/fixtures", "integrationFolder": "src/client/test/e2e/cypress/specs", "pluginsFile": "src/client/test/e2e/cypress/plugins", "screenshotsFolder": "src/client/test/e2e/cypress/screenshots", "videosFolder": "src/client/test/e2e/cypress/videos", "viewportHeight": 768, "viewportWidth": 1024 }
Here, we specify the URL on which your application is running, how long to wait for a command to execute, default folders, and the resolution of the screen on which tests will be running.
To run Cypress, I've defined two commands in package.json
{ ... "scripts": { "cypress:open": "cypress open", "cypress:run": "cypress run" } ... }
You need cypress:open if you want to open the Cypress panel and run tests manually from there.
On the top right side, you can click on "Run all specs," and you will see how these tests will be running in when they're live. Any issues will be shown on the left panel.
Cypress:run will run tests in a headless mode, and you can see the results in the console. That works faster and in case of any issues, you can check logs, screenshots on the step where the test failed, and a video. Folders to them were specified in cypress.json.
If nothing is running at your local computer, and you want to spin up the backend server, frontend server, and run e2e tests against it, you can execute
gulp e2e. That command is also used in CircleCI to run the same tests.
Now about the test itself.
Scenarios to test:
Add product.
Check that it is added to the store.
Delete product.
Check that the message "No records to display" is visible on the second form. (This is an indirect check that that item was deleted.)
What we have for that in a spec:
import {Application} from '../dsl/application'; import {AllProductsDsl} from '../dsl/all-products-dsl'; describe('My First Test', function() { before(function() { Application.open(); // opening the application on the default page }); it('should be able to create a new product', function() { AllProductsDsl.addProduct('Milk', '0.95', '1 liter'); AllProductsDsl.checkProductInStore('Milk', '0.95', '1 liter'); AllProductsDsl.removeProduct(1); AllProductsDsl.checkThatNoProductsInStore(); }); });
To make spec human-readable, I moved all Cypress and DOM specific logic to a DSL class. It will help me in the future to:
Not duplicate the code.
Make scenarios more easily readable and maintainable.
Make composing more complicated logic easier by utilizing DSL in DSL.
Let's check what is inside
AllProductsDsl:
export class AllProductsDsl { static addProduct(name, price, description) { cy.get('.product-form input[name="name"]').type(name); cy.get('.product-form input[name="price"]').type(price); cy.get('.product-form input[name="description"]').type(description); cy.get('.product-form button[type="submit"]').click(); } static checkProductInStore(name, price, description) { cy.get('.product-list tbody tr td:nth-child(1)').contains(name); cy.get('.product-list tbody tr td:nth-child(2)').contains(price); cy.get('.product-list tbody tr td:nth-child(3)').contains(description); } static removeProduct(lineNumber) { cy.get(`.product-list tbody tr:nth-child(${lineNumber}) td:nth-child(4) button`).click(); } static checkThatNoProductsInStore() { cy.get('.product-list tbody tr td').contains('No records to display'); } }
To make such CSS selectors easier, I added
product-form and
product-list CSS classes to forms. I don't like lines 10-12, as their columns are checked by order, not name; any change in the structure of Product might involve changes in that test if new columns are added between existing ones. There is no easy way to add custom CSS classes for columns, as the table doesn't provide it via configuration, so I left it as-is for now.
By the way, for creating selectors, I recommend using Chrome Dev Tools where you can open the application and test your selector.
$$ denotes that you are using CSS selector search. There are other options, like $ for jquery search, of $x for XPath. Pay attention that when you have a valid selector, Dev Tools will display under your query an existing HTML element or array of HTML elements; you can click on it, and Chrome will display that element in the whole application DOM structure and highlight that element in the application.
So back to the Cypress problem. If you will run the posted test in the article, it will fail. And it will fail on the last check. Why? Because after clicking on the "delete" icon, it can't find a specified HTML element with the text "No records to display."
Though documentation regarding
cy.get and
cy.contains says that it will try to find an element by selector during the specified time and then only blow off. But even when the element is already visible, the browser still can't catch it. So, what you can do in that case?
The worst anti-pattern solution is:
cy.wait(500).Although this will create an immediate result, it's far from best-practice. First, you can delay your tests significantly with unnecessary waiting. Second, if it works on your computer, there is no guarantee that it will work on a slower model. This also depends on how your computer loaded other tasks.
So, having such delays will introduce flakiness into and slow down your tests. A better approach is to wait for some element before it will be displayed and only then check some condition.
One thing which crept into my mind is that having a chain of
ci.get(...).contains(...) has a problem that I found here. You can check it out if you're curious. The solution is to use
ci.contains(selector, '...text...'). With any tool, you need to know what you have to avoid and where there are problems.
Please leave your feedback in the comments section; I'd really appreciate hearing your opinion about the article and the code!
Related Articles
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/how-to-build-a-webstore-using-modern-stack-nestjs?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev | CC-MAIN-2020-34 | refinedweb | 2,415 | 66.03 |
Posted 30 Mar 2015
Link to this post
Posted 02 Apr 2015
Link to this post
See What's Next in App Development. Register for TelerikNEXT.
Posted 02 Jul 2015
Link to this post
Hi,
I am experiencing the same problem.
Here is how I have defined my RadAsyncUpload
<telerik:RadAsyncUpload
<telerik:RadButton
But it's failing for all except PDF? Do I need to add something?
Please see attached output after Files have been selected.
Thanks in advance
Posted 06 Jul 2015
Link to this post
Posted 22 Oct 2015
in reply to
Konstantin Dikov
Link to this post
Hello,
I am experiencing the same thing, however I cannot explain it. It is in the javascript client-side validation that it is happening.
So far, it is happening just for "JPG" vs "jpg". For instance both "txt" and "TXT" work and I only have "txt" in the allowed file extensions list. If the file's extension is "JPG" and I only have "jpg" in the allowed file extensions, the javascript function detects -1 and throws the "This file type is not supported." error.
Here is the js function that I am using:
function getErrorMessage) {
return ("This file type is not supported.");
}
else {
return ("This file exceeds the maximum allowed size of 80 KB.");
}
}
else {
return ("not correct extension.");
}
}
I am using Telerik.Web.UI v2015.2.623.45.
If I set the allowed file extensions like this:
AllowedFileExtensions="txt,jpg,JPG"
it does not throw the error.
Any suggestions as to why this is happening would be appreciated.
Thanks,
Ray Goforth
Posted 26 Oct 2015
Link to this post
if
(sender.get_allowedFileExtensions().toLowerCase().indexOf(fileExtention.toLowerCase()) == -1) {
return
(
"This file type is not supported."
);
}
Posted 26 Oct 2015
in reply to
Konstantin Dikov
Link to this post
Hi Konstantin,
Thanks very much for the code. This does resolve the issue but I see why some people are experiencing this case sensitivity when others are not. The issue only happens when the maximum size has been exceeded. For instance:
Lets say that my allowed file types is "jpg" and my max size is 80kb.
If I try to upload a file "test.jpg" that is 85kb, the error says "This file exceeds the maximum allowed size of 80 KB.", which makes sense.
If I rename the file to "test.JPG" and try to upload it, the error says "This file type is not supported."
Thanks for your help!
Ray Goforth
Posted 25 Jan
Link to this post
Is there any progress on this issue? Ray's post in October 2015 is still happening in the demos. (see attached image).
It is easy to resolve the error in aspx by just duplicating our 20+ allowed types, but this seems like a workaround the bug.
Also, is there any interest in adding a DISallowed file extensions property?
Andrew
Posted 30 | http://www.telerik.com/forums/allowedfileextensions-case-insensitive | CC-MAIN-2017-13 | refinedweb | 478 | 75.2 |
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game!
The SpriteSequence contains a list of Sprite elements and allows switching between them with only one active at a time. More...
SpriteSequence renders and controls a list of sprite animations. It allows defining multiple sprites within a single image and allows switching between the Sprite items with only one active at a time. The first Sprite child is visible and played by default.
A SpriteSequence is useful if you have a game entity that can have multiple exclusive states like walking, jumping and dying. Exclusive state means that only a single state can be active at a single time, and each state should have a different Sprite.
If you only have one state, or a single animated sprite, an AnimatedSprite is the better choice. The AnimatedSprite has a reduced set of properties compared to SpriteSequence because it does not need to switch between the sprites, but it also has finer animation control with the option to advance single frames manually.
Note: For sprites with improved performance use TexturePackerSpriteSequence instead.
SpriteSequence enhances the Qt 5 component SpriteSequence with Content Scaling and Dynamic Image Switching Support & Density Independence.
Specifically, it allows you to:
See the Felgo Script for Multi-Resolution Asset Generation how to create sprite sheets from single sprite frames and scale them down automatically.
Take for example a game entity that has multiple states: walking, jumping, whirling and dying. The game entity can only be in one state at once, and this state is switched by the game logic. Each logical state has a different Sprite animation to be played. You can now pack all the sprites into one texture to gain the maximum performance, that looks like this image:
Because multiple sprites are contained in a single image, this is called a SpriteSheet. The different Sprite animations can then be defined like this:
import QtQuick 2.0 import Felgo 3.0 GameWindow { Scene { Row { spacing: 4 SimpleButton { text: "walk" onClicked: { // NOTE: setting goalSprite only works if there is a way with the to-property to the target sprite squaby.jumpTo("walk") } } SimpleButton { text: "whirl" onClicked: squaby.jumpTo("whirl") } SimpleButton { text: "jump" onClicked: squaby.jumpTo("jump") } SimpleButton { text: "die" onClicked: squaby.jumpTo("die") } } SpriteSequence { id: squaby // NOTE: the goalSprite MUST be an empty string initially, otherwise the app crashes! qt5 issue! // so this doesnt work: goalSprite: "walk" // the first sprite in the list is the one played initially defaultSource: "squabySpritesheet.png" // this only must be set for the original Qt 5 SpriteSequence component because otherwise nothing is shown // with SpriteSequence this is set to the size of the currentSprite, as convenience compared to SpriteSequence from Qt5 // if you set a custom width or height, the current sprite is scaled to that dimension //width: 32 //height: 32 Sprite { name: "walk" frameWidth: 32 frameHeight: 32 frameCount: 4 startFrameColumn: 1 frameRate: 20 // does not exist anymore in qt5, was Felgo specific //running: true // optionally provide a name to which animation it should be changed after this is finished //to: "whirl" // if there is no target with to with goalSprite, it wouldnt work! thus a weight of 0 must be set to: {"jump":0, "walk": 1} } Sprite { name: "whirl" frameWidth: 32 frameHeight: 32 frameCount: 2 startFrameColumn: 14 // this tests if another sprite source is displayed, by not using the default spriteSheetSource property source: "squatanBlue.png" frameRate: 20 // after a while, with 10% probability, switch to die animation to: {"die":0.1, "whirl":0.9} } Sprite { name: "jump" frameWidth: 32 frameHeight: 32 frameCount: 4 startFrameColumn: 5 frameRate: 10 // returns to walking animation after a single jump animation (100% weight moving to walk) to: {"walk":1} } Sprite { name: "die" frameWidth: 32 frameHeight: 32 frameCount: 3 startFrameColumn: 10 frameRate: 10 // play die animation once and then stay at the last frame to: {"dieLastFrame":1} } Sprite { name: "dieLastFrame" startFrameColumn: 12 frameWidth: 32 frameHeight: 32 // frameCount is set to 1 by default to: {"dieLastFrame":1} } }// SpriteSequence }// Scene }// GameWindow
This example shows most of the possible interactions with SpriteSequence. The default animation that is played is the walk animation, because it is the first child. All the sprites share the same image (the same SpriteSheet) defined with the defaultSource property. For demonstration purpose, in this example the whirl animation is set to another Sprite::source which is still possible to overwrite the defaultSource with a custom source for a sprite.
To switch between sprite animations, set the goalSprite property to
whirl for example, which will start the whirling animation until another goalSprite is set. Another way of changing animations is by calling jumpTo(), which has the same effect as changing goalSprite with the added benefit that no transition to the sprite with the Sprite::to property needs to be defined.
If the Sprite::to property is set, the animation will switch to this animation.
The sprites are scaled to the size of SpriteSequence. Note that it is required to define a width & height. You can use the size of SpriteSequence from outside components like in this example:
EntityBase { SpriteSequence { id: spriteSequence // in here multiple sprites are defined } MouseArea { // anchoring is possible, as the size changes with the current animation anchors.fill: spriteSequence // onClicked: ... } }
The name of the Sprite which is currently animating.
If this property is set, the source property of all the contained images are assigned this value by default. If a Sprite child assigned the source property explicitly, this value and not the defaultSource is used.
You can use this property to avoid defining the same source property for all sprites in a SpriteSequence when they are the same. The default value is an empty url.
The name of the sprite which the animation should move to. By default, it must be set to an empty string and can be changed after loading.
Note: If you set the goalSprite property to a non-empty string, the app will stall at loading this SpriteSequence. This is a Qt 5 issue.
Changing the goalSprite to a target Sprite referenced with its Sprite::name property only has an effect if there is a valid path defined with the Sprite::to property. If the target sprite cannot be reached by transitioning through the to-path, use jumpTo() instead.
See also jumpTo() and SpriteSequence::goalSprite.
If true, interpolation will occur between sprite frames to make the animation appear smoother.
Default is
false - this is different than the normal SpriteSequence, which has set interpolate to true. In most cases interpolating between sprite frames is not wanted
as it makes the animation look blurred, which is why it is set to
false by default in Felgo..
Whether the active Sprite is animating or not. Default is
true.
The list of Sprite sprites to draw. Sprites will be scaled to the size of this item. This property is set to the Sprite children of this SpriteSequence item.
As a performance improvement, you can also set it explicitly to the sprites property of another SpriteSequence. This avoids creation of multiple Sprite objects and is thus faster.
Example for shared sprites property:
import Felgo 3.0 import QtQuick 2.0 EntityBase { SpriteSequence { id: s1 // any SpriteSequence properties Sprite { // any sprite properties } Sprite { // any sprite properties } } SpriteSequence { // any SpriteSequence properties, can be different than s1 // the sprites are shared from s1 sprites: s1.sprites } }
This property was introduced in Felgo 2.1.1.
This function causes the SpriteSequence to jump to the specified sprite immediately, intermediate sprites are not played. The sprite argument is the Sprite::name of the sprite you wish to jump to. | https://felgo.com/doc/felgo-spritesequence/ | CC-MAIN-2021-04 | refinedweb | 1,277 | 60.85 |
clamav on-access scan in 14.04
I have asked the same question in askubuntu, but did not get any reply. Maybe too specific.
I try to use clamav for on-access virus scanning for my home directory and all mounted drives. I found some rather old (2005) instructions in http://
Since dazuko was replaced by fanotify, the parameters in clamd.conf are slightly different. Here are my relevant clamd.conf entries:
ScanOnAccess true
# ClamukoScanOnOpen true
# ClamukoScanOnExec true
OnAccessIncludePath /home
OnAccessIncludePath /mnt
OnAccessIncludePath /media
VirusEvent /opt/clamdazer %v &
If I restart clamd (by "sudo invoke-rc.d clamav-daemon restart"), the log has the following:
ERROR: ScanOnAccess: fanotify_init failed: Operation not permitted
ScanOnAccess: clamd must be started by root
I tried to change the "User clamav" line in clamd.conf to "User root", but then the start of clamd will fail with "ERROR: initgroups() failed".
I found some bug reports which maybe relevant here: Ubuntu Bug #1404762 and possibly Debian bug 749027 (https:/
Unfortunately, I did not succeed using the solutions described there.
Presently, it seems, on-access scanning does not work at all.
How to get this to work?
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- Ubuntu clamav Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2015-04-09
- Last reply:
- 2015-04-09
Nobody can answer?
Can you point me to an alternative, instead?
From: Launchpad Janitor <email address hidden>
To: <email address hidden>
Sent: Wednesday, March 18, 2015 4:22 PM
Subject: Re: [Question #263109]: clamav on-access scan in 14.04
Your question #263109 on clamav in Ubuntu.
based on https:/
What are the contents of
/etc/apparmor.
(remark: contents of a Debian version http://
My /etc/apparmor.
So, I added in that file "capability setgid," and installed apparmor-utils to run aa-complain clamd, as advisewd in https:/
My actual usr.sbin.clamd:
# vim:syntax=apparmor
# Author: Jamie Strandboge <email address hidden>
# Last Modified: Sun Aug 3 09:39:03 2008
#include <tunables/global>
/usr/sbin/clamd flags=(complain) {
#include <abstractions/base>
#include <abstractions/
# LP: #433764:
capability dac_override,
# needed, when using systemd
capability setgid,
@{PROC}
owner @{PROC}
/etc/
/usr/sbin/clamd mr,
/tmp/ rw,
/tmp/** krw,
/var/lib/clamav/ r,
/var/
/var/log/clamav/* krw,
/{,var/
/{,var/
/var/
/var/
/var/
/var/
# For amavisd-new integration
/var/
# For mimedefang integration
/var/
/var/
# For use with exim
/var/
# Allow home dir to be scanned
@{HOME}/ r,
@{HOME}/** r,
# Site-specific additions and overrides. See local/README for details.
#include <local/
}
On restart the log still has
ERROR: ScanOnAccess: fanotify_init failed: Operation not permitted
ScanOnAccess: clamd must be started by root
If you try adding " capability setuid," as well, reload apparmor and restart clamav demon, do you still have the same error message?
with changes in /etc/clamav/
# User clamav
User root
restarting clamd does not have error messages anymore, clamav.log:
ScanOnAccess: Protecting directory '/home'
ScanOnAccess: Protecting directory '/mnt'
ScanOnAccess: Protecting directory '/media'
ScanOnAccess: Max file size limited to 5242880 bytes
But, I can copy an Eicar file without problems (scanning that file on-demand finds a virus).
I have expected an error message (clamdazer).
Anything in /var/log/
What options with respect to log and virus alert have you set in your clamd.conf ?
I do not have /var/log/
- type=1400 audit(142676224
- type=1400 audit(142676224
- type=1400 audit(142676224
The line in clamd.conf which should activate a virus message:
VirusEvent /opt/clamdazer %v &
I found at a restart, that on-access was not started, restarting clamd failed with "ERROR: initgroups() failed"
However, if I turn aa complaints on (sudo aa-complain clamd), restarting clamd goes without problems, on-access starts according to clamav.log.
Any way to do that automatically?
another try:
- in clamd.conf
# User clamav
User root
- in usr.sbin.clamd
capability setuid
reload apparmor (sudo invoke-rc.d apparmor reload)
aa-enforce clamd (sudo aa-enforce clamd)
restart clamd (sudo invoke-rc.d clamav-daemon restart) leads to
ERROR: Failed to change socket ownership to group clamav
Closing the main socket.
The only way to get something running seems to be with aa-complain clamd.
However, this should produce some messages in /var/log/
Also, aa-complain clamd needs to be done at every start-up.
This will run clamd, with on-access enabled.
But, I still have no indication that it finds viruses (Eicar file can be opened).
Sorry, I do not know.
Maybe you better try asking at a clamav forum.
Thank you for your efforts. I was under the impression that this would be the right place to ask.
Can you point me to a better place?
This question was expired because it remained in the 'Open' state without activity for the last 15 days.
During my attempts to solve the problem I found I had a file "usr.sbin(
So, it was the backup file in /etc/apparmor.d which caused the trouble.
Still to find out how to let clamav on-access find an Eicar file.
Now, that on-access scan seems to be working, I tried some cases:
1. No detections when I copied some Eicar files around in subfolders of /home/hartwig. However, I got a detection when I placed an Eicar file directly into that folder (mentioned in /var/log/
Any way to include subfolders?
2. The following found in an old post from 2005 was supposed to give me an error message at detection:
- in clamd.conf: VirusEvent /opt/clamdazer %v &
- /opt/clamdazer:
#!/bin/sh
#Clamdazer script by Gabor Igloi (2005) GPL
v=`tail -n 1 /var/log/
v=${v#*: }
v=${v%:*}
f=${v##*/}
zenity --title ClamDazer --warning --text '"'"$f"$'" CONTAINS A VIRUS!\n[ '"$1"$' ]\nWould you like to delete it?'
if [ $? -eq 0 ]; then
rm $v
zenity --title ClamDazer --info --text '"'"$f"$'"\nRemoved successfully!'
fi
Unfortunately, no such message comes up.
It's possible that clamav isn't properly using the fanotify API; note the FAN_MARK_ADD line here: http://
be created. The flag has no effect when marking mounts. Note
that events are not generated for children of the
I suspect this was never tested beyond one level of directories.
That would be an explanation. But, it does not solve the problem.
I have problems to accept that with such a big effort to introduce on-access scanning for clamav, a glitch like that would go unnoticed, effectively making on-access scanning pointless.
Hello
Wow, can't say more. I ve been trying for 1 hour to make that onaccess scan work, as it is an extremly basic and essential for any antivirus software, but no recursion just nullify the daemon. It's useless if it can't scan subfolders !!
This question was expired because it remained in the 'Open' state without activity for the last 15 days. | https://answers.launchpad.net/ubuntu/+source/clamav/+question/263109 | CC-MAIN-2019-39 | refinedweb | 1,132 | 56.05 |
ico petrini92 Points
i don't know what i'm doing wrong with naming it and the return value
i really dont know whats happening
// Enter your code below func coordinates(for location: String) -> (Double, Double) { return } func coordinates(for location: String) -> (Double, Double) { return } func coordinates(for location: String) -> (Double, Double) { return }
2 Answers
Brandon MahoneyiOS Development with Swift Techdegree Graduate 30,149 Points
Here is the final step. I would go back to the video on switch statements. You need to switch on location and create a case for each of the locations and then return the longitude/latitude Doubles.
func coordinates(for location: String) -> (Double, Double) { switch location{ case "Eiffel Tower": return (48.8582, 2.2945) case "Great Pyramid": return (29.9792, 31.1344) case "Sydney Opera House": return (33.8587, 151.2140) default: return (0, 0) }
}
Aaron Glaesemann11,303 Points
Hi Rico,
You only need to create one function. Then cycle through the 3 locations using a Switch statement and the locations as Strings of text.
rico petrini92 Points
rico petrini92 Points
thank you guys so much for your time | https://teamtreehouse.com/community/i-dont-know-what-im-doing-wrong-with-naming-it-and-the-return-value | CC-MAIN-2022-27 | refinedweb | 184 | 52.39 |
If you are writing an application of any size, it will most likely require a number of files to run – files which could be stored in a variety of possible locations. Furthermore, you will probably want to be able to change the location of those files when debugging and testing. You may even want to store those files somewhere other than the user's hard drive.
Any engineer worth his salt will recognise that the file locations should be stored in some kind of configuration file and the code to read the files in question should be factored out so that it isn't just scattered at points where data is read or written. In this post I'll present a way of doing just that by creating a virtual filesystem with PyFilesystem.
You'll need the most recent version of PyFilesystem from SVN to run this code.
We're going to create a virtual filesystem for a fictitious application that requires per-application and per-user resources, as well as a location for cache and log files. I'll also demonstrate how to mount files located on a web server. Here's the code:
from fs.opener import fsopendir app_fs = fsopendir('mount://fs.ini', create_dir=True)
That's, all there is to it; two lines of code (one if you don't count the import). Obviously there is quite a bit going on under the hood here, which I'll explain below, but lets see what this code gives you…
The
app_fs object is an interface to a single filesystem that contains all the file locations our application will use. For example, the path
/user/app.ini references a per-user file, whereas
/resources/logo.png references a per application file. The actual physical location of the data is irrelevant because as far as your application is concerned the paths never change. This abstraction is useful because the real path for such files varies according to the platform the code is running on; Windows, Mac and Linux all have different conventions, and if you put your files in the wrong place, your app will likely break on one platform or another.
Here's how a per-user configuration file might be opened:
from ConfigParser import ConfigParser # The 'safeopen' method works like 'open', but will return an # empty file-like object if the path does not exist with app_fs.safeopen('/user/app.ini') as ini_file: cfg = ConfigParser() cfg.readfp(ini_file) # ... do something with cfg
The files in our virtual filesystem don't even have to reside on the local filesystem. For instance,
/live/ may actually reference a location on the web, where the version of the current release and a short ‘message of the day’ is stored.
Here's how the version number and MOTD might be read:
def get_application_version(): """Get the version number of the most up to date version of the application, as a tuple of three integers""" with app_fs.safeopen('live/version.txt') as version_file: version_text = version_file.read().rstrip() if not version_text: # Empty file or unable to read return None return tuple(int(v) for v in version_text.split('.', 3)) def get_motd(): """Get a welcome message""" with app_fs.safeopen("live/motd.txt") as motd_file: return motd_file.read().rstrip()
You'll notice that even though the actual data is retrieved over HTTP (the files are located here and here), the code would be no different if the files were stored locally.
So how is all this behaviour created from a single line of code? The line
fsopendir("mount://fs.ini", create_dir=True) opens a MountFS from the information contained within an INI file (
create_dir=True will create specified directories if they don't exist). Here's an example of an INI file that could be used during development:
[fs]
user=./user
resources=./resources
logs=./logs
cache=./user/cache
live=./live
The INI file is used to construct a MountFS, where the keys in the
[fs] section are the top level directory names and the values are the real locations of the files. In above example,
/user/ maps on to a directory called
user relative to the current directory – but it could be changed to an absolute path or to a location on a server (e.g. FTP, SFTP, HTTP, DAV), or even to a directory within a zip file.
You can change the section to use in a mount opener by specifying it after a # symbol, i.e. mount://fs.ini#mysection
There are a few changes to this INI file we will need to make when our application is ready for release. User data, site data, logs and cache all have canonical locations that are derived from the name of the application (and the author on Windows). PyFilesystem contains handy openers for these special locations. For example,
appuser://examplesoft:myapp detects the appropriate per-user data location for an application called “myapp” developed by “examplesoft”. Ditto for the other per-application directories. e.g.:
[fs]
user=appuser://examplesoft:myapp
resources=appsite://examplesoft:myapp
logs=applog://examplesoft:myapp
cache=appcache://examplesoft:myapp
The
/live/ path is different in that it needs to point to a web server:
live=
Of course, you don't need to use the canonical locations. For instance, let's say you want to store all your static resources in a zip file. No problem:
resources=zip://./resources.zip
Or you want to keep your user data on a SFTP (Secure FTP) server:
user=s
Perhaps you don't want to preserve the cache across sessions, for security reasons. The
temp opener creates files in a temp directory and deletes them on close:
cache=temp://
Although, if you are really paranoid you can store the cache files in memory without ever writing them to disk:
cache=mem://
Setting /user/ to mem:// is a useful way of simulating a fresh install when debugging.
I hope that covers why you might need – or at least want – a virtual file system in your application. I've glossed over some the details and other features of PyFilesystem. If you would like more information, see my previous posts, check out the documentation or join the PyFilesystem discussion group.
Great stuff. Is it possible to take this concept and make it work on Google AppEngine?
Kevin, I haven't tried it on GAE, but assuming it runs Python2.5 it should work!
Assuming GAE runs Py2.5 that is…
I think right now GAE can only run 2.5 and not 2.6 or later. It looks like /live/ must be either a zip file or live some where on the web since GAE does not have an accessible file system.
FUSE is also well worth exploring if virtual file systems is your thing. The fusepy bindings are very easy to work with as well. From looking at the documentation it seems like pyfilesystem also can create a whole new file system (such as expose printers directories, where putting any file into it causes it to be printed, or what have you.
It would be nice to see a null/loopback file system, such as this one for fusepy: [code.google.com]
Maybe it's in the docs somewhere, but I did not see it at least.
Svend, there is FUSE support in PyFilesystem, and Dokan (Windows equivelent), [packages.python.org]
Not sure what a loopback file system would do in this context, the MounFS has that capability. A null filesystem might be useful though…
Seen django-fuse?
I read this post with great interest…
Indeed, I am looking for a way to create a file system that would allow to browse the attachments of zotero [zotero.org] collections in the finder (or explorer).
Ideally, this would consist in creating a read-only virtual system in which I would add links to the relevant files.
I am completely new to VFS programing (although I am a convinced user)… So I have several questions:
- what kind of file system would be more appropriate? FUSE?…
- is it possible to create a read-only VFS where only links to files are specified, or do I have to store files in a temp location? If yes, a minimal example would be of great help!
- how to mount a VFS in the finder/explorer?
Thank you in advance for your help, Thomas.
Julou, That should be possible. See the Guide for implementers [packages.python.org]. Once you have created a filesystem you can mount it with FUSE. See the docs [packages.python.org] and join the mailing list if you need more information.
Thanks for the quick reply.
This helped me to understand that FUSE is not a file system ;) and how I can use it to expose a FS in the finder/explorer…
Nonetheless, the second question stands open in my mind:
Julou,
would you eventually share your code? I am a beginner in VFS and try to make the same kind of thing with my blog.
The VFS has a really huge potential.
Cheers
I too am interested in making a VFS that I could use from Windows to access Zotero. It seems on the surface that this shouldn't be a difficult project, because Zotero uses SQLLite. Although of course I know that programming always gets more complex and takes longer than it seems it would. Has anyone started on this project already, and wants to collaborate? Is Python really the best language to do this in? (I'm not against Python, I just honestly don't know the advantages and disadvantages of using it as the language for this type of project.)
Sir, can you give me full code….? | http://www.willmcgugan.com/blog/tech/2011/3/20/creating-a-virtual-filesystem-with-python-and-why-you-need-one/ | CC-MAIN-2014-52 | refinedweb | 1,607 | 62.78 |
Welcome to part 13 of my Android Development Tutorial! In this tutorial I will continue building the Android Address Book App previously covered in parts 10, 11 and 12. If you haven’t seen them, check them out first.
Specifically in this tutorial I will create the Java that will power the ListView for the Main Activity and I’ll also cover the New Contact code needed to add a contact to the database. All of the heavily commented code after the video should help make everything understandable.
If you like videos like this, it helps to tell Google+ with a click here [googleplusone]
Code From the Video
MainActivity.java
package com.newthinktank.contactsapp; import java.util.ArrayList; import java.util.HashMap; import com.newthinktank.contactsapp.DBTools; import com.newthinktank.contactsapp.NewContact; import android.os.Bundle; import android.app.ListActivity; import android.content.Intent; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.ListAdapter; import android.widget.SimpleAdapter; import android.widget.TextView; import android.widget.ListView; public class MainActivity extends ListActivity { // The Intent is used to issue that an operation should // be performed Intent intent; TextView contactId; // The object that allows me to manipulate the database DBTools dbTools = new DBTools(this); // Called when the Activity is first called protected void onCreate(Bundle savedInstanceState) { // Get saved data if there is any super.onCreate(savedInstanceState); // Designate that activity_main.xml is the interface used setContentView(R.layout.activity_main); // Gets all the data from the database and stores it // in an ArrayList ArrayList<HashMap<String, String>> contactList = dbTools.getAllContacts(); // Check to make sure there are contacts to display if(contactList.size()!=0) { // Get the ListView and assign an event handler to it ListView listView = getListView(); listView.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> parent, View view,int position, long id) { // When an item is clicked get the TextView // with a matching checkId contactId = (TextView) view.findViewById(R.id.contactId); // Convert that contactId into a String String contactIdValue = contactId.getText().toString(); // Signals an intention to do something // getApplication() returns the application that owns // this activity Intent theIndent = new Intent(getApplication(),EditContact.class); // Put additional data in for EditContact to use theIndent.putExtra("contactId", contactIdValue); // Calls for EditContact startActivity(theIndent); } }); // A list adapter is used bridge between a ListView and // the ListViews data // The SimpleAdapter connects the data in an ArrayList // to the XML file // First we pass in a Context to provide information needed // about the application // The ArrayList of data is next followed by the xml resource // Then we have the names of the data in String format and // their specific resource ids ListAdapter adapter = new SimpleAdapter( MainActivity.this,contactList, R.layout.contact_entry, new String[] { "contactId","lastName", "firstName"}, new int[] {R.id.contactId, R.id.lastName, R.id.firstName}); // setListAdapter provides the Cursor for the ListView // The Cursor provides access to the database data setListAdapter(adapter); } } // When showAddContact is called with a click the Activity // NewContact is called public void showAddContact(View view) { Intent theIntent = new Intent(getApplication(), NewContact.class); startActivity(theIntent); } }
NewContact.java
package com.newthinktank.contactsapp; import java.util.HashMap; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.EditText; public class NewContact extends Activity{ // The EditText objects EditText firstName; EditText lastName; EditText phoneNumber; EditText emailAddress; EditText homeAddress; DBTools dbTools = new DBTools(this); @Override public void onCreate(Bundle savedInstanceState) { // Get saved data if there is any super.onCreate(savedInstanceState); // Designate that add_new_contact.xml is the interface used setContentView(R.layout.add_new_contact); // Initialize the EditText objects firstName = (EditText) findViewById(R.id.firstName); lastName = (EditText) findViewById(R.id.lastName); phoneNumber = (EditText) findViewById(R.id.phoneNumber); emailAddress = (EditText) findViewById(R.id.emailAddress); homeAddress = (EditText) findViewById(R.id.homeAddress); } public void addNewContact(View view) { // Will hold the HashMap of values HashMap<String, String> queryValuesMap = new HashMap<String, String>(); // Get the values from the EditText boxes queryValuesMap.put("firstName", firstName.getText().toString()); queryValuesMap.put("lastName", lastName.getText().toString()); queryValuesMap.put("phoneNumber", phoneNumber.getText().toString()); queryValuesMap.put("emailAddress", emailAddress.getText().toString()); queryValuesMap.put("homeAddress", homeAddress.getText().toString()); // Call for the HashMap to be added to the database dbTools.insertContact(queryValuesMap); // Call for MainActivity to execute this.callMainActivity(view); } public void callMainActivity(View view) { Intent theIntent = new Intent(getApplication(), MainActivity.class); startActivity(theIntent); } }
Derek, thank you so much for all of these tutorials. By far the clearest available. If I wanted to also include an ImageView in the list layout, say, a contact photo, and I was storing these pngs in a drawable folder, how would I alter the adapter?
I was thinking of having a column in the database which contained a string of the filename. Now just to connect it…
You’re very welcome 🙂 I’ll cover adding video, images, animations, etc. as soon as possible. My next Android video will be on fragments and nice interfaces. I’ll get it up as soon as possible
Best tutorial in android available in hundreds of tutorial in Youtube . I am new in android but just watching tutorials and code i am now able to start code in JUST 3 Days ..
Thank you 🙂 I’m very happy to hear that I was able to help. Many more Android tutorials are coming
Hi Derek,
What if my table has multiple datatypes? How would the hashmap be different then? FOr e.g. my table has INTEGER AND TEXT fields that I need to insert to db. Can you please advise?
Thanks!
You could store everything as a string in the hash map and then cast to the needed data types. Does that help?
I run StockQuotes no problem with my Bionic device but I cannot get the EditContact.java to run. Would you see inside my logCat and direct me to fix? Thx in advance.
09-09 10:17:54.608: E/Trace(22126): error opening trace file: No such file or directory (2)
09-09 10:17:54.694: E/AndroidRuntime(22126): FATAL EXCEPTION: main
09-09 10:17:54.694: E/AndroidRuntime(22126): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.theratutoring.contactsapp/com.theratutoring.contactsapp.MainActivity}: java.lang.RuntimeException: Your content must have a ListView whose id attribute is ‘android.R.id.list’
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2136)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2174)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread.access$700(ActivityThread.java:141)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1267)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.os.Handler.dispatchMessage(Handler.java:99)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.os.Looper.loop(Looper.java:137)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread.main(ActivityThread.java:5059)
09-09 10:17:54.694: E/AndroidRuntime(22126): at java.lang.reflect.Method.invokeNative(Native Method)
09-09 10:17:54.694: E/AndroidRuntime(22126): at java.lang.reflect.Method.invoke(Method.java:511)
09-09 10:17:54.694: E/AndroidRuntime(22126): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:792)
09-09 10:17:54.694: E/AndroidRuntime(22126): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:555)
09-09 10:17:54.694: E/AndroidRuntime(22126): at dalvik.system.NativeStart.main(Native Method)
09-09 10:17:54.694: E/AndroidRuntime(22126): Caused by: java.lang.RuntimeException: Your content must have a ListView whose id attribute is ‘android.R.id.list’
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ListActivity.onContentChanged(ListActivity.java:243)
09-09 10:17:54.694: E/AndroidRuntime(22126): at com.android.internal.policy.impl.PhoneWindow.setContentView(PhoneWindow.java:263)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.Activity.setContentView(Activity.java:1893)
09-09 10:17:54.694: E/AndroidRuntime(22126): at com.theratutoring.contactsapp.MainActivity.onCreate(MainActivity.java:44)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.Activity.performCreate(Activity.java:5058)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079)
09-09 10:17:54.694: E/AndroidRuntime(22126): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2100)
09-09 10:17:54.694: E/AndroidRuntime(22126): … 11 more
Is it merely adding the list to R and if yes, what is best approach?
It looks like the R file isn’t auto updating. Make sure you don’t have any import R imports in any of your java files in src. If you do delete them and then restart Eclipse. You may also want to check for Updates (Help -> Check for Updates)
hello there, I am also running into the same problem. My application failed to start. Please share if you got it solved. Thanks
hello there, I was also running into the same kind of error, only that my app wont even start at all. But it throws the same log errors as yours. Please kindly help if you have found the solution to it. Thanks
My opinion is that when you populate a list with data from a SQLite database, the simplest is to extend a cursor adapter, with bindView and NewView, in bindView you don’t need to care for recycling of layout inflation…
Thanks for the input 🙂
hi Banas there’s problem with setListAdapter. Could you please help me with this problem. Thanks
What errors are you getting in the LogCat panel?
Hello. I have recently started to learn android development (and programming in general) and your guide series helped me a lot.
Right now I am trying to create a endless listview (listview expands as you scroll down to avoid load time) for a large set of data. After some research, I gathered that I should use a CursorAdapter and extends commonware’s endlessadapter.
Am I on the right track? I am not expecting a long answer, but I just don’t want to spent a lot of time looking for the wrong thing 🙂
Helpfully you see this, and thank you!
Hello,
Have you seen this tutorial on Android ListView? I think that is what you are looking for.
I hope that helps 🙂
Derek
Thank you. I’ll start watching that part of the tutorial 🙂
Hello Derek,
Firstly, I want to thank for this great tutorials,
Secondly, I have problem I am copy your code of mainActivity and activity_main.xml but in the catlog the following error arise:
12-04 22:29:45.982: E/AndroidRuntime(2462): cannot open customer xml file
and when I make a debugging the run stop at this line of code
setContentView(R.layout.activity_main);
Please I need Your Help.
Hello,
I have the whole package for the Android App here. Tell me if you have any problems with it.
Hi Derek!
Just wanted to start off by saying great tutorials and am so grateful for the time you take to make them and also answer questions.
I have a question for you, I’ve googled and googled and searched and haven’t been able to find an answer.
I’m designing and app that has a stopwatch aspect to it. Now my question is regarding storing the values of the splits.
I.e. I don’t know how many splits a user needs because it varies from race to race.
I’ve thought of 2 solutions:
1. make 100 columns and hope the user doesn’t need more than 98 splits/race. (although I think this would make it too bulky esp. if they only need 1 or 2 splits).
2. make 2 tables and reference the second table splits. 1 with (|id|finaltime|splitsrefnum|) and the other with (|split|) so if it had 5 splits, it would have 5 rows. etc…
I believe #2 is foreignkeys?
You don’t need waste your time and write code, I’d just appreciate a push in the right direction so I could know what the “right way” of doing this is.
Thanks again!
Hi Miguel,
Why not save the data to a file and then you won’t need to worry about defining a maximum size. Anytime I need to store a ton of information and I want to keep it organized I use SQLite. I have tutorials on both on my site. I hope that helps 🙂
hi Derek, thank you for your great effort,
and i have a little question, i wanna make my first android app 🙂 and i have a small idea i wanna make “Taking Notes app” just to apply what i learned from you, but i have no idea about what my app should looks like, and if you have any idea what the app should looks like plz tell me and i will be grateful to you, thank you in advance and thank you for your great effort again 🙂
Thank you 🙂 I’m going to be making more Java Android videos very soon. I’ll cover everything I didn’t cover before. I’ll spend a lot of time on interfaces.
hi, thanks for ur tutorial…
but can i know how to add a dialog box and show the details confirmation whether to save or cancel before save.?
thanks
I’ll be making a new Android tutorial in a week or 2. I’ll cover everything then. | http://www.newthinktank.com/2013/06/android-development-tutorial-13/ | CC-MAIN-2021-25 | refinedweb | 2,249 | 52.46 |
Category: Quines
A quine in Fortran 90
While Fortran has been though to develop scientific applications, it can also be used for programmer amusement, here is a nice quine written in Fortran: Program QUINE character*51::code(16),li integer::lin,qn,lqn...
Quines in C
Here is a Quine in ANSI C.
And here's another (much shorter) one:
#include <stdio.h> char x[]="#include <stdio.h>%cchar x[]=%c%s%c;%cint main() {printf(x,10,34,x,34,10,10);return 0;}%c"; int main() {printf(x,10,34,x,34,10,10);return 0;}...
Quines: a new Hello World
A quine, in computing, is a program producing its complete source code as its only output without simply opening the source file of the program and printing the contents (such actions are considered cheating). This...
Latest Comments | http://www.xappsoftware.com/wordpress/category/tower-of-babel/quines/ | CC-MAIN-2018-05 | refinedweb | 140 | 53.41 |
This action might not be possible to undo. Are you sure you want to continue?
Naser Shoukat Firfire
Master of Business Administration - MBA Semester 2 MB0045 – Financial Management - 4 Credits (Book ID: B1134) Assignment Set- 1 (60 Marks)
Note: Each question carries 10 marks. Answer all the questions. Q.1 What are the 4 finance decisions taken by a finance manager. Modern approach of financial management provides a conceptual and analytical framework for financial decision making. According to this approach there are 4 major decision areas that confront the Finance Manager these are:1. 2. 3. 4. Investment Decisions Financing Decisions Dividend Decisions Financial Analysis, Planning and Control Decisions
a) Investment Decisions; Investment decisions are made by investors and investment managers. Investors commonly perform investment analysis by making use of fundamental analysis, technical analysis, screeners and gut feel. Investment decisions are often supported by decision tools. The portfolio theory is often applied to help the investor achieve a satisfactory return compared to the risk taken. b) Financing Decisions; What are the three types of financial management decisions? For each type of decision, give an example of a business transaction that would be relevant. · There are three types of financial management decisions: Capital budgeting, Capital structure, and Working capital management. · Capital budgeting is the process of planning and managing a firm's long-term investments. The key to capital budgeting is size, timing, and risk of future cash flows is the essence of capital budgeting. For example, yesterday I received a call from our manager over our Sand & Gravel Operations. He is looking into buying a new crusher (to crush stone into gravel and sand). I helped him today evaluate the return on investment for this opportunity. It quite a lot of work, but we determined that buying the new crusher would bring in 60,000 more tons of production/sales within the 1st year of owning the machine. · Capital Structure refers to the c) Dividend Decisions The Dividend Decision is a decision made by the directors of a company. It relates to the amount and timing of any cash payments made to the company's stockholders. The decision is an important one for
MB0045
Naser Shoukat Firfire. A key criticism of this theory is that it does not explain the observed dividend policies of real-world companies. Most companies pay relatively consistent dividends from one year to the next and managers tend to prefer to pay a steadily increasing dividend rather than paying a dividend that fluctuates dramatically from one year to the next. These criticisms have led to the development of other models that seek to explain the dividend decision. Dividend clienteles A particular pattern of dividend payments may suit one type of stock holder more than maximise its stock price and minimise its cost of capital by catering to a particular clientele. This model may help to explain the relatively consistent dividend policies followed by most listed companies. A key criticism of the idea of dividend clienteles is that investors do not need to rely upon the firm to provide the pattern of cash flows that they desire. An investor who would like to receive some cash from their investment always has the option of selling a portion of their holding. This argument is even more cogent in recent times, with the advent of very low-cost discount stockbrokers. It remains possible that there are taxation-based clienteles for certain types of dividend policies. Information signalling.
MB0045
Naser Shoukat Firfire
When investors have incomplete information about the firm (perhaps due to opaque accounting practices) they will look for other information that may provide a clue as to the firm's future prospects. (e.g. a full order book) are more likely to increase dividends. Investors can use this knowledge about managers' behaviour to inform. As managers tend to avoid sending a negative signal to the market about the future prospects of their firm, this also tends to lead to a dividend policy of a steady, gradually increasing payment
d) Financial Analysis, Planning and Control Decisions Introduction Management has been define d a s ―the art of askin g significant questions.‖ The same a pp s to lie financial analysis, planning and control, which should be targeted toward finding meaningful answers to these significant questions—whether or not the results are fully quantifiable. This seminar. Seminar Objectives The seminar provides delegates with the tools required to find better answers to questions such as:
• • • • • •
•
What is the exact nature and scope of the issue to be analyzed? Which specific variables, relationships, and trends are likely to be helpful in analyzing the issue? Are there p ossible w ays to o btain a quic k ―b a llpark‖ estimate of the likely result? How precise an answer is necessary in relation to the importance of the issue itself? How reliable are the available data, and how is this uncertainty likely to affect the range of results? Are the input data to be used expressed in cash flow terms—essential for economic analysis—or are they to be applied within an accounting framework to test only the financial implications of a decision? What limitations are inherent in the tools to be applied, and how will these affect the range of results obtained?
MB0045
•
Naser Shoukat Firfire
How important are qualitative judgments in the context of the issue, and what is the ranking of their significance?
Who Should Attend? This semin ar is a ‗mu st‘ for Chief Financial Officers, Financial Controllers, Finance Exec utives, Accountants, Treasurers, Corporate Planning and Business Development Executives and Sales and Marketing Professionals. Middle and junior personnel will also find this seminar highly useful in their career advancement. All participants will be able to offer their input, based on their individual experiences, and will find the seminar a forum for upgrading and enhancing their understanding of best corporate practices in the areas examined. Competencies Emphasised
• • • •
Obtaining the relevant information, given the context of the situation Choosing the most appropriate tools Knowing the strengths and limitations of the available tools Viewing all analysis, planning and control decisions in the context of their impact on shareholder value
Personal Impact Delegates will acquire the ability, when involved in decisions about business investment, operations, or financing, to choose the most appropriate tools from the wide variety of analytical techniques available to generate quantitative answers. Selecting the appropriate tools from these choices is clearly an important part of the analytical task. Yet, experience has shown again and again that first developing a proper perspective for the problem or issue is just as important as the choice of the tools themselves. Organisational Impact This seminar provides an integrated conceptual backdrop both for the financial/economic dimensions of systematic business management and for understanding the nature of financial statements. All the topics on the seminar are viewed in the context of creating shareholder value—a fundamental concept that is consolidated on the final day of the seminar. Training Methodology The training methodology combines lectures, discussions, group exercises and individual exercises. Delegates will gain both a theoretical and a practical knowledge of the topics covered. The emphasis is on the practical application of the topics and as a result delegates will return to the workplace with both the ability and the confidence to apply the techniques learned, in carrying out their duties. All delegates will receive a comprehensive set of notes to take back to the workplace, which will serve as a useful source of reference in the future. In addition, all delegates will receive a CD-ROM disk containing additional reference material and Excel templates, related to the seminar.
MB0045
Naser Shoukat Firfire
Seminar Outline Day 1 – The Challenge of Financial/Economic Decision-making
• • • • • •
The practice of financial/economic analysis The value-creating company A dynamic perspective of business What information and data to use The nature of financial statements The context of financial analysis
Day 2 – Assessment of Business Performance
• • • • • • •
Ratio analysis and performance Management‘s point of view O wners‘ p oint of view Lende point of view rs‘ Ratios as a system Integration of financial performance analysis Some special issues
Day 3 – Projection of Financial Requirements
• • • • • • • • •
Interrelationship of financial projections Operating budgets Standard costing and variance analysis Cash forecasts/budgets Sensitivity analysis Dynamics and growth of the business system Operating leverage Financial growth plans Financial modelling
Day 4 – Analysis of Investment Decisions
• • • • • • •
Applying time-adjusted measures Strategic perspective Economic value added (EVA) and net present value (NPV) Refinements of investment analysis Equivalent annual cost (EAC) Modified internal rate of return (MIRR) Dealing with risk and changing circumstances
Day 5 – Valuation and Business Performance
• • •
Managing for shareholder value Shareholder value creation in perspective Evolution of value-based methodologies
MB0045
• • • • •
Naser Shoukat Firfire
Creating value in restructuring and combinations Financial strategy in acquisitions Business valuation Business restructuring and reorganisations Management buy-outs and management buy-ins
Q.2 What are the factors that affect the financial plan of a company? The various other factors affecting financial plan are listed down in figure 2.2
MB0045
Naser Shoukat Firfire
A well established company enjoys a good market share, for its products normally commands investors‟ confidence. Such a company can tap the capital market for raising funds in competitive terms for implementing new projects to exploit the new opportunities emerging from changing business
MB0045
Naser Shoukat Firfire
Q.3 Show the relationship between required rate of return and coupon rate on the value of a bond. The fair price of a straight bond (a bond with no embedded option; see Embeddedinstruments. The two main approaches,. Present value approach Below is the formula for calculating a bond's price, which uses the basic present value (PV) formula for a given discount rate:
[1]
(This formula assumes that a coupon payment has just been made; see below for
adjustments on other dates.) F = face value iF = contractual interest rate C = F * iF = coupon payment (periodic interest payment) N = number of payments i = market interest rate, or required yield, or yield to maturity (see below) M = value at maturity, usually equals face value P = market price of bond
MB0045
Naser Shoukat Firfire
If the market price of bond is less than its face value (par value), the bond is selling at a discount. Conversely, if the market price of bond is greater than its face value, the bond is selling at a premium. Relative price approach Under this approach, the bond will be priced relative to a benchmark, usually a government security; see Relative valuation. Here, the yield to maturity on the bond is determined based on the bond's Credit rating relative to a government security with similar maturity or duration; seeCredit spread (bond). The better the quality of the bond, the smaller the spread between its required return and the YTM of the benchmark. This required return, i in the formula, is then used to discount the bond cash flows as above to obtain the price. Arbitrage-free pricing approach Under this approach, the bond price will reflect its arbitrage-free price. Here, each cash flow (coupon or face) is separately discounted at the same rate as a zero-coupon bond corresponding to the coupon date, and of equivalent credit worthiness (if possible, from the same issuer as the bond being valued, or if not, with the appropriate credit spread). Here, in general, abit. See Rational pricing: Fixed income securities. Stochastic calculus approach The following is a partial differential equation (PDE) in stochastic calculus which is satisfied by any zerocoupon bond. This methodology recognises that since future interest rates are uncertain, the discount rate referred to above is not adequately represented by a single fixed number.
[2]
MB0045
Naser Shoukat Firfire
The solution to the PDE is given in
[3]
where is the expectation with respect to risk-neutral probabilities, and R(t,T) is a random variable representing the discount rate; see also Martingale pricing. Practically, to determine the bond price, specific short rate models are employed here. However, when using these models, it is often the case that no closed form solution exists, and a lattice- or simulationbased implementation of the model in question is employed. The approaches commonly used are:
the CIR model the Black-Derman-Toy model the Hull-White model the HJM framework the Chen model.
Clean and dirty price - which relate the price of the bond to its coupons - can then be determined. Yield to Maturity The yield to maturity is the discount rate which returns the market price of the bond; it is identical to:
MB0045
Naser Shoukat Firfire
buy the bond at price P0, hold the bond until maturity, and redeem the bond at par.
Coupon yield The coupon yield is simply the coupon payment (C) as a percentage of the face value (F). Coupon yield = C / F Coupon yield is also called nominal yield. Current yield The current yield is simply the coupon payment (C) as a percentage of the (current) bond price (P). Current yield = C / P0. amt
Q.4 Discuss the implication of financial leverage for a firm. In finance, leverage is a general term for any technique to multiply gains and losses. attain leverage are borrowing money, buying fixed assets and using derivatives. are:
[2] [1]
Common ways to
Important examples
A public corporation may leverage its equity by borrowing money. The more it borrows, the less equity capital it needs, so any profits or losses are shared among a smaller base and are proportionately larger as a result.
[3]
A business entity can leverage its revenue by buying fixed assets. This will increase the proportion of fixed, as opposed to variable, costs, meaning that a change in revenue will result in a larger change in operating income.
[4][5]
MB0045
Naser Shoukat Firfireinvestments and corporate finance, and has multiple definitions in each field.
[7]
[edit]Investments Accounting leverage is total assets divided by total assets minus total liabilities.
[8]
Notional leverage is
[1]
total notional amount of assets plus total notional amount of liabilities divided by equity.
Economic
[9]
leverage is volatility of equity divided by volatility of an unlevered investment in the same assets. To understand the differences, consider the following positions, all funded with $100 of cash equity.
Buy $100 of crude oil. Assets are $100 ($100 of oil), there are no liabilities. Accounting leverage is 1 to 1. Notional amount is $100 ($100 of oil), there are no liabilities and there is $100 of equity.. Notional amount is $200,. You now have $100 cash, $100 of crude oil and owe $100 worth of gasoline. Your assets are $200, liabilities are $100 so accounting leverage is 2 to 1. You have $200 notional amount of assets plus $100 notional amount ofyear.
MB0045 Corporate finance
Naser Shoukat Firfire
Degree of Operating Leverage (DOL)= (EBIT + Fixed costs) / EBIT; Degree of Financial Leverage (DFL)= EBIT / ( EBIT - Total Interest expense ); Degree of Combined Leverage (DCL)= DOL * DFL
Accounting leverage has the same definition as in investments. operating leverage, the most common.
[11]
[10]
There are several ways to define
is:
Financial leverage is usually defined
[8]
as:
Operating leverage is an attempt to estimate the percentage change in operating income (earnings before interest and taxes or EBIT) for a one percent change in revenue.
[8]
Financial leverage tries to estimate the percentage change in net income for a one percent change in operating income.
[12][13] [14]
The product of the two is called Total leverage, a one percent change in revenue.
[15]
and estimates the percentage change in net income for
There are several variants of each of these definitions,
[8]
[16]
and the financial statements are usually
adjusted before the values are computed. Moreover, there are industry-specific conventions that differ somewhat from the treatment above. Leverage and ROE If we have to check real effect of leverage on ROE, we have to study financial leverage. Financial leverage refers to the use of debt to acquire additional assets. Financial leverage may decrease or increase return on equity in different conditions. Financial over-leveraging means incurring a huge debt by borrowing funds at a lower rate of interest and utilizing the excess funds in high risk investments in order to maximize returns. Leverage and risk The%.
[9] [18] [17]
There is an important implicit assumption in that account, however, which is that the underlying levered asset is the same as the unlevered one. If a company borrows money to modernize, or add to its product
MB0045
Naser Shoukat Firfire
line, or expand internationally, the additional diversification might more than offset the additional risk from leverage.
[9]
Or if an investor uses a fraction of his or her portfolio to margin stock index futures
[6]
and puts the rest in a money market fund, he or she might have the same volatility and expected return as an investor in an unlevered equity index fund, with a limited downside. a given asset always adds risk, it is not the case that a levered company or investment is always riskier than an unlevered one. In fact, many highly-levered hedge funds have less return volatility than unlevered bond funds, companies.
[6]
So while adding leverage to
and public utilities with lots of debt are usually less risky stocks than unlevered technology
[9]
Popular risks There is a popular prejudice against leverage rooted in the observation that people who borrow a lot of money often end up badly. But the issue here is those people are not leveraging anything, they're borrowing money for consumption.
[9]
In finance, the general practice is to borrow money to buy an asset with a higher return than the interest on the debt.
[7].
).
[9]
But the point is the fact that collapsing entities often have a lot of leverage does not mean that
leverage causes collapses. Involuntary leverage is a risk. or temporary. liquid assets.
[9] [7 ]
It means that as things get bad, leverage goes up, multiplying losses as
things continue to go down. This can lead to rapid ruin, even if the underlying asset value decline is mild The risk can be mitigated by negotiating the terms of leverage, and by leveraging only
[6]
Forced position reductions
MB0045
Naser Shoukat Firfire
maximum his counterparties will allow him, he has to sell one-third of his position to pay his debt down to $50. Now if oil goes back up to the original price, he has only $83 of equity. He lost 17 percent of his equity, even though the p (say) $10 of cash margin to enter into $200 of long oil futures contracts. Now if the price of oil declines 25%, the investor has to put up an additional $50 of margin, but she still has $40 of unencumbered cash. She may or may not wish to reduce the position, but she is not forced to do so. The point is that it is using maximum leverage that can force position reductions, not simply using leverage.
[6]
It often surprises people to learn that hedge funds running at 10 to 1 or higher notional
leverage ratios hold 80 percent or 90 percent cash. [edit]Model risk Another risk of leverage is model risk. Many investors run high levels of notional leverage but low levels of economic leverage (in fact, these are the type of strategies hedge funds are named for, although not all hedge fund pursue them). Economic leverage depends on model assumptions.
.
[2] [9]
In the case of a creditor, most of the risk is usually on the creditor's side, but there can
[9]
be risks to the borrower, such as demand repayment clauses or rights to seize collateral. negotiating terms, including mark-to-market collateral.
[6]
If a derivative
counterparty fails, unrealized gains on the contract may be jeopardized. These risks can be mitigated by
Q.5 The cash flows associated with a project are given below: Year 0 1 2 3 4 5 Cash flow (100,000) 25000 40000 50000 40000 30000
MB0045 Calculate the a) Payback period.
Payback Period (PB) calculation give us an idea on how long it will take for a project to recover the initial investment. If Y is the year before the full recovery of the investment I, U is the unrecovered cost at the start of last year and CFi is the CF of the year Y+1 then: PB = Y + U/CFi The initial investment is $100,000 and you will recover it during the fourth year, then: Y=5 and U = 100,000- (25000 + 40000 + 50000+40000+30000) = 85,000 PB (Payback period) = 5 + 85,000/85,000 = 6 years The Payback period is 6 complete years.
Naser Shoukat Firfire
b) Benefit cost ratio for 10% cost of capital
- NPV: Present Value (PV): CF1 CF2 CF5 PV = --------- + ---------- + ... + ---------(1 + r)^1 (1 + r)^2 (1 + r)^5 Where r is the required return (13% or 0.13 in this case) Net Present Value (NPV): NPV = PV - I where I = Total Initial Investment
First calculate the PV of the cash flows: PV = 25000/1.13 + 40000 /(1.13)^2 + 50000 /(1.13)^3 + 40000 /(1.13)^4 + 30000 /(1.13)^5 = 22123.89 + 17699.11 + 14749 + 8849.55 + 5309.73 = = 68731.28 NPV = PV - I = 68731.28- 100,000 = - 31268.72 (NEGATIVE!!) The net present value of this project is - 31268.72 . Since it is negative, you will lose money with this project.
MB0045
Naser Shoukat Firfire
Q6. A company’s earnings and dividends are growing at the rate of 18% pa. The growth rate is expected to continue for 4 years. After 4 years, from year 5 onwards, the growth rate will be 6% forever. If the dividend per share last year was Rs. 2 and the investors required rate of return is 10% pa, what is the intrinsic price per share or the worth of one share. Answer For 4 Years = 18 % Pa Dividend Per Share = Rs 2 After 4 Year (Means 8 Years) = 6 % Rate of Return = 10 % PA
D1 = Dividend for next period r = Cost of Capital or the capitalization rate of the company E = Earning on equity g = The growth rate of the company.
E1 = 2
P0 / E1 = 1/r [ 1+ (PVGO/(E1/r))]. = 1 / 18 [ 1 + 18/2/18] = 0.07 X 361 = 25.27 P0 / E1 = 1/r [ 1+ (PVGO/(E1/r))]. = 1 / 6 [ 1 + 6/2/6] = 0.16 X 19 = 19.16 P0 / E1 = 1/r [ 1+ (PVGO/(E1/r))]. = 1 / 10 [ 1 + 10/2/10] = .1 X 51 = 5.1
Intrinsic price per share = 3.75
MB0045
Naser Shoukat Firfire
Master of Business Administration - MBA Semester 2 MB0045 – Financial Management - 4 Credits (Book ID: B1134) Assignment Set- 2 (60 Marks)
Note: Each question carries 10 Marks. Answer all the questions.
Q.1 Discuss the objective of profit maximization vs wealth maximization. The financial management, Finance Management or
MB0045
Naser Shoukat Firfire
Giving priority to value creation, managers have now shifted from traditional approach to modern approach of financial management that focuses on wealth maximization. This leads to better and true evaluation under consideration. For e.g. to measure the worth of a project, criteria like: ― present value of its cash inflow – presen t value o f cash outflows‖ (net present value) is taken. This approach considers cash flows rather than profits into consideration and also use discounting technique to find le ad to maximization of shareho lder‘s w ea W hereas, a ma n a lth. ger migh t focu son taking such decisions that can bring quick result, so that he/she can get credit for good performance. However, in course of fulfilling the same, a manager might opt for risky decisions which can put on stake th e owner‘s objectives. Hence, a manager should align his/her objective to broad objective of organization and achieve a tradeoff between risk and return while making decision; keeping in mind the ultimate goal of financial management i.e. to maximize the wealth of its current shareholdershe.
MB0045
Naser Shoukat Firfire.of the shareholders and to maximize the net present worth. Wealth is equal to the the difference between gross presentworth of some decision or course of action and theinvestment required to achieve the expected benefits. Gross present worth involves the capitalised value of the expected benefits.This value is discounted a some rate,thisrate depends on the certainty or uncertainty factor of the expected benefits. The Wealth Maximization approach is concerned with theamount of cash flow generated by a course of action rather than the profits. Any course of action that has net present worth above zero or in other words,creates wealth should be selected. Q.2 Explain the Net operating approach to capital structure. The second approach as propounded by David Durand the
MB0045
Naser Shoukat Firfire. Net operating Income (NOI)) = (Earnings before interest and tax)/(Overall cost of capital) The value of equity can be determined by the following equation Value of equity (S) = V (market value of firm) – D (Market value of debt) and the cost of equity = (Earnings after interest and before tax)/(market value of firm (V)- Market value of debt (D)) The Net Operating Income Approach is based on the following assumptions: (i) The overall cost of capital remains constant for all degree of debt equity mix. (ii) The market capitalizes the value of firm as a whole. Thus the split between debt and equity is not important.
MB0045
Naser Shoukat Firfire
NOI approach since overall cost of capital is constant, therefore there is no optimal capital structure rather every capital structure is as good as any other and so every capital structure is optimal one.
Q.3 What do you understand by operating cycle. Operating Cycle Definition The Operating cycle definition, also known as cash operating cycle or cash conversion cycle or asset conversion cycle, establishes how many days it takes for a company to turn purchases of inventory into cash receipts from its eventual sale. Operating cycle has three components of payable turnover days, inventory turnover days and accounts receivable turnover days. These come together to form the complete measurement operating cycle days. The operating cycle formula and operating cycle analysis stems logically from these.
Operating Cycle Formula Operating cycle calculations are completed simply with this formula: Operating cycle = DIO + DSO - DPO Where DIO represents days inventory outstanding DSO represents day sales outstanding DPO represents days payable outstanding Operating Cycle Calculation Calculating operating cycle may seem daunting but results in extremely valuable information. DIO = (Average inventories / cost of sales) * 365 DSO = (Average accounts receivables / net sales) * 365
MB0045
Naser Shoukat Firfire
DPO = (Average accounts payables / cost sales) * 365 Example: What is the operating cycle of a business? A company has 90 days in days inventory outstanding, 60 days in days sales outstanding and 70 in days payable outstanding. Operating cycle = 90 + 60 - 70 = 80 This means that on average it takes 80 days for a company to turn purchasing inventories into cash sales. In regards to accounting, operating cycles are essential to maintaining levels of cash necessary to survive. Maintaining a beneficial net operating cycle ratio is a life or death matter. Operating Cycle Applications Th e operatin g cycle co n c indicates a comp an true liquidity. By tra ckin g th e historical re cord of ept y‘s the operating cycle of a company and comparing it to its peer groups in the same industry, it gives investors investment quality of a company. A short company operating cycle is preferable since a company realizes its profits quickly and allows a company to quickly acquire cash that can be used for reinvestment. A long business operating cycle means it takes longer time for a company to turn purchases into cash through sales. In general, the shorter the cycle, the better a company is since less time capital is tied up in the business process.
Q.4 What is the implication of operating leverage for a firm. Operating leverage is the extent to which a firm uses fixed costs in producing its goods or offering its services. Fixed costs include advertising expenses, administrative costs, equipment and technology, depreciation, and taxes, but not interest on debt, which is part of financial leverage. By
MB0045
Naser Shoukat Firfire
Q.5 A company is considering a capital project with the following information: The cost of the project is Rs.200 million, which consists of Rs. 150 million in plant a machinery and Rs.50 million on net working capital. The entire outlay will be incurred in the beginning. The life of the project is expected to be 5 years. At the end of 5 years, the fixed assets will fetch a net salvage value of Rs. 48 million ad the net working capital will be liquidated at par. The project will increase
MB0045
Naser Shoukat Firfire. Answer CP = 200 million CP (machinery) = Rs 50 T = 5 Years Value = 48 million. Per Year = 250 million Increase in cost = 100 million per Yr Tax Rate = 30 % Rate of cost of capital = 10 %
Year T=0 T=1 T=2 T=3 T=4 T=5 Cash flow 2500000000/(1+0.10) 2500000000/(1+0.10)1 2500000000/(1+0.10)2 2500000000/(1+0.10)3 2500000000/(1+0.10)1 2500000000/(1+0.10)1 Present value 152,000000 52,72700000 20,6610000 18,7830000 17,0750000 15,52300000 NPV = 15,523000 X 30/ 100 = 4569000000/100 = 45690000 = 456900 X 10 /100 = 456900.
Henc NPV = 15,523000 + 4569000 = 20092000
Q.6 Given the following information, what will be the price per share using the Walter model. Earnings per share Rs. 40 Rate of return on investments 18% Rate of return required by shareholders 12% Payout ratio being 40%, 50%, or 60%.
P is the market price per share M is a multiplier D is the dividend per share E is the earning per share = 40
Walter model: according to this model founded by James Walter, the dividend policy of a company has an impact on the share valuation. Quantitatively P=(D+(E-D) r/k)/k Where: P, D, E have the same connotations as above and r is the internal rate of return on the investments and k is the cost of capital D/P ratio = 50% When EPS = 40 and D/P ratio is 40%, D = 10 x 40% = $4 4 + [0.08 / 0.10] [10 - 4] P = 0.10 = 85
D/P ratio = 25% When EPS = $40 and D/P ratio is 50%, D = 10 x 50% = $2.5 2.5 + [0.08 / 0.10] [10 2.5] 0.10
P =
= 75
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/document/98170613/mb0045 | CC-MAIN-2017-04 | refinedweb | 5,157 | 51.18 |
Fooling with XULF.?" I decided that a shortcut interface to the XMLhack site would be a reasonable-sized learning project to undertake.
Working from back to front, I'll show the finished product and
then demonstrate how I got there. The image on the right shows a
list of articles with icons, which when double-clicked, cause the
browser to navigate to the complete article on the web site.
The first step was to learn how to construct the user interface. Of course, I could have done this in straight HTML rather than using XUL, but I wanted to use the experience to learn about the Mozilla browser, and also for the user to feel that the component was more tightly integrated with the sidebar. Fortunately, XUL is one of the better-documented parts of the Mozilla browser, and I used Neil Deakin's XUL Tutorial as my starting point. This is an excellent document, although still a work in progress.
It is beyond the scope of this article to present a XUL
tutorial, but I will mention the key concepts that are
required. User interface descriptions in XUL usually span four
files: the layout (
.xul) file, the CSS style sheet, scripts, and
localization information. For my experiments, I simply used the
default styles and omitted localization, so I only had to
consider two files: the
.xul file itself, and a file for my
JavaScript.
Working through the XUL tutorial, I grasped the fundamentals of
creating an interface in XUL. Each
.xul file
describes a window that contains widgets. The minimal XUL
file looks like this:
<?xml version="1.0"?> <?xml-stylesheet </window>
Mozilla makes extensive use of XML Namespaces, hence the
somewhat baroque URI denoting that
window and its
contents have the XUL default namespace. The browser can
interpret elements from other vocabularies (such as HTML and
RDF) within XUL documents. These can be included in XUL descriptions by using
namespace-qualified names.
Another thing of note to point out is the
chrome:// URL scheme. This scheme allows Mozilla to
find localized resources for its user interface. If you look in
your Mozilla install directory, you will find the
chrome directory, beneath which are the XUL
descriptions for the user interface. Mozilla interprets the
chrome URL to find the actual file it wants,
adjusting for such things as the current locale.
So, to get going, I added some simple widgets to my skeleton
user interface, following the
"Find Files" example from the XUL tutorial. I placed the
XUL file on my web server, pointed Mozilla at and got back the text
contents of the file in the browser, rather than a cool user
interface. Oh dear.
It turns out that a little bit of configuration to your web
server is required in order to get Mozilla to recognise XUL
files. In particular, XUL currently uses the Internet Media Type
text/xul. I simply added this line to my Apache's
mime.types file, which fixed the problem:
text/xul xul
That issue resolved, I continued with the tutorial and had soon created a pleasing user interface. Not everything worked quite as smoothly in the browser as it did in the tutorial though! Some of the widgets aren't properly implemented yet in Mozilla, and it can vary from platform to platform. For instance, I was able to get context pop-up menus ("right-click" menus) working on Windows, but not on Linux.
Arrangement of widgets in a XUL interface is done primarily by use of boxes. Readers au fait with the TeX typesetting system (or many other layout systems) will already be familiar with the box method. Boxes can have their contents stacked either horizontally or vertically. Layouts can be constructed by a combination of these boxes.
Each UI element has a flex attribute, which governs how
it will stretch to fill the space available. There is also a
spring element, which is an empty box you can use
to put space in between other UI elements. This example shows
how two pushbutton widgets and a spacer would be written:
<titledbutton class="push" value="OK" style="min-width: 100px;" flex="1"/> <spring flex="1"/> <titledbutton class="push" value="Cancel" style="min-width: 100px;" flex="1"/>
As the window containing these elements was expanded, each
button and the spring would all enlarge to consume the new space
in equal measure, owing to their having identical
flex values. Note the use of CSS styles to specify
minimum widget dimensions.
It's much more interesting to play with this for yourself than to have me recount all the ins and outs, so I recommend pursuing the exercises in the XUL Tutorial.
A collection of widgets is all very well, but not very useful
without something actually happening when they are
activated. This is achieved through the use of JavaScript. In a
similar way to HTML, a XUL document is exposed to the browser
via a DOM, which can be scripted. Event handlers such as
onClick can be specified as attributes of interface
elements, much as in HTML.
Although script can be included inline in XUL with the
<html:script> element, the documentation
advises against it. Instead, script should be stored in a
separate file, and referenced like this:
<html:script
It is relatively simple to get started with writing scripts. Here's a quick example. Imagine a window with this button inside:
<titledbutton class="push" value="Hello" onclick="sayHello();"/>
... and this JavaScript:
function sayHello() { alert("Hello, World!"); }
More complex operations can be achieved through the use of the
DOM. XUL provides two extension functions to the XML DOM:
getElementById and
getElementsByAttribute (see Introduction to XUL on Mozilla.org). Extensive use of the
ID attribute
on user interface elements is made within interface
descriptions, in order to facilitate easy scripting and
modification of the interface. The interface's DOM can be
modified by scripts, enabling such things as the dynamic
alteration of style and content of widgets.:
Details on participating in the Mozilla project can be found at Mozilla.org.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc. | http://www.xml.com/lpt/a/399 | crawl-001 | refinedweb | 1,016 | 62.27 |
#include <sys/ddi.h> #include <sys/sunddi.h> int devmap_setup(dev_t dev, offset_t off, ddi_as_handle_t as, caddr_t *addrp, size_tlen, uint_t prot, uint_t maxprot, uint_t flags, cred_t *cred);
int ddi_devmap_segmap(dev_t dev, off_t off, ddi_as_handle_t as, caddr_t *addrp, off_tlen, uint_t prot, uint_t maxprot, uint_t flags, cred_t *cred);
Solaris DDI specific (Solaris DDI).
Device whose memory is to be mapped.
User offset within the logical device memory at which the mapping begins.
An opaque data structure that describes the address space into which the device memory should be mapped.
Pointer to the starting address in the address space into which the device memory should be mapped.
Length (in bytes) of the memory to be mapped.
A bit field that specifies the protections. Some possible settings combinations. The following flags can be specified:
Changes are private.
Changes should be shared.
The user specified an address in *addrp rather than letting the system choose an address.
Pointer to the user credential structure.
devmap_setup() and ddi_devmap_segmap() allow device drivers to use the devmap framework to set up user mappings to device memory. The devmap framework provides several advantages over the default device mapping framework that is used by ddi_segmap(9F) or ddi_segmap_setup(9F). Device drivers should use the devmap framework, if the driver wants to:
use an optimal MMU pagesize to minimize address translations,
conserve kernel resources,
receive callbacks to manage events on the mapping,
export kernel memory to applications,
set up device contexts for the user mapping if the device requires context switching,
assign device access attributes to the user mapping, or
change the maximum protection for the mapping.
devmap_setup() must be called in the segmap(9E) entry point to establish the mapping for the application. ddi_devmap_segmap() can be called in, or be used as, the segmap(9E) entry point. The differences between devmap_setup() and ddi_devmap_segmap() are in the data type used for off and len.
When setting up the mapping, devmap_setup() and ddi_devmap_segmap() call the devmap(9E) entry point to validate the range to be mapped. The devmap(9E) entry point also translates the logical offset (as seen by the application) to the corresponding physical offset within the device address space. If the driver does not provide its own devmap(9E) entry point, EINVAL will be returned to the mmap(2) system call.
Successful completion.
An error occurred. The return value of devmap_setup() and ddi_devmap_segmap() should be used directly in the segmap(9E) entry point.
devmap_setup() and ddi_devmap_segmap() can be called from user or kernel context only.
mmap(2), devmap(9E), segmap(9E), ddi_segmap(9F), ddi_segmap_setup(9F), cb_ops(9S)
Writing Device Drivers for Oracle Solaris 11.2 | https://docs.oracle.com/cd/E36784_01/html/E36886/ddi-devmap-segmap-9f.html | CC-MAIN-2018-09 | refinedweb | 434 | 55.44 |
Search the Community
Showing results for tags 'flash'.:TextField):void { TweenMax.to(tf, 3, {x:-tf.width, delay:1, onComplete:goForwardTween, onCompleteParams:[tf] }); } Hi guys haven't been here in awhile here goes ..... ...basically what I have is a TextField inside a MovieClip, and if the text inside the TextField exceeds a certain size it moves back & forth so that the far right of the MovieClip moves to the far left of the mask and far left of the MovieClip moves to the far right of the mask. But for some reason I cant seem to get it to move it populates the TextField ok .. but wont TweenMax.to functions even if the text is larger than the mask. hope someone can help... Steven
How to fade alpha in across text
jdfinch3 posted a topic in GSAP (Flash)I'm trying to create a line of text that slowly fades into view from left to right (or top to bottom, whatever). I see the very simple process of fading the entire text field in or out, but is there a way to fade vertically or horizontally? Thanks!
How to return an array element as a variable in TweenLite?
jdfinch3 posted a topic in GSAP (Flash)The code is below. When Tweenlite is called it treats "tileList[count1]" as a string instead of a variable name. However the trace seems to return what I would expect (tile1, tile2, tile3...). If I remove "tileList[count1]" from the tween and replace it with a direct call to the MovieClip (tile1, tile2, etc) the code works perfectly... Things I've tried: Using a vector instead of an array. Setting tileList[count1] to a public and local variable and then calling that variable. Removing the randomSort. Removing count1 and calling the array element directly (ie, tileList[5]). public class wtpMain extends MovieClip { public var tileList:Array = new Array(tile1,tile2,tile3,tile4,tile5,tile6,tile7,tile8,tile9,tile10,tile11,tile12,tile13,tile14,tile15 ,tile16); public var count1:int = 0; public function wtpMain() { nextButton.buttonMode = true; nextDis.mouseEnabled=false; nextButton.addEventListener(MouseEvent.CLICK, nextButtonClickh); tileList.sort(randomSort); } public function nextButtonClickh(event:MouseEvent):void { nextButtonClick(); } public function nextButtonClick():void{ TweenLite.to(tileList[count1], 5, {y:700, alpha:0}); trace(tileList[count1]); count1++; } public function randomSort(objA:Object, objB:Object):int{ return Math.round(Math.random() * 2) - 1; } } }
Load and Play MP3s Sequentially with a Pause Function?
dmca66 posted a topic in Loading (Flash)I am hoping Greensock will be the answer to a problem I have. My client would like to have MP3 narration loaded sequentially (one for each paragraph in the script). I am able to do this with AS3 (array) but I am unable to successfully code a pause function. Pausing always stores a position from the 1st mp3 and not the one currently playing. I have poked around Greensock a bit but I cannot find tutorials that suit my needs. Will Loadermax work for this situation? If so, can someone point me in the right direction? thx
moving object along with the mouse
mkg14 posted a topic in GSAP (Flash)Hello, Is there a way for the viewer to control an object with his mouse? First I want the object to enter the stage and stop. After the object stops I would like the viewer to be able to move it up and down on the y axis using his mouse. Any help would be appreciated!
Need Help With SplitTextField
Pipsisewah posted a topic in GSAP (Flash)Hello, We purchased the Club membership and received the SplitTextField which honestly is working really really well. I'm having a little problem though. I am attempting to create a sub-timeline using TimelineMax which animates a SplitTextField. Once I create the timeline in my function, I return the TimelineMax to the parent class which adds it to the main timeline (also a timelineMax). I have been doing this for all the animations I need and its working just fine. I can sequence 10 different complex animations without a problem! Here is my issue (and I know it simple, just need a push in the right direction), once the Split Text Field is done, I need it off the screen so the other animations will render. I tried the onComplete and the onCompleteRender functions and told them to call a cleanup function (which does work), but it looks like it calls the function after the first character of my staggerFrom completes animation. Basically, what I am seeing is the first character stop, then the SplitTextField is destroyed and the other animations render. If I do not destroy the SplitTextField, nothing happens after the first animation. Below is my general code: // PARENT CLASS ////////////////////////////////////////////////////////////////////////////// var t:TimelineMax = new TimelineMax(); // Creates an flash.text.textField with general properties including embedded fonts. createTextField("MAKE SOME NOISE!!!!"); // Add the animation I need which uses the Split Text Field t.add(AdvancedTextEffects.ScaleandSpinText(text,4)); //Add more animations which do not use the Split Text Field t.add(SimpleTextEffects.Flash2(text, 1)); t.add(SimpleTextEffects.ScaleOut(text, 1)); t.add(SimpleTextEffects.Jiggle4(text, 1)); t.add(SimpleTextEffects.SpinClockwise(text, 1)); //////////////////////////////////////////////////////////////////////////////////////////////////////////////// // ADVANCED TEXT EFFECTS CLASS ///////////////////////////////////////////////////////////////// public static function ScaleandSpinText(target:TextField, time:int):TimelineMax{ // Create Sub-Timeline var t:TimelineMax = new TimelineMax(); // Activate Plugins TweenPlugin.activate([TransformAroundCenterPlugin, AutoAlphaPlugin, OnCompleteRenderPlugin]); // Create the SplitTextField var mySTF:SplitTextField = new SplitTextField(target); // Add the Tween we want t.add(TweenMax.staggerFrom(mySTF.textFields, 3, {transformAroundCenter:{scale:5, rotation:360}, alpha:0, autoAlpha:1, ease:Power4.ease-Out, onCompleteRender:cleanup, onCompleteRenderParams:[mySTF]},0.2)); // Return the Timeline return t; } private static function cleanup(mySTF:SplitTextField):void{ mySTF.destroy(); } Please let me know what you think and how to get around this. Thank you! - Steven Lopes
Multiple Errors in Flash Builder 4.7
shane7 posted a topic in GSAP (Flash)Hi there, I upgraded my Flash Builder this morning, and all of a sudden my TweenLite.as is giving off warnings that have NEVER popped up in my life. It says: Assignment within conditional. Did you mean == instead of =? I've tried upgrading to GSAP v12, But instead of 1 error warning, it gave off 10, in different Greensock files... so I went back to 11. Any solutions? Really need to get this project finished Regards Shane! | https://staging.greensock.com/search/?tags=flash&updated_after=any&sortby=relevancy&page=2 | CC-MAIN-2021-43 | refinedweb | 1,047 | 57.77 |
iStringSetBase< Tag > Struct Template Reference
[Utilities]
The string set is a collection of unique strings. More...
#include <iutil/strset.h>
Detailed Description
template<typename Tag>
struct iStringSetBase< Tag >.
- csStringSet
Definition at line 114 of file strset.h.
Member Function Documentation
Remove all stored strings.
- Deprecated:
- Deprecated in 1.3. Use Empty() instead.
Check if the set contains a string with a particular ID.
- Remarks:
- This is rigidly equivalent to
return Request(id) != NULL, but more idiomatic.
Check if the set contains a particular string.
Remove a string with the specified ID.
- Returns:
- True if a matching string was in the set; else false.
Remove specified string.
- Returns:
- True if a matching string was in thet set; else false.
Remove all stored strings.
When new strings are registered again, new ID values will be used; the old ID's will not be re-used.
Get the number of elements in the hash. struct was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/new0/structiStringSetBase.html | CC-MAIN-2014-10 | refinedweb | 169 | 71.51 |
Brief overview.
Hardware specification can be found on the package.
Hardware features
-
A more detailed specification can be found in official website of M5Stack.
For Windows:
There are 3 methods to burn firmware for Windows OS:
Using EasyLoader tool
- Select proper COM port (in my case it was COM3)
- Press Burn
- After completion of updating firmware, you will see that it was Successfully burned.
Using Kflash GUI
- Open downloaded firmware using Open File button
- Select board as M5StickV
- Click Download
Using command prompt
- Check the COM port for your M5StickV at the Device Manager of Windows.
- On Windows, you need to have Python3 with pip3 installed and the pyserial package as well. You can download the latest version of Python from the official website.
- Open command prompt as administrator and type the following command
pip3 install kflash
After finishing installation, run the following command
kflash.exe -p COM3 M5StickV_Firmware_1022_beta.kfpkg
It will print:
[32m[1m[INFO][0m COM Port Selected Manually: COM3 [0m [32m[1m[INFO][0m Default baudrate is 115200 , later it may be changed to the value you set. [0m [32m[1m[INFO][0m Trying to Enter the ISP Mode... [0m ._ [32m[1m[INFO][0m Automatically detected goE/kd233 [0m [32m[1m[INFO][0m Greeting Message Detected, Start Downloading ISP [0m Downloading ISP: |=====================================================================================| 100.0% 8kiB/s [32m[1m[INFO][0m Booting From 0x80000000 [0m [32m[1m[INFO][0m Wait For 0.1 second for ISP to Boot [0m [32m[1m[INFO][0m Boot to Flashmode Successfully [0m [32m[1m[INFO][0m Selected Flash: On-Board [0m [32m[1m[INFO][0m Initialization flash Successfully [0m [32m[1m[INFO][0m Extracting KFPKG ... [0m [32m[1m[INFO][0m Writing maixpy.bin into 0x00000000 [0m Programming BIN: |=====================================================================================| 100.0% 9kiB/s [32m[1m[INFO][0m Writing m5stickv_resources.img into 0x00d00000 [0m Programming BIN: |=====================================================================================| 100.0% 10kiB/s [32m[1m[INFO][0m Writing facedetect.kmodel into 0x00300000 [0m Programming BIN: |=====================================================================================| 100.0% 9kiB/s [32m[1m[INFO][0m Rebooting... [0m
sudo pip3 install kflash
Using Kflash burn firmware image
sudo kflash -b 1500000 -B goE M5StickV_Firmware_1022_beta.kfpkg
You will see the similar following output as above.
For MacOS:
Open terminal and run the following command
sudo pip3 install kflash
If you receive an error after installation, try the following command:
sudo python -m pip install kflash sudo python3 -m pip install kflash sudo pip install kflash sudo pip2 install kflash
Enter the following command
sudo kflash -b 1500000 -B goE M5StickV_Firmware_1022_beta.kfpkg
You must see the same output as above on Linux section.
Booting the M5stickV for the first time
For MacOS and Linux:
- Open terminal
- Install screen utility for MacOS and Linux. It can be installed by the following command:
sudo apt-get install screen
Using screen utility connect to M5stickV via serial communication
sudo screen /dev/ttyUSB0 115200
It will print:
[MAIXPY]Pll0:freq:832000000 [MAIXPY]Pll1:freq:398666666 [MAIXPY]Pll2:freq:45066666 [MAIXPY]cpu:freq:416000000 [MAIXPY]kpu:freq:398666666 [MAIXPY]Flash:0xc8:0x17 open second core... gc heap=0x80215060-0x80295060 [MaixPy] init end __ __ _____ __ __ _____ __ __ | \/ | /\ |_ _| \ \ / / | __ \ \ \ / / | \ / | / \ | | \ V / | |__) | \ \_/ / | |\/| | / /\ \ | | > < | ___/ \ / | | | | / ____ \ _| |_ / . \ | | | | |_| |_| /_/ \_\ |_____| /_/ \_\ |_| |_| M5StickV by M5Stack : M5StickV Wiki : Co-op by Sipeed : [MAIXPY]: result = 0 [MAIXPY]: numchannels = 1 [MAIXPY]: samplerate = 44100 [MAIXPY]: byterate = 88200 [MAIXPY]: blockalign = 2 [MAIXPY]: bitspersample = 16 [MAIXPY]: datasize = 158760 init i2c2 [MAIXPY]: find ov7740
When connected, it will automatically enter Maixpy UI. Now the device is running the default program code, you can terminate it by Ctrl+C.
Print Hello World example on display of M5StickV
Enter the following commands in your terminal of MacOS and Linux. For Windows use PuTTY.
import lcd lcd.init() lcd.draw_string(100, 100, "hello world", lcd.RED, lcd.BLACK)
- Install MaixPy IDE
- Launch the MaixPy IDE
- Select the model of the development board - Tools-> Select Board-> M5StickV.
- Click the green Connect link button in the lower left corner and select the USB serial connection port, click OK.
- When the connection button changes from green to red, it has been connected successfully.
- Click the Run button in the lower left corner to execute the code and verify it.
- Click the serial terminal tab below.
- Finally, you will see the following output:
Face Detection using M5StickV
By default face detection model and program code was already preinstalled. Here's how it works.
- The face detection example works pretty good.
- In order to be able to use the another models, we need to burn it into the flash memory of the M5StickV using kflash_gui. Other models can be downloaded from here. There is a pre-trained model, the mobilenet, which is pre-trained to recognize 1000 objects. It can detect many everyday objects with ease.
- Copy the below code into MaixPy IDE.
import sensor import image import KPU as kpu sensor.reset() sensor.set_pixformat(sensor.RGB565) sensor.set_framesize(sensor.QVGA) sensor.run(1) task = kpu.load(0x) while(True): img = sensor.snapshot() code = kpu.run_yolo2(task, img) if code: for i in code: print(i) a = img.draw_rectangle(i.rect()) a = kpu.deinit(task)
Press the Run button, and the board displays live video from the camera to the MaixPy IDE.
The accuracy is pretty good considering we are running it on a $27 board. This is truly impressive and revolutionary.
Conclusion
This board is not ideal though, it lacks analog inputs, microphone, WiFi, and Bluetooth. However, it is a great camera with AI capabilities that can be used for face recognition, object or shape detection and many other detection activities. Also, this is an awesome dev kit to get started with the Kendryte K210 RISC-V core.
I hope you found this guide useful and thanks for reading. If you have any questions or feedback? Leave a comment below. Stay tuned! | https://maker.pro/micropython/projects/getting-started-with-the-m5stickv-ai-iot-camera | CC-MAIN-2021-10 | refinedweb | 969 | 57.27 |
I'm trying to get this code:
context.lookup("java:comp/env/jdbc/myDS");
to work from a servlet. If I change it to:
context.lookup("java:/jdbc/myDS");
Then it works. However, for certain reason, I don't want to change that. I want the java:comp namespace to work. How can I make that happen?
I'm using an expanded Ear directory structure with expanded War dirs.
did you include a jboss-web.xml file in your web application's WEB-INF folder? You can specifiy jndi mappings within that file like this:
<jboss-web>
<resource-ref>
<res-ref-name>jdbc/myDS</res-ref-name>
<jndi-name>java:/jdbc/myDS</jndi-name>
</resource-ref>
</jboss-web>
That way you will not have to edit source code no matter where the resource is really located on you jndi server. Which also makes moving your app from one environment to another simple.
I have similiar problem. I will not use any proprietary XML conf file. My EAR HAVE to be app server independent. But as long as I have seen, every App server is trying to sabotage plataform independence...
It´s IMPOSSIBLE to build an EAR that works in JBOss and Sun´s J2EE. That´s my opinion. And it happens becouse of this comp/env incompatibility. If you get any closed EAR that access DB from Sun it will not work unchanged on Jboss.
I´d like to see an EAR that works unchanged and only with sun´s default XML configuration, with Entity beans,session beans, servlets, jsp and DB access that work on JBoss and J2ee. As I doubt that it will work unchanged in other Web App Servers ... | https://developer.jboss.org/message/172512 | CC-MAIN-2019-30 | refinedweb | 282 | 68.87 |
This is the 8th MVC (Model view controller) tutorial and in this article we
try to understand how we can validate data passed in MVC URL’s.
MVC is all about action which happens via URL and data for those actions is also
provided by the URL. It would be great if we can validate data which is passed
via these MVC URL’s.
For instance let’s consider the MVC URL . If anyone wants to view
customer details for 1001 customer code he needs to enter .
The customer code is numeric in nature. In other words anyone entering a MVC URL
like is invalid. MVC framework
provides a validation mechanism by which we can check on the URL itself if the
data is appropriate. In this article we will see how to validate data which is
entered on the MVC URL. So let’s start step by step.
The first is to create a simple customer class model which will be invoked by
the controller.
public class Customer
{
public int Id { set; get; }
public string CustomerCode { set; get; }
public double Amount { set; get; }
}
The next step is to create a simple controller class which ‘DisplayCustomer’ function which displays the
customer using the ‘id’ value. This function takes the ‘id’ value and looks up
through the customer collection. Below is the downsized reposted code of the
function.
[HttpGet]
public ViewResult DisplayCustomer(int id)
{
Customer objCustomer = Customers[id];
return View("DisplayCustomer",objCustomer);
}
If you look at the ‘DisplayCustomer’ function it takes an ‘id’ value which is
numeric. We would like put a validation on this id field with the following
constraints:-
• Id should always be numeric.
• It should be between 0 to 99.
We want the above validations to fire when the MVC URL is invoked with data.
The validation described in the step 2 can be achieved by applying regular
expression on the route map. If you go to global.asax file and see the maproute
function on the inputs to this function is the constraint as shown in the below
figure.
In case you are new to regular expression we would advise you to go through this
video on regular expressions
So in order to accommodate the numeric validation we need to the specify the
regex constraint i.e. ‘\d{1,2}’ in the ‘maproute’ function as shown below.
‘\d{1,2}’ in regex means that the input should be numeric and should be maximum
of length 1 or 2 , i.e. between 0 to 99.
You can specify ‘maproute’ functions,
it’s time to test if these validations work.
So in the first test we have specified valid 1 and we see that the controller is
hit and the data is displayed.
If you try to specify value more than 100 you would get the other page. In MVC everything is an action and those
actions invoke the views or pages. We can not specify direct hyperlinks like
, this would defeat the purpose of MVC. In other words we need to specify
actions and these actions will invoke the URL’s.
In the next article we will look in to how to define outbound URL in MVC
views which will help us to navigate from one page to other page.
Latest Articles
Latest Articles from Questpond
Login to post response | http://www.dotnetfunda.com/articles/show/1527/how-to-validate-data-provided-in-mvc-urls-mvc-tutorial-number-8 | CC-MAIN-2017-39 | refinedweb | 552 | 71.95 |
Struts 1 Tutorial and example programs
to the Struts Action Class
This lesson is an introduction to Action Class... a new action class for every action. In this part, I will walk through a full...Struts 1 Tutorials and many example code to learn Struts 1 in detail.
Struts 1
java struts DAO - Struts
java struts DAO hai friends i have some doubt regarding the how to connect strutsDAO and action dispatch class please provide some example to explain this connectivity.
THANKS IN ADVANCE
struts - Struts
Struts ui tags example What is UI Tags in Strus? I am looking for a struts ui tags example. to database connection first example. - Struts
Struts to database connection first example. Hi All,
I am new to Struts technology. I want to retriew the values of database to the browser... the Action class "iteratorTag.java" having arraylist "myList" having add element application
struts application hi,
i can write a struts application... into data base
**but now i can apply to some validation to that form fileds but it can not apply validations to that filed for example i put required rule but i am
struts - Struts
struts ValidatorResources not found in application scope under key "org.apache.commons.validator.VALIDATOR_RESOURCES
I get this error when i try the validator framework example.......wat could b the problem
Struts Tutorials
application development using Struts. I will address issues with designing Action... Struts application, specifically how you test the Action class.
The Action class... into a Struts enabled project.
5. Struts Action Class Wizard - Generates Java
Struts File Upload Example - Struts
Struts File Upload Example hi,
when i tried the struts file upload example(Author is Deepak) from the following URL
i have succeeded. but when i try to upload file Hi
what is struts flow of 1.2 version struts?
i have struts applicatin then from jsp page how struts application flows
Thanks
Kalins Naik
Please visit the following link:
Struts Tutorial
Struts Articles
application. The example also uses Struts Action framework plugins in order... previous Struts experience. I notice in many cases that some sort of ?paradigm.... A Struts Action does the job, may be calling some other tiers of the Application
Introduction to Struts 2 Framework
.
Action Form
ActionForm class is mandatory in Struts 1
In Struts...
ActionForward is class in Struts 1.
In Struts 2 action returns....
In this video tutorial I will teach you the Struts 2 framework. You will
Struts for beginners struts for beginners example
struts-config.xml - Struts
struts-config.xml in struts-config.xml i have seen some code like
in this what is the meaning of "{1}".
when u used like this? what is the purpose of use
tiles - Struts
Struts Tiles I need an example of Struts Tiles
Based on struts Upload - Struts
Based on struts Upload hi,
i can upload the file in struts but i want the example how to delete uploaded file.Can you please give the code
Struts
Struts Am newly developed struts applipcation,I want to know how to logout the page using the strus
Please visit the following link:
Struts Login Logout Application
best Struts material - Struts
best Struts material hi ,
I just want to learn basic Struts.Please send me the best link to learn struts concepts
Hi Manju,
Read for more and more information with example at:
http Books
Request Dispatcher. In fact, some Struts aficionados feel that I exagerate the negatives of Struts in the next section. I like Struts, and think... First Example - Framework
Struts First Example HI
i am new to Struts and developing a single... in details of problem for complete solution.
For read more information on struts to visit the link....
Thanks
struts
struts I have no.of checkboxes in jsp.those checkboxes values came from the databases.we don't know howmany checkbox values are came from......use of Struts Hi,
can anybody tell me what is the importance of sturts? why we are using it?
Hitendra Hi,
I am sending build a Struts Project - Struts
How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips dispatch action - Struts
Struts dispatch action i am using dispatch action. i send....
but now my problem is i want to send another value as querystring for example... not contain handler parameter named 'parameter'
how can i overcome
struts
*;
import org.apache.struts.action.*;
public class LoginAction extends Action...struts <p>hi here is my code can you please help me to solve...;
<p><html>
<body></p>
<form action="login.do">
Struts - Jboss - I-Report - Struts
Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report
struts - Struts
; Hi,Please check easy to follow example at dispatchaction vs lookupdispatchaction What is struts 2.0 - Struts
Struts 2.0 Hi ALL,
I am getting following error when I am trying... not be resolved as a collection/array/map/enumeration/iterator type. Example: people or people.{name}
here is the action:
public class WeekDay
Struts
Struts I want to create tiles programme using struts1.3.8 but i got jasper exception help me out
struts <html:select> - Struts
with the action.
For example, the class attribute might be specified...struts i am new to struts.when i execute the following code i am..., allowing Struts to figure out the form class from the
struts-config.xml file how to make one jsp page with two actions..ie i need to provide two buttons in one jsp page with two different actions without redirecting to any other page
Struts Tutorial
the
information to them.
Struts Controller Component : In Controller, Action class...).
Features of Struts
Struts has various of features some of them are as follows... known as Struts 1, and Struts 2 (till the time of writing this
tutorial
struts hibernate integraion tut - Struts
struts hibernate integraion tut Hi, I was running struts hibernate integration tutorial. I am facing following error while executing the example. type Exception report message description The server encountered Hello Experts,
How can i comapare
in jsp scriptlet in if conditions like
struts - Struts
struts I want to know clear steps explanation of struts flow..please explain the flow clearly Links - Links to Many Struts Resources
architecture using the standard RequestDispatcher. In fact, some Struts aficionados feel that I exaggerate the negatives of Struts in the next section. I like... this example to show you additional features of Struts.
Jakarta Struts 1.2 Tutorial
java - Struts
Java - Validating a form field in JSP using Struts I need an example in Java struts that validates a form field in JSP....shtml
Hope that the above links
struts - Framework
struts can show some example framework of struts Hi Friend,
Please visit the following links:
Understanding Struts - Struts
on is Mifos, it is an open-source application.
I have been reading some...Understanding Struts Hello,
Please I need your help on how I can understand Strut completely. I am working on a complex application which :
java - Struts
java how can i get dynavalidation in my applications using struts? Hi friend,
For dyna validation some step to be remember...
*) Action class
public class MyAction extends
que - Struts
que how can i run a simple strut programm?
please explain with a proper example.
reply soon. Hi Friend,
Please visit the following link:
Thanks | http://roseindia.net/tutorialhelp/comment/13933 | CC-MAIN-2014-10 | refinedweb | 1,245 | 66.33 |
using UnityEngine; using System.Collections; public class Singleton : Monobehaviour { public static Singleton instance; void Awake() { if(instance == null) { instance = this; DontDestroyOnLoad(gameObject); } else if(instance != this) { Destroy(gameObject); } } }
This is the Singleton I often use when developing in Unity. I’ve spent the past few days doing some more in depth research for using Singletons specifically for game development in Unity. I’ve learned a few things; first, everyone and their dog has written a post on Singletons, secondly, there are an endless amount of developers arguing the merits and woes of Singletons, and lastly I’ve learned that even though there are many implementations of Singletons, this basic pattern handles most scenarios. These things have given me inspiration as to the direction of future posts and this blog in general. I am going to keep things concise and practical. So without further ado, here is an example of a Singleton in use.
using UnityEngine; using System.Collections; public class Player : Monobehaviour { public static Player instance; public int health = 100; void Awake() { if(instance == null) { instance = this; DontDestroyOnLoad(gameObject); } else if(instance != this) { Destroy(gameObject); } } }
In this Player class we have added health and made it Public. This is important if you want to give other scripts access to this variable.
using UnityEngine; using System.Collections; public class Sword : Monobehaviour { void Update() { if(Input.GetKeyDown(KeyCode.Return)) { Slash(); print(Player.instance.health.ToString()); } } void Slash() { Player.instance.health -= 30; } }
In this Sword class we have a Slash function that accesses the Player’s health and subtracts it by 30. In the Update function I’ve added some code to test Slash. It does indeed go down by 30. I realize the Player can now have negative health but for the sake of simplicity I’ll leave it there as it already demonstrates a Singleton.
Considerations
- DontDestroyOnLoad() only works with root GameObjects or components on root GameObjects. Therefore you should leave DontDestroyOnLoad() off of child objects at least to save yourself from warnings filling up the console. | https://ccgivens.wordpress.com/2017/01/19/obligatory-singleton-post/ | CC-MAIN-2019-35 | refinedweb | 336 | 57.27 |
Introduction to Brian part 2: Synapses:
2-intro-to-brian-synapses.ipynb
See the tutorial overview page for more details.
If you haven’t yet read part 1: Neurons, go read that now.
As before we start by importing the Brian package and setting up matplotlib for IPython:
from brian2 import * %matplotlib inline
The simplest Synapse¶
Once you have some neurons, the next step is to connect them up via synapses. We’ll start out with doing the simplest possible type of synapse that causes an instantaneous change in a variable after a spike.
start_scope() eqs = ''' dv/dt = (I-v)/tau : 1 I : 1 tau : second ''' G = NeuronGroup(2, eqs, threshold='v>1', reset='v = 0', method='linear') G.I = [2, 0] G.tau = [10, 100]*ms # Comment these two lines out to see what happens without Synapses S = Synapses(G, G, on_pre='v_post += 0.2') S.connect(i=0, j=1) M = StateMonitor(G, 'v', record=True) run(100*ms) plot(M.t/ms, M.v[0], label='Neuron 0') plot(M.t/ms, M.v[1], label='Neuron 1') xlabel('Time (ms)') ylabel('v') legend();
There are a few things going on here. First of all, let’s recap what is
going on with the
NeuronGroup. We’ve created two neurons, each of
which has the same differential equation but different values for
parameters I and tau. Neuron 0 has
I=2 and
tau=10*ms which means
that is driven to repeatedly spike at a fairly high rate. Neuron 1 has
I=0 and
tau=100*ms which means that on its own - without the
synapses - it won’t spike at all (the driving current I is 0). You can
prove this to yourself by commenting out the two lines that define the
synapse.
Next we define the synapses:
Synapses(source, target, ...) means
that we are defining a synaptic model that goes from
source to
target. In this case, the source and target are both the same, the
group
G. The syntax
on_pre='v_post += 0.2' means that when a
spike occurs in the presynaptic neuron (hence
on_pre) it causes an
instantaneous change to happen
v_post += 0.2. The
_post means
that the value of
v referred to is the post-synaptic value, and it
is increased by 0.2. So in total, what this model says is that whenever
two neurons in G are connected by a synapse, when the source neuron
fires a spike the target neuron will have its value of
v increased
by 0.2.
However, at this point we have only defined the synapse model, we
haven’t actually created any synapses. The next line
S.connect(i=0, j=1) creates a synapse from neuron 0 to neuron 1.
Adding a weight¶
In the previous section, we hard coded the weight of the synapse to be the value 0.2, but often we would to allow this to be different for different synapses. We do that by introducing synapse equations.
start_scope() eqs = ''' dv/dt = (I-v)/tau : 1 I : 1 tau : second ''' G = NeuronGroup(3, eqs, threshold='v>1', reset='v = 0', method='linear') G.I = [2, 0, 0] G.tau = [10, 100, 100]*ms # Comment these two lines out to see what happens without Synapses S = Synapses(G, G, 'w : 1', on_pre='v_post += w') S.connect(i=0, j=[1, 2]) S.w = 'j*0.2'_1<<
This example behaves very similarly to the previous example, but now
there’s a synaptic weight variable
w. The string
'w : 1' is an
equation string, precisely the same as for neurons, that defines a
single dimensionless parameter
w. We changed the behaviour on a
spike to
on_pre='v_post += w' now, so that each synapse can behave
differently depending on the value of
w. To illustrate this, we’ve
made a third neuron which behaves precisely the same as the second
neuron, and connected neuron 0 to both neurons 1 and 2. We’ve also set
the weights via
S.w = 'j*0.2'. When
i and
j occur in the
context of synapses,
i refers to the source neuron index, and
j
to the target neuron index. So this will give a synaptic connection from
0 to 1 with weight
0.2=0.2*1 and from 0 to 2 with weight
0.4=0.2*2.
Introducing a delay¶
So far, the synapses have been instantaneous, but we can also make them act with a certain delay.
start_scope() eqs = ''' dv/dt = (I-v)/tau : 1 I : 1 tau : second ''' G = NeuronGroup(3, eqs, threshold='v>1', reset='v = 0', method='linear') G.I = [2, 0, 0] G.tau = [10, 100, 100]*ms S = Synapses(G, G, 'w : 1', on_pre='v_post += w') S.connect(i=0, j=[1, 2]) S.w = 'j*0.2' S.delay = 'j*2*ms'_2<<
As you can see, that’s as simple as adding a line
S.delay = 'j*2*ms'
so that the synapse from 0 to 1 has a delay of 2 ms, and from 0 to 2 has
a delay of 4 ms.
More complex connectivity¶
So far, we specified the synaptic connectivity explicitly, but for larger networks this isn’t usually possible. For that, we usually want to specify some condition.
start_scope() N = 10 G = NeuronGroup(N, 'v:1') S = Synapses(G, G) S.connect(condition='i!=j', p=0.2)
Here we’ve created a dummy neuron group of N neurons and a dummy
synapses model that doens’t actually do anything just to demonstrate the
connectivity. The line
S.connect(condition='i!=j', p=0.2) will
connect all pairs of neurons
i and
j with probability 0.2 as
long as the condition
i!=j holds. So, how can we see that
connectivity? Here’s a little function that will let us visualise it.
def visualise_connectivity(S): Ns = len(S.source) Nt = len(S.target) figure(figsize=(10, 4)) subplot(121) plot(zeros(Ns), arange(Ns), 'ok', ms=10) plot(ones(Nt), arange(Nt), 'ok', ms=10) for i, j in zip(S.i, S.j): plot([0, 1], [i, j], '-k') xticks([0, 1], ['Source', 'Target']) ylabel('Neuron index') xlim(-0.1, 1.1) ylim(-1, max(Ns, Nt)) subplot(122) plot(S.i, S.j, 'ok') xlim(-1, Ns) ylim(-1, Nt) xlabel('Source neuron index') ylabel('Target neuron index') visualise_connectivity(S)
There are two plots here. On the left hand side, you see a vertical line of circles indicating source neurons on the left, and a vertical line indicating target neurons on the right, and a line between two neurons that have a synapse. On the right hand side is another way of visualising the same thing. Here each black dot is a synapse, with x value the source neuron index, and y value the target neuron index.
Let’s see how these figures change as we change the probability of a connection:
start_scope() N = 10 G = NeuronGroup(N, 'v:1') for p in [0.1, 0.5, 1.0]: S = Synapses(G, G) S.connect(condition='i!=j', p=p) visualise_connectivity(S) suptitle('p = '+str(p))
And let’s see what another connectivity condition looks like. This one will only connect neighbouring neurons.
start_scope() N = 10 G = NeuronGroup(N, 'v:1') S = Synapses(G, G) S.connect(condition='abs(i-j)<4 and i!=j') visualise_connectivity(S)
Try using that cell to see how other connectivity conditions look like.
You can also use the generator syntax to create connections like this
more efficiently. In small examples like this, it doesn’t matter, but
for large numbers of neurons it can be much more efficient to specify
directly which neurons should be connected than to specify just a
condition. Note that the following example uses
skip_if_invalid to
avoid errors at the boundaries (e.g. do not try to connect the neuron
with index 1 to a neuron with index -2).
start_scope() N = 10 G = NeuronGroup(N, 'v:1') S = Synapses(G, G) S.connect(j='k for k in range(i-3, i+4) if i!=k', skip_if_invalid=True) visualise_connectivity(S)
If each source neuron is connected to precisely one target neuron (which would be normally used with two separate groups of the same size, not with identical source and target groups as in this example), there is a special syntax that is extremely efficient. For example, 1-to-1 connectivity looks like this:
start_scope() N = 10 G = NeuronGroup(N, 'v:1') S = Synapses(G, G) S.connect(j='i') visualise_connectivity(S)
You can also do things like specifying the value of weights with a string. Let’s see an example where we assign each neuron a spatial location and have a distance-dependent connectivity function. We visualise the weight of a synapse by the size of the marker.
start_scope() N = 30 neuron_spacing = 50*umetre width = N/4.0*neuron_spacing # Neuron has one variable x, its position G = NeuronGroup(N, 'x : metre') G.x = 'i*neuron_spacing' # All synapses are connected (excluding self-connections) S = Synapses(G, G, 'w : 1') S.connect(condition='i!=j') # Weight varies with distance S.w = 'exp(-(x_pre-x_post)**2/(2*width**2))' scatter(S.x_pre/um, S.x_post/um, S.w*20) xlabel('Source neuron position (um)') ylabel('Target neuron position (um)');
Now try changing that function and seeing how the plot changes.
More complex synapse models: STDP¶
Brian’s synapse framework is very general and can do things like short-term plasticity (STP) or spike-timing dependent plasticity (STDP). Let’s see how that works for STDP.
STDP is normally defined by an equation something like this:
That is, the change in synaptic weight w is the sum over all presynaptic spike times \(t_{pre}\) and postsynaptic spike times \(t_{post}\) of some function \(W\) of the difference in these spike times. A commonly used function \(W\) is:
This function looks like this:
tau_pre = tau_post = 20*ms A_pre = 0.01 A_post = -A_pre*1.05 delta_t = linspace(-50, 50, 100)*ms W = where(delta_t>0, A_pre*exp(-delta_t/tau_pre), A_post*exp(delta_t/tau_post)) plot(delta_t/ms, W) xlabel(r'$\Delta t$ (ms)') ylabel('W') axhline(0, ls='-', c='k');
Simulating it directly using this equation though would be very inefficient, because we would have to sum over all pairs of spikes. That would also be physiologically unrealistic because the neuron cannot remember all its previous spike times. It turns out there is a more efficient and physiologically more plausible way to get the same effect.
We define two new variables \(a_{pre}\) and \(a_{post}\) which are “traces” of pre- and post-synaptic activity, governed by the differential equations:
When a presynaptic spike occurs, the presynaptic trace is updated and the weight is modified according to the rule:
When a postsynaptic spike occurs:
To see that this formulation is equivalent, you just have to check that the equations sum linearly, and consider two cases: what happens if the presynaptic spike occurs before the postsynaptic spike, and vice versa. Try drawing a picture of it.
Now that we have a formulation that relies only on differential equations and spike events, we can turn that into Brian code.
start_scope() taupre = taupost = 20*ms wmax = 0.01 Apre = 0.01 Apost = -Apre*taupre/taupost*1.05 G = NeuronGroup(1, 'v:1', threshold='v>1') S = Synapses(G, G, ''' w : 1 dapre/dt = -apre/taupre : 1 (event-driven) dapost/dt = -apost/taupost : 1 (event-driven) ''', on_pre=''' v_post += w apre += Apre w = clip(w+apost, 0, wmax) ''', on_post=''' apost += Apost w = clip(w+apre, 0, wmax) ''')
There are a few things to see there. Firstly, when defining the synapses
we’ve given a more complicated multi-line string defining three synaptic
variables (
w,
apre and
apost). We’ve also got a new bit of
syntax there,
(event-driven) after the definitions of
apre and
apost. What this means is that although these two variables evolve
continuously over time, Brian should only update them at the time of an
event (a spike). This is because we don’t need the values of
apre
and
apost except at spike times, and it is more efficient to only
update them when needed.
Next we have a
on_pre=... argument. The first line is
v_post += w: this is the line that actually applies the synaptic
weight to the target neuron. The second line is
apre += Apre which
encodes the rule above. In the third line, we’re also encoding the rule
above but we’ve added one extra feature: we’ve clamped the synaptic
weights between a minimum of 0 and a maximum of
wmax so that the
weights can’t get too large or negative. The function
clip(x, low, high) does this.
Finally, we have a
on_post=... argument. This gives the statements
to calculate when a post-synaptic neuron fires. Note that we do not
modify
v in this case, only the synaptic variables.
Now let’s see how all the variables behave when a presynaptic spike arrives some time before a postsynaptic spike.
start_scope() taupre = taupost = 20*ms wmax = 0.01 Apre = 0.01 Apost = -Apre*taupre/taupost*1.05 G = NeuronGroup(2, 'v:1', threshold='t>(1+i)*10*ms', refractory=100*ms) S = Synapses(G, G, ''' w : 1 dapre/dt = -apre/taupre : 1 (clock-driven) dapost/dt = -apost/taupost : 1 (clock-driven) ''', on_pre=''' v_post += w apre += Apre w = clip(w+apost, 0, wmax) ''', on_post=''' apost += Apost w = clip(w+apre, 0, wmax) ''', method='linear') S.connect(i=0, j=1) M = StateMonitor(S, ['w', 'apre', 'apost'], record=True) run(30*ms) figure(figsize=(4, 8)) subplot(211) plot(M.t/ms, M.apre[0], label='apre') plot(M.t/ms, M.apost[0], label='apost') legend() subplot(212) plot(M.t/ms, M.w[0], label='w') legend(loc='best') xlabel('Time (ms)');
A couple of things to note here. First of all, we’ve used a trick to make neuron 0 fire a spike at time 10 ms, and neuron 1 at time 20 ms. Can you see how that works?
Secondly, we’ve replaced the
(event-driven) by
(clock-driven) so
you can see how
apre and
apost evolve over time. Try reverting
this change and see what happens.
Try changing the times of the spikes to see what happens.
Finally, let’s verify that this formulation is equivalent to the original one.
start_scope() taupre = taupost = 20*ms Apre = 0.01 Apost = -Apre*taupre/taupost*1.05 tmax = 50*ms N = 100 # Presynaptic neurons G spike at times from 0 to tmax # Postsynaptic neurons G spike at times from tmax to 0 # So difference in spike times will vary from -tmax to +tmax G = NeuronGroup(N, 'tspike:second', threshold='t>tspike', refractory=100*ms) H = NeuronGroup(N, 'tspike:second', threshold='t>tspike', refractory=100*ms) G.tspike = 'i*tmax/(N-1)' H.tspike = '(N-1-i)*tmax/(N-1)' S = Synapses(G, H, ''' w : 1 dapre/dt = -apre/taupre : 1 (event-driven) dapost/dt = -apost/taupost : 1 (event-driven) ''', on_pre=''' apre += Apre w = w+apost ''', on_post=''' apost += Apost w = w+apre ''') S.connect(j='i') run(tmax+1*ms) plot((H.tspike-G.tspike)/ms, S.w) xlabel(r'$\Delta t$ (ms)') ylabel(r'$\Delta w$') axhline(0, ls='-', c='k');
Can you see how this works? | http://brian2.readthedocs.io/en/2.0.2.1/resources/tutorials/2-intro-to-brian-synapses.html | CC-MAIN-2018-17 | refinedweb | 2,553 | 64.61 |
PTHREAD_KILL(3) Linux Programmer's Manual PTHREAD_KILL(3)
pthread_kill - send a signal to a thread
#include <signal.h> int pthread_kill(pthread_t thread, int sig); Compile and link with -pthread. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): pthread_kill(): _POSIX_C_SOURCE >= 199506L || _XOPEN_SOURCE >= 500
The pthread_kill() function sends the signal sig to thread, a thread in the same process as the caller. The signal is asynchronously directed to thread. If sig is 0, then no signal is sent, but error checking is still performed.
On success, pthread_kill() returns 0; on error, it returns an error number, and no signal is sent.
EINVAL An invalid signal was specified.() │ Thread safety │ MT-Safe │ └──────────────────────────────────────┴───────────────┴─────────┘
POSIX.1-2001, POSIX.1-2008..
kill(2), sigaction(2), sigpending(2), pthread_self(3), pthread_sigmask(3), raise(3), pthreads(7), signal(7)
This page is part of release 5.11 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2021-03-22 PTHREAD_KILL(3)
Pages that refer to this page: pthread_sigmask(3), raise(3), nptl(7), pthreads(7), signal(7), signal-safety(7) | https://man7.org/linux/man-pages/man3/pthread_kill.3.html | CC-MAIN-2021-17 | refinedweb | 191 | 66.44 |
Angular 6: What's New, and Why Upgrade
Angular 6 is now available and it’s not a drop-in replacement for Angular 5. If you’ve been developing with Angular since Angular 2, you likely remember that it wasn’t too difficult to upgrade to Angular 4 or Angular 5. In most projects, you could change the version numbers in your
package.json and you were on your way.
In fact, the most significant change I remember in the last couple of years was the introduction of
HttpClient, which happened in Angular 4.3. And it wasn’t removed in Angular 5; it was merely deprecated. There was also the move from
<template> to
<ng-template>. There were some APIs removed in Angular 5, but I wasn’t using them in any of my projects.
This brings us to Angular 6, where there are breaking changes. The most prominent difference that I’ve found is not in Angular itself but in RxJS. In this post, I’ll walk you through these breaking changes so you can stay on the happy path while upgrading.
Upgrading to RxJS 6
In RxJS v6, many of the class locations changed (affecting your imports) as did the syntax you use to manipulate data from an
HttpClient request.
With RxJS v5.x, your imports look as follows:
import { Observable } from 'rxjs/Observable'; import 'rxjs/add/operator/map';
With RxJS v6.x, they’ve changed a bit:
import { Observable } from 'rxjs'; import { map } from 'rxjs/operators';
Another breaking change is the introduction of
pipe(). With v5.x, you could map directly from an HTTP call to manipulate data.
search(q: string): Observable<any> { return this.getAll().map(data => data .filter(item => JSON.stringify(item).toLowerCase().includes(q))); }
With v6.x, you have to pipe the results into
map():
search(q: string): Observable<any> { return this.getAll().pipe( map((data: any) => data .filter(item => JSON.stringify(item).toLowerCase().includes(q))) ); }
Hat tip to funkizer on Stack Overflow for their help in figuring out how to convert from
map() to
pipe().
The RxJS v5.x to v6 Update Guide has many more tips for upgrading, including:
If you depend on a library that uses RxJS v5.x or don’t want to modify your code, you can install rxjs-compat:
npm install rxjs@6 rxjs-compat@6 --save
Note that this will increase the bundle size of your application, so you should try to remove it as soon as you can.
To convert your imports from the old locations to the new ones, you can use rxjs-tslint:
npm i -g rxjs-tslint rxjs-5-to-6-migrate -p [path/to/tsconfig.json]
Dependency Injection Simplified in Angular 6
One of the changes I like in Angular 6 is your services can now register themselves. In previous versions, when you annotated a class with
@Injectable(), you had to register it as a provider in a module or component to use it. In Angular 6, you can specify
providedIn and it will auto-register itself when the app bootstraps.
@Injectable({ providedIn: 'root' })
You can still use the old way, and things will work. You can also target a specific module for your service.
@Injectable({ providedIn: AdminModule })
I like the new auto-registration capability because it allows you to test your components and services easier. No need to register services in your modules and your tests anymore!
See Angular’s Dependency Injection Guide for more information.
Angular CLI Changes
Angular CLI has updated its version number to match Angular’s, going from 1.7.4 to 6.0.0. The two most significant changes I noticed are:
- Running
ng testno longer watches files for changes. It executes each test, then exits. If you want to watch your files for changes, you can run
ng test --watch=true
- Running
ng buildno longer produces a production build by default. To do a production build, you can run
ng build --prod. In Angular 5 and below, the flag was
-prod, with a single dash.
There are many more changes, but these were the ones that had the biggest impact on my workflow.
What Else is New in Angular 6?
Stephen Fluin announced that Angular 6 is available last week. He notes the significant changes:
ng update: this is a new CLI command that can upgrade components of your application. For example,
ng update @angular/corewill update all of the Angular packages, as well as RxJS and TypeScript. To see how you can use it on your project, see the Angular Update Guide.
ng add: this command makes it easier to add popular libraries and capabilities to your project. For example:
ng add @angular/pwa: turn your app into a progressive web application (PWA)
ng add @ng-bootstrap/schematics: adds Bootstrap and ng-bootstrap to your project
ng add @angular/material: installs and configures Angular Material (note: it does not import individual modules)
- Angular Elements: allows dynamic bootstrapping of components in embedded HTML
- Angular Material Starters: allows you to add flags to
ng generateto generate Material components like side navigation, dashboards, and data tables
- CLI workspaces: you can now have multiple Angular projects
- Library support: component libraries can be generated with
ng generate library {name}
Tutorials Updated for Angular 6
I updated a number of tutorials from Angular 5 to 6 since the release last Friday. I started with my Angular and Angular CLI Tutorial.
I spent a couple hours today upgrading my @angular tutorial to use Angular 6. It wasn't too difficult, but not that straightforward either. The good news is I learned a lot!— Matt Raible (@mraible) May 4, 2018
💻 PR at
📘 Updated tutorial at #bleedingedge
This tutorial has branches for Angular Material, Bootstrap, and OIDC authentication with Okta. Angular’s CLI has an include Angular Material story and an include Bootstrap story that helped me upgrade.
Upgrading the
okta branch to use Angular 6 wasn’t too difficult. That branch uses Manfred Steyer’s angular-oauth2-oidc v3.1.4, which depends on RxJS v5.x, so I did have to install rxjs-compat to make things work.
I also upgraded two of the very first Angular tutorials I wrote for Okta last year:
- Build an Angular App with Okta’s Sign-In Widget in 15 Minutes
- Angular Authentication with OpenID Connect and Okta in 20 Minutes
We all know authentication is an important component of most apps, so I thought it would be helpful to get these posts updated, in case you are ready to make the switch to Angular 6. You can see the changelog at the bottom of each post to review exactly what changed. There are links to both the post changes and the code changes.
For each project I updated, I performed the following steps:
- Created a new project from scratch using
ng new
- Went through the tutorial steps, adjusting the code as necessary
- In the existing project, created a branch, ran
rm -rf *, then copied the code from the completed tutorial
- Copied/deleted dot files that didn’t get deleted or moved
For the many other Angular tutorials on this blog, I believe it’s possible to upgrade them, but also very time-consuming. For that reason, I’ve changed all tutorials to specify the version of Angular CLI to install, as well as the version of Angular Material.
If you want to try upgrading any of them and succeed, please send a pull request! I’ll be happy to update its matching blog post.
Learn More about Upgrading to Angular 6
I hope this post has helped you learn how to upgrade to Angular 6. All the applications I updated in the last several days were small and didn’t contain a whole lot of functionality. I imagine upgrading a more substantial project might be more difficult.
An excellent example of a large-project-by-default is one created by JHipster. William Marques recently created a pull request for upgrading to Angular 6, which might serve as a guide for those not using Angular CLI.
If you have any questions about Angular 6 or related projects, please leave a comment. I’m always happy to try and help! | https://developer.okta.com/blog/2018/05/09/upgrade-to-angular-6 | CC-MAIN-2019-22 | refinedweb | 1,365 | 53.61 |
In JavaFX, I have a Controller class that pulls control components from an FXML file and has methods that act on the component, shown with a Label here:
public class ViewController { @FXML private Label labelStatus; public void updateStatusLabel(String label) { labelStatus.setText("Status: " + label); } }
I also have a Java Thread with a run() method, like this:
public class Server extends Thread { public void run() { super.run(); } }
This Server thread handles some socket connections that I need for my particular application. After a connection has been established (in the run() method -- not shown), I need to update the Label in the FXML Controller. How would I do this?
Note: I've purposely made my code and question general so it may help others with the same problem.
You call Platform.runLater(runnable) off the JavaFX UI thread to execute a runnable that updates elements of the active JavaFX Scene Graph on the JavaFX UI thread.
Also review Concurrency in JavaFX, with the Task and Service classes and see if that is not a more appropriate solution to your particular task.
For more information, see: | https://javafxpedia.com/en/knowledge-base/17873597/javafx--updating-ui-elements-in-a-controller-class-from-a-thread | CC-MAIN-2020-50 | refinedweb | 183 | 62.07 |
This site uses strictly necessary cookies. More Information
I have a small viewport, and I would like to know what is the best way to draw a line or border around the viewport.
I tried the following:
I used line renderer and specify the points in 3D. This approach is tedious as I have to specify the x, y, z coordinates to make a rectangle in a way to frame the viewport.
I used Line.MakeRect() method from Vectrosity. It works great. Except that when scaling (switching from 4:3 to 16:9 aspect ratio) occurs, it does not scale but retains position. Which is great, but not desirable as viewport get scaled when aspect ratio changes.
what is the best way to draw a border around viewport? In a way so that it always matches with the scaling effect.
Answer by Eric5h5
·
Sep 04, 2013 at 03:06 AM
Using Vectrosity, redraw the border if the resolution changes:
import Vectrosity;
private var border : VectorLine;
var borderWidth = 4;
private var screenWidth : int;
function Start () {
border = new VectorLine("border", new Vector2[5], null, borderWidth, LineType.Continuous, Joins.Weld);
UpdateBorder();
}
function Update () {
if (Screen.width != screenWidth) {
UpdateBorder();
}
}
function UpdateBorder () {
VectorLine.SetCamera();
border.MakeRect (Vector2(borderWidth/2, borderWidth/2),
Vector2(Screen.width-borderWidth/2, Screen.height-borderWidth/2));
border.Draw();
screenWidth = Screen.width;
}
$$anonymous$$any ways to do this.
a) Simplest: Create a GUITexture with
an image of a transparent box (just
borders). It will scale correctly
with viewport changes if you set all
"Pixel inset" values to 0, and set
scale (x,y and z) to 1.
b) Update the values of "rect" based on Screen.width and Screen.height (or viewport respectively).
Use UnityGUI/Vectrosity to draw the lines in 2d with respect to your viewport size.
Im sure I am missing some still...
Trying (B) now...
a) is definitely simplest, however it's a big shame to cover the entire screen with a transparent texture when you will only be using 1% of it...big waste of fill-rate.
Well, you can fix that part by using the left/right/top/bottom border properties.
Thanks. I was able to follow your algorithm.
Issue drawing lines between buttons - Lines overlap the buttons
2
Answers
How do I make a particle system follow a function?
1
Answer
Draw line in 2D space using LineRender
1
Answer
Making a border on a LineRenderer
0
Answers
Need the user to be able to draw lines on the screen.
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/530048/what-is-the-best-way-to-draw-a-border-around-a-vie.html | CC-MAIN-2021-25 | refinedweb | 419 | 59.3 |
// A personalized adventure #include <iostream> #include <string> #include <iostream> #include <string> using std::cout; using std::cin; using std::endl; using std::string; using namespace std; int main() { cout << "\tGame Designer's Network\n"; int security = 0; string username; cout << "\nUsername: "; cin >> username; string password; cout << "Password: "; cin >> password; if (username == "colton" || password == "williams") { cout << "\nWelcome, colton."; security = 10; } if (!security) cout << "\nYour login failed."; cout << "\nAsk Colton for your username and password."; char again= 'y'; while (again=='y') { rogue, " << leader << ".\n"; cout << "\nAlong the way, a band of marauding ogres ambushed the party. "; cout << "All fought bravely under the command of " << leader; cout << ", and the ogres were defeated, but at a cost. "; cout << "Of the adventurers, " <<**Played a cool game**"; cout << "\nDo you want to play again? (y/n): "; cin >> again; } return 0; } }
What did i do wrong | https://www.daniweb.com/programming/software-development/threads/271717/it-will-not-repeat-the-story-over-if-i-enter-y | CC-MAIN-2018-13 | refinedweb | 139 | 65.12 |
Introduction
The Import SIG exists to provide a forum to discuss the next generation of Python's import facilities.
The long-term goal of the SIG is to eform the entire import architecture of Python. This affects Python start-up, the semantics of sys.path, and the C API to importing.
A short-term goal is to provide a "new architecture import hooks" module for the standard library. This would provide developers with a way of learning the new architecture.
Background
The SIG was born as the result of a discussion on developers day at IPC8. The topic itself is much older, of course.
Pre-History
In the early days of the 21st century, archeologists discovered that originally, Python had no packages (not even jars to keep pickles in). Modules were left on the path where anyone could trip over them. When a particular module was needed, a page was sent out to "find_module". When he returned, saying he had found it, he was then clobbered over the head and sent back out to get it.
Meanwhile, modules, without any sense of propriety, were doing their thing on the path, and in no time at all, Pythondom was littered with the disgusting little things.
Then, one dark and stormy Knight who said "ni" got tired of tripping over them, falling into the ditch and getting his armor rusty. He bravely started piling modules on top of other modules. Protocol was not adjusted however, so the pages now had to make 4 trips; first to find and get the top module, then to find and get the module underneath.
Other Knights, tired of pages returning with the wrong module, began using specially trained pages (called "hooks") who had some tricks for finding exactly the right module. Unfortunately, hooks were a bloodthirsty lot, and if two of them met on the path, usually only one survived.
Late 20th Century
By Python 1.5, both approaches had been blessed. Python had packages built into the language, and a "preferred" method for doing import hooks (ihooks.py).
Unfortunately, the architecture has grown rather complex. Hooks take over at the level of the builtin
__import__ (which is what the keyword
import calls, as well as the C level
PyImport_ImportModule). This is before the package mechanics are encountered. So any hook that deals with packages needs to emulate the package machinery (and ihooks.py provides an implementation of this). See the call graph diagram for an overview.
Using ihooks requires an intimate knowledge of the import mechanism. You change or add functionality by overriding the way ihook's pure Python implementation of the import process sees the "filesystem", or performs the low-level import tasks, (you can, of course, override at a higher level, but you'll have to implement more of the basic mechanisms). See the class diagram of ihooks.
The Problem
The import mechanism is coming under pressure from a number of sources. Packages have moved from being a novelty to a necessity. Package authors are creating complex multi-level structures with inter-dependencies between sub-packages or packages.
Others are doing imports from things other than the filesystem, (archives, databases, possibly even URLs).
People do strange import hacks to get around versioning problems, or platform dependencies. Most of these do not use ihooks, probably because it takes considerable effort to learn how to use ihooks effectively. Many end up with a wrapper module that finds the right code and stuffs it directly into the required namespaces, bypassing the import mechanism altogether.
This creates a problem for
freeze and installers in that tracking dependencies is nearly impossible.
There are other problems. It takes a whole lot of system calls to do a (normal) import, so Python performance suffers, particularly in a CGI-like enviroment. The "approved" ways of extending the path and installing packages and modules are rarely followed, (it's been a moving target), making installations brittle.
And then there are some related issues: such as network installs of Python; or Python in the presence of both network and local installations.
The Proposal
In early 1999, Greg Stein wrote
imputil, which turns the problem on its head. It introduced the idea of having multiple importers. An import request would be handed to each importer in turn, until one of them satisfied the request. In addition, the API for importers makes it easier for the developer to deal with the package machinery.
This solves a number of problems. It makes it easy to import from alternate sources (you don't have to pretend you're a filesystem). It lets one package author install one set of hooks without interfering with anyone else's hooks (or lack thereof). The importer can be distributed with the package, making distribution and maintenance simpler. Combined with an archive of compiled Python modules, it makes awesome start up performance possible. A class diagram of imputil is here. Imputil itself can be downloaded from Greg's web site.
It does make writing certain kinds of import hooks more difficult. "Policy" hooks that affect an entire installation are not easy, (whether this is good or bad is a valid discussion topic). Hooks that take advantage of the current import's assumption that everything is in the filesystem may end up more verbose, (eg, a hook that overrides the "find" part of today's import mechanism, but leaves the "load" part alone).
In addition, there are areas that need improvement. There is currently almost no capability to manage the collection of importers. Performance on a normal Python installation is disappointing, (the only time imputil passes control back to the normal mechanism is for loading binary extensions). | http://www.python.org/community/sigs/retired/import-sig/ | crawl-002 | refinedweb | 944 | 64.1 |
Topic Last Modified: 2007-10-22
This topic explains how to use Group Policy Management Console (GPMC) to configure the Domain Name System (DNS) suffix search list. In some Microsoft Exchange Server 2007 scenarios, if you have a disjoint namespace, you must configure the DNS suffix search list to include multiple DNS suffixes. For more information about Exchange 2007 and disjoint namespaces, see Understanding Disjoint Namespace Scenarios with Exchange 2007.
Before you perform this procedure, confirm that you have installed .NET Framework 3.0 on the computer on which you will install GPMC.
The current version of GPMC that you can download from the Microsoft Download Center operates on the 32-bit versions of the Windows Server 2003 and Windows XP operating systems and can remotely manage Group Policy objects on 32-bit and 64-bit domain controllers. This version of GPMC does not include a 64-bit version, and the 32-bit version does not run on 64-bit platforms. The 32-bit version of Windows Server 2008 and the 32-bit version of Windows Vista both include a 32-bit version of GPMC. The 64-bit version of Windows Server 2008 and the 64-bit version of Windows Vista both include a 64-bit version of GPMC.
To perform this procedure, the account you use must be delegated the following:
For more information about permissions, delegating roles, and the rights that are required to administer Exchange 2007, see Permission Considerations. Object Editor, expand Computer Configuration, expand Administrative Templates, expand Network, and then click DNS Client.
Right-click DNS Suffix Search List, and then click Properties.. For more information about scoping Group Policy objects, see Scoping GPOs.
For more information about Group Policy, see the following topics: | http://technet.microsoft.com/en-us/library/bb847901.aspx | crawl-002 | refinedweb | 288 | 53.1 |
Testing with useEffect
We don't test
useEffect() hooks directly; we test the user-visible results they have.
Say have a component that loads some data from a web service upon mount and displays it:
import React, {useState, useEffect} from 'react';
import {Text, View} from 'react-native';
import api from './api';
export default function WidgetContainer() {
const [widgets, setWidgets] = useState([]);
useEffect(() => {
api.get('/widgets').then(response => {
setWidgets(response.data);
});
}, []);
return (
<View>
{widgets.map(widget => (
<Text key={widget.id}>{widget.name}</Text>
))}
</View>
);
}
Here's our initial attempt at a test:();
});
});
But the calls to
queryByText() return
null--the text is not found. This is because the test doesn't wait for the web service to return.
We can confirm this by adding console.log statements:
useEffect(() => {
+ console.log('sent request');
api.get('/widgets').then((response) => {
+ console.log('got response');
setWidgets(response.data);
});
}, []);
In the test, we can see that we sent the request, but didn't get the response before the test finished.
How can we fix this?
One way is to make the test wait for some time before it checks:
it('loads widgets upon mount', () => {
const {queryByText, debug} = render(<WidgetContainer />);
- expect(queryByText('Widget 1')).not.toBeNull();
- expect(queryByText('Widget 2')).not.toBeNull();
+ return new Promise((resolve, reject) => {
+ setTimeout(() => {
+ expect(queryByText('Widget 1')).not.toBeNull();
+ expect(queryByText('Widget 2')).not.toBeNull();
+ resolve();
+ }, 1000);
+ });
});
The test passes. There is a React
act() warning that we would want to find a way to fix if we kept this test approach:
Warning: An update to WidgetContainer inside a test was not wrapped in act(...).
But do we want to keep this test approach? There are a few other downsides to it as well:
- If the request takes too long, the test can fail sometimes.
- To get around this, you have to set the delay to a longer time, which slows down your whole test suite.
- And if the remote server goes down, your test will fail.
Mocking a Module
As an alternative, let's use Jest module mocks to replace the API call with one we create.
First let's restore our test to before we added the Promise:();
});
});
Next, mock the API module that the
WidgetContainer uses:
import WidgetContainer from '../WidgetContainer';
+import api from './api';
+
+jest.mock('./api');
+
describe('WidgetContainer', () => {
Now we get a different error:
TypeError: Cannot read property 'then' of undefined
8 | useEffect(() => {
9 | console.log('sent request');
> 10 | api.get('/widgets').then((response) => {
| ^
So our call to
api.get() returns undefined. This is because
jest.mock() replaces each function in the default export object with a mock function that by default returns
undefined. Since the real
api returns a promise that resolves, we should set the mocked function to resolve as well.
it('loads widgets upon mount', () => {
+ api.get.mockResolvedValue();
+
const {queryByText} = render(<WidgetContainer />);
Now our test no longer errors out, but we still get expectation failures that our results are
null. This is because
api.get() is now returning a promise that resolves. We also get a warning about an unhandled promise rejection:
TypeError:
Cannot read property 'data' of undefined
10 | api.get('/widgets').then((response) => {
11 | console.log('got response');
> 12 | setWidgets(response.data);
| ^
So we want to resolve to data that the component expects.
-api.get.mockResolvedValue();
+api.get.mockResolvedValue({
+ data: [
+ {id: 1, name: 'Widget 1'},
+ {id: 2, name: 'Widget 2'},
+ ],
+});
This isn't a full Axios response object; all we need to add are the fields the component is using.
Now the promise is no longer rejecting. There's the
act() warning again that we will need to address eventually. But also, we are still getting
null outputted. Why?
We can find out by using
debug(), which will output a representation of our component tree to the test logs:
-const {queryByText} = render(<WidgetContainer />);
+const {queryByText, debug} = render(<WidgetContainer />);
+
+debug();
expect(queryByText('Widget 1')).not.toBeNull();
The logged component tree we get is simply:
<View />
Why would that be? Our API is being called and is responding. This is because our test runs on the same tick of the event loop, so React doesn't have time to get the response and render. We need to wait for the rerender for the element to be displayed on the screen. We can do this by changing the first check to the asynchronous
findByText():
-const {queryByText, debug} = render(<WidgetContainer />);
+const {findByText, queryByText, debug} = render(<WidgetContainer />);
debug();
-expect(queryByText('Widget 1')).not.toBeNull();
-expect(queryByText('Widget 2')).not.toBeNull();
+await findByText('Widget 1');
+expect(queryByText('Widget 2')).not.toBeNull();
Why do we only change the first check to
findByText, leaving the second as
queryByText? This is because React will give us a warning if we
await a
findByText when it's available right away. As soon as "Widget 1" is visible, "Widget 2" will also be visible, so we can use a normal
expect(queryByText(…)).not.toBeNull() to check for it.
To use
await we also need to change the test function to be an
async function:
describe('WidgetContainer', () => {
- it('loads widgets upon mount', () => {
+ it('loads widgets upon mount', async () => {
api.get.mockResolvedValue({
Now the tests passes.
Now we can remove debug and log statements to keep our test output clean.
+const {findByText, queryByText} = render(<WidgetContainer />);
-const {findByText, queryByText, debug} = render(<WidgetContainer />);
-
-debug();
await findByText('Widget 1'); | https://reactnativetesting.io/component/effects-and-external-services/ | CC-MAIN-2022-21 | refinedweb | 878 | 50.33 |
Having problems reading info from a data file into an array of structures. The first time through, it gets the origin state and destin city correct, but all the other information is wrong. The origin City is storing the first three letters of the origin city, but also the origin state. (ie BalMD instead of Baltimore).
Also, I'm not sure waht the "200" means in fgets. I've toyed around with that ranging anywhere from 100 to 1000, but still get the same results.
Thanks.
Code:typedef struct leg { char originCity[20]; char originState[3]; char destinCity[20]; char destinState[3]; int nights; int miles; }LEG; struct leg *GetLegInfo (int numLegs, FILE *ifp, LEG *legs) { int i; for (i = 0; i < numLegs; i++) { /* fill arrary of legs */ fgets (legs[i].originCity, 200, ifp); fgets (legs[i].originState, 200, ifp); fgets (legs[i].destinCity, 200, ifp); fgets (legs[i].destinState, 200, ifp); fscanf (ifp, "%d", &legs[i].nights); fscanf (ifp, "%d", &legs[i].miles); } fclose (ifp); return (legs); } | http://cboard.cprogramming.com/c-programming/48029-problem-fgets.html | CC-MAIN-2014-42 | refinedweb | 168 | 76.72 |
Using an IO pin interrupt + Alarm
So I'm building a device using the LoPy + Pytrack where I need to have an Alarm set where every n seconds some readings are taken from sensors and sent over LoRa to my gateway, but I also need to have a physical interrupt which will send the same data as well, but it needs to be able to interrupt the Alarm if it's running (this is the main issue). Is there a way where I can set an Alarm, but allow an IO pin interrupt to override the alarm and do its thing, then once it's finished continue the alarm. Would the best method be to set a function which will return the alarm object, and when an interrupt comes up I cancel the alarm, run what I need to, then setup the alarm again?
This is what I have currently, I still have to actually test if this will work, but this does NOT implement cancelling or reinitializing the alarm. Would it be necessary to do so?
I will basically connect IO 7 (Pin 10) to IO 2 (PWR_EN) and set the pull up resistor according to this pin out
Something like this...
def get_coords(gps): print(gps.coordinates(debug=True)) print("Alarm func") def get_coords_pin(gps): print(gps.coordinates(debug=True)) print("Interrupt func") def setup_timer(func_to_run, arg, time_interval, periodic=False): return Timer.Alarm(func_to_run, s=time_interval, arg=arg, periodic=periodic) node = setup_node() py = Pytrack() gps = gps_setup.setup_gps() pin = Pin('P10', mode=Pin.IN, pull=Pin.PULL_UP) pin.callback(Pin.IRQ_FALLING, get_coords_pin, arg=gps) try: timer = setup_timer(get_coords, gps, 5, True) except Exception as e: print(e) print("Setup complete!")
I basically have it so that a pin is set as input and a callback to function to read the coordinates is run whenever the pin reads a falling edge. Then a Timer object is set based on the input to the setup and is set to run every 5 seconds periodically. Would I run into any hangups in having it this way? I feel like if the timer function is running (it can take a couple seconds to finish) and the button is pressed while the timer is callback runs, it will block the pin callback. As I said I have not tested this yes, so I will be doing so tonight. Just figured I'd ask
@gregcope Wondering if it might be useful to simply start a thread for everything. If I spawn an alarm in one thread, an interrupt in another and so on, I may be able to avoid the issue of blocking.
The major issue is that since some of the functions I need to run can take 20 seconds or so, I don't want one to block another from running and have to wait a whole other interval for the respective functions to be run. Especially since there are multiple interrupts and one or two alarms.
So would simply running a function that spawns an alarm within a separate thread make it so that the alarm itself is contained in that thread indefinitely?
@kbman99 i would suggest running your code in a loop, and within the interrupt handler just set a flag, that you main loop checks. Otherwise if i understand you right repeated hits on the switch could have it just going in circles. Also look into debouncing. | https://forum.pycom.io/topic/2914/using-an-io-pin-interrupt-alarm/1 | CC-MAIN-2019-13 | refinedweb | 567 | 58.42 |
Snake on the BBC micro:bit
The BBC, in their first digital literacy project since the 1980's, recently gave Year 7 students (aged 11-12) in the UK a tiny single-board computer called the micro:bit. It has a 5x5 grid of LEDs, two buttons, plenty of sensors, and not much memory (16K) or processing power (16 MHz). Thankfully, that's still plenty enough computer to have some fun with.
Playing snake on the micro:bit To me, the 5x5 LEDs looked like they might display a small game of Snake. All I had to do was figure out the input. I originally used the accelerometer to control the snake by tilting the device, but it lacked the speed and precision of a good Snake game. What I really wanted was a way to move the snake in four directions using the two A/B buttons. If you know binary, you'll know that two on/off values gives us four possible combinations. Of course, one of the combinations is off/off meaning no buttons are pressed. For this case, I decided it was most natural to have the snake constantly move 'down' until a button is pressed: A goes left, B right, and A+B up.
Controls
Video
Despite a slightly odd control layout and tiny display, I found Snake on the micro:bit to be surprisingly playable. As proof, here's a video of me playing a full game. It's converted from a GIF so the timing looks a bit odd. In reality the movement is nice and smooth.
Source code
The game clocks in at around 120 lines of Python. To read, I suggest you start with the 'main game loop' at the end of the file. [cceN_cpp theme="dawn"]
from microbit import * from random import randrange class Snake(): from microbit import * from random import randrange class Snake(): def __init__(self): self.length = 2 self.direction = "down" self.head = (2, 2) self.tail = [] def move(self): # extend tail self.tail.append(self.head) # check snake size if len(self.tail) > self.length - 1: self.tail = self.tail[-(self.length - 1):] if self.direction == "left": self.head = ((self.head[0] - 1) % 5, self.head[1]) elif self.direction == "right": self.head = ((self.head[0] + 1) % 5, self.head[1]) elif self.direction == "up": self.head = (self.head[0], (self.head[1] - 1) % 5) elif self.direction == "down": self.head = (self.head[0], (self.head[1] + 1) % 5) def grow(self): self.length += 1 def collides_with(self, position): return position == self.head or position in self.tail def draw(self): # draw head display.set_pixel(self.head[0], self.head[1], 9) # draw tail brightness = 8 for dot in reversed(self.tail): display.set_pixel(dot[0], dot[1], brightness) brightness = max(brightness - 1, 5) class Fruit(): def __init__(self): # place in a random position on the screen self.position = (randrange(0, 5), randrange(0, 5)) def draw(self): display.set_pixel(self.position[0], self.position[1], 9) class Game(): def __init__(self): self.player = Snake() self.place_fruit() def place_fruit(self): while True: self.fruit = Fruit() # check it's in a free space on the screen if not self.player.collides_with(self.fruit.position): break def handle_input(self): # change direction? (no reversing) if button_a.is_pressed() and button_b.is_pressed(): if self.player.direction != "down": self.player.direction = "up" elif button_a.is_pressed(): if self.player.direction != "right": self.player.direction = "left" elif button_b.is_pressed(): if self.player.direction != "left": self.player.direction = "right" else: if self.player.direction != "up": self.player.direction = "down" def update(self): # move snake self.player.move() # game over? if self.player.head in self.player.tail: self.game_over() # nom nom nom elif self.player.head == self.fruit.position: self.player.grow() # space for more fruit? if self.player.length < 5 * 5: self.place_fruit() else: self.game_over() def score(self): return self.player.length - 2 def game_over(self): display.scroll("Score: %s" % self.score()) reset() def draw(self): display.clear() self.player.draw() self.fruit.draw() game = Game() # main game loop while True: game.handle_input() game.update() game.draw() sleep(500) [/cceN_cpp]
Installation
You can use the Mu editor to flash the above code to your micro:bit. This article is written by Mr. Caolan McMahon. | https://www.elecfreaks.com/blog/post/snake-on-the-bbc-microbit.html | CC-MAIN-2022-27 | refinedweb | 710 | 63.05 |
Provided by: libcanna1g-dev_3.7p3-6.5_amd64
NAME
RkMountDic - mount a dictionary in the dictionary list
SYNOPSIS
#include <canna/RK.h> int RkMountDic(cxnum, dicname, mode) int cxnum; char *dicname; int mode;
DESCRIPTION
RkMountDic mounts a dictionary in the dictionary list. The dictionary name is got with RkGetDicList(3). RkMountDic appends the named dictionary to the dictionary list. The dictionary thus mounted can be used from the next run of kana-kanji conversion. mode is meaningless at this moment. The dictionary to be mounted must not have already been mounted in the present context.
RETURN VALUE
This function returns 0 if successful; otherwise, it returns -1.
SEE ALSO
RkUnmountDic(3) RKMOUNTDIC(3) | http://manpages.ubuntu.com/manpages/precise/man3/RkMountDic.3.html | CC-MAIN-2019-43 | refinedweb | 112 | 52.76 |
Finance
Have a Finance Question? Ask a Financial Expert Online.
Dear Friend,
The internal rate of return (IRR) and net present value (NPV) are the two approaches which give similar results very often. But, in many projects it is observed that the internal rate of return is a less effective way of discounting the cash flows than net present value. The investment is measured and evaluated through internal rate of return by just discount rate only.
In many of the situations it is observed that IRR will have problems as it depends only on the discount rates. If two projects are evaluated simultaneously with same discount rates, similar income flow, same time period to check the profits and so on; it is easy to evaluate the project using IRR. But, discount rates usually differ over the period. For example, the rate of return varied from one percent to 12 percent over the range of around 20 years. So, here the discount rate is varying definitely.
IRR is also not effective for the projects which show often both positive and negative cash flows. If the investors have to look after the situation and reinvest for every two to three years, it will be ideally not good to evaluate the project using IRR. In case the project is made to run with changing discount rates then the method of evaluation called NPV will be better than IRR.
Looking to the above described reasons, comparing the projects' NPV is better than comparing their IRRs.
I hope this helps...
Warm Regards, | http://www.justanswer.com/finance/4k099-suppose-company-different-capital-budgeting-projects.html | CC-MAIN-2016-36 | refinedweb | 257 | 62.58 |
C++, legacy and future
Yes, I do like C++ for being C++, an expert language close to the metal. But there are some things in C++ that should be seriously overhauled, if C++ does not want to pass the field over to C# and friends.
Frontend changes
- Redundant keywords, basically anything where there are multiple ways to express the same thing semantically. struct or class means the same? If so, get rid of one, or add back some meaning, like struct may contain only an empty constructor and no functions.
- Variable argument functions, i.e. the ... . This is basically totally broken. I'd rather have something like
printf (const char* string, void* params = ...)which would mean "convert anything that comes to void*" and let the user cast it. If you specify something that is not void*, the compiler would check the type. This would also allow things like
vector<Type>::vector(Type* params = ...)where you do get compile time type checking. Anyway, the current ... construct is dangerous and needs some overhaul.
- A way to enforce variable initialisations. The default being "do not initialise at all" is dangerous, as the hundreds of bugs constantly remind us. If the programmer really does not want to initialise some variable, he should make it clear, like
int variable = undefined;in all other cases, he should be forced to initialise to some sane value. I know, most compilers do warn, but anyway, this should be enforced on the language level.
- Lambda functions. Seriously,
std::for_eachwithout lambdas is no fun, and this kind of stuff is just getting more and more important as it is extremely easy to make this run in parallel. There are even two proposals in the pipeline but it seems they won't make it into C++0x.
- Automatic type deduction --
for (var it = coolMap.begin (), end = coolMap.end ();is so much nicer than
for (std::map<std::string, std::pair<std::vector<std::string>, int> > >::const_iterator. At least, this seems to be "in" for C++0x
- Getting rid of legacy features. Things like
static void foo ()should be done with
namespace { void foo (); }, there is no need to support the legacy stuff if someone is writing against a C++0x compiler. Let's call this the strict mode or something like that, but stop C++ compilers from parsing 20 years old deprecated stuff. Same for
sstreamand
strstream, issue an error if someone is using the deprecated stuff using a C++0x compiler and refuse to compile it.
Runtime changes
- An usable RTTI system. C++'s RTTI system is totally rudimentary, and is not practically useful. Just take a look at Qt, MFC, Unreal Engine, wxWidgets -- all of them roll their own RTTI instead of using the C++ RTTI system.
- Extended memory allocation functions. New and delete is fine, but I'd like to have the ability to redirect all new calls to my custom allocator (easily! Something like
std::set_new_handler ()). Same for array new. Same goes for allocation hints, hinting that some memory is going to be permanent is an important clue for the allocator, as it might be put into a special heap.
Language changes
- Arrays should be of type
std::arrayby default, that is, if I write
char [6]it should be actually the same as
std::array<char, 6>, so I can use .begin () etc. on it. This is just a consistency thing, but anyway, it surely wouldn't hurt.
- An enumerable concept is needed. This goes hand in hand with concepts and arrays being of an array type, as one can easily loop over them using a for each keyword. Boost already has a FOR_EACH macro, but language level support is needed here so people can easily use the standard containers.
Library changes
- Library overhaul with performance in mind. Take a look at EASTL or something like this, the default C++ STL implementations are sometime just crap. This is getting better, but the standard should also enforce some performance constraints, like "empty containers must not allocate memory".
- The new style iterators from Boost should be the default, they are easy to use -- more people will be willing to write an iterator if it is easy.
- Proper string handling. No, std::string, std::wstring etc. is not proper string handling in the 21st century. The default should be probably UTF-8 based, with a working system to convert between various encodings -- like ICU.
- Proper stream handling. Operating systems write data byte-oriented, not char ... This is something that makes me wonder all the time, why the heck are the iostreams char oriented? I'd much rather have some basic
input_streamwhich can read bytes, same for output, and layer the functionality over this, i.e.
input_stream::ptr is (new file_input_stream ("filename")); char_stream<char> cs (is);Makes many things much easier, and looking at Java and C#, this seems to be working in practice, too.
- Thread creation, synchronisation. There is some thread stuff going into C++0x, but it is looking half-serious only. For example, no way to specify priorities as it is not really portable -- heck, something like priority_hint would have done it, and I wonder how someone is supposed to implement a background thread without priorities on Windows. Ok, an embedded platform might not have priorities but this should not cripple everyone.
There are changes coming that go into the right direction (atomics are
coming, regular expressions, smart pointers -- but not smart arrays,
uah, and finally hash maps, a working bind is coming, function objects),
but the pace is too slow to remain competitive. C++ was successful in
the past because it used to be extremely backwards compatible with C,
but this is no longer the case with C99 and C++ going slightly different
routes, moreover, most people are stuck with either C++ or C, and the
interop part of C++ via C (i.e., unmangled names via
extern "C") is
good and does not need to be changed at all. The main problem for C++ is
the lack of a driving force that can push a standard -- changing
something in this language is difficult, and it really needs a cleanup
so the compilers get maintainable and robust and the language itself
becomes easier to use. Even 5 years after C++03, there is still just one
compiler that supports C++, and that is the EDG
frontend. Seriously,
if a language gets that complicated that it becomes near impossible to
write a compiler that supports all of it, something is going really
really wrong.
Sources, references, etc.
- Lambda functions: Proposal 1, proposal 2, C# 3.0 lambda functions
- STL performance: EA STL
- C++ 0x wish list
- C++ concepts
- Unicode: The ICU project
- RTTI implementations: wxWidgets, Qt, MFC | https://anteru.net/blog/2008/03/11/216/index.html | CC-MAIN-2017-43 | refinedweb | 1,120 | 63.39 |
Java.io.PipedReader.read() Method
Description
The java.io.PipedReader.read(char[] cbuf, int off, int len) method reads up to len characters of data from this piped stream into an array of characters. Less than len characters will be read if the end of the data stream is reached or if len exceeds the pipe's buffer size. This method blocks until at least one character of input is available.
Declaration
Following is the declaration for java.io.PipedReader.read() method
public int read(char[] cbuf, int off, int len)
Parameters
cbuf -- the buffer into which the data is read.
off -- the start offset of the data.
len -- the maximum number of characters read.
Return Value
This method returns the total number of characters read into the buffer, or -1 if there is no more data because the end of the stream has been reached.
Exception
IOException -- if an I/O error occurs.
Example
The following example shows the usage of java.io.PipedReader.read() method.
package com.tutorialspoint; import java.io.*; public class PipedReaderDemo { public static void main(String[] args) { // create a new Piped writer and reader PipedWriter writer = new PipedWriter(); PipedReader reader = new PipedReader(); try { // connect the reader and the writer reader.connect(writer); // write something writer.write(70); writer.write(71); // read into a char array char[] b = new char[2]; reader.read(b, 0, 2); // print the char array for (int i = 0; i < 2; i++) { System.out.println("" + b[i]); } } catch (IOException ex) { ex.printStackTrace(); } } }
Let us compile and run the above program, this will produce the following result:
F G | http://www.tutorialspoint.com/java/io/pipedreader_read_char.htm | CC-MAIN-2015-06 | refinedweb | 267 | 66.64 |
(prototype) Introduction to Named Tensors in PyTorch¶
Author: Richard Zou
Named Tensors aim to make tensors easier to use by allowing users to associate explicit names”.
This tutorial is intended as a guide to the functionality that will be included with the 1.3 launch. By the end of it, you will be able to:
- Create Tensors with named dimensions, as well as remove or rename those dimensions
- Understand the basics of how operations propagate dimension names
- See how naming dimensions enables clearer code in two key areas:
- Broadcasting operations
- Flattening and unflattening dimensions
Finally, we’ll put this into practice by writing a multi-head attention module using named tensors.
Named tensors in PyTorch are inspired by and done in collaboration with Sasha Rush. Sasha proposed the original idea and proof of concept in his January 2019 blog post.
Basics: named dimensions¶
PyTorch now allows Tensors to have named dimensions; factory functions take a new names argument that associates a name with each dimension. This works with most factory functions, such as
- tensor
- empty
- ones
- zeros
- randn
- rand
Here we construct a tensor with names:
import torch imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W')) print(imgs.names)
Out:
('N', 'C', 'H', 'W')
Unlike in
the original named tensors blogpost,
named dimensions are ordered:
tensor.names[i] is the name of the
i th
dimension of
tensor.
There are two ways to rename a
Tensor’s dimensions:
# Method #1: set the .names attribute (this changes name in-place) imgs.names = ['batch', 'channel', 'width', 'height'] print(imgs.names) # Method #2: specify new names (this changes names out-of-place) imgs = imgs.rename(channel='C', width='W', height='H') print(imgs.names)
Out:
('batch', 'channel', 'width', 'height') ('batch', 'C', 'W', 'H')
The preferred way to remove names is to call
tensor.rename(None):
imgs = imgs.rename(None) print(imgs.names)
Out:
(None, None, None, None)
Unnamed tensors (tensors with no named dimensions) still work as
normal and do not have names in their
repr.
unnamed = torch.randn(2, 1, 3) print(unnamed) print(unnamed.names)
Out:
tensor([[[-0.6445, 0.0388, -0.9435]], [[ 0.0120, -1.4350, 0.3005]]]) (None, None, None)
Named tensors do not require that all dimensions be named.
imgs = torch.randn(3, 1, 1, 2, names=('N', None, None, None)) print(imgs.names)
Out:
('N', None, None, None)
Because named tensors can coexist with unnamed tensors, we need a nice way to
write named tensor-aware code that works with both named and unnamed tensors.
Use
tensor.refine_names(*names) to refine dimensions and lift unnamed
dims to named dims. Refining a dimension is defined as a “rename” with the
following constraints:
- A
Nonedim can be refined to have any name
- A named dim can only be refined to have the same name.
imgs = torch.randn(3, 1, 1, 2) named_imgs = imgs.refine_names('N', 'C', 'H', 'W') print(named_imgs.names) # Refine the last two dims to 'H' and 'W'. In Python 2, use the string '...' # instead of ... named_imgs = imgs.refine_names(..., 'H', 'W') print(named_imgs.names) def catch_error(fn): try: fn() assert False except RuntimeError as err: err = str(err) if len(err) > 180: err = err[:180] + "..." print(err) named_imgs = imgs.refine_names('N', 'C', 'H', 'W') # Tried to refine an existing name to a different name catch_error(lambda: named_imgs.refine_names('N', 'C', 'H', 'width'))
Out:
('N', 'C', 'H', 'W') (None, None, 'H', 'W') refine_names: cannot coerce Tensor['N', 'C', 'H', 'W'] to Tensor['N', 'C', 'H', 'width'] because 'W' is different from 'width' at index 3
Most simple operations propagate names. The ultimate goal for named tensors
is for all operations to propagate names in a reasonable, intuitive manner.
Support for many common operations has been added at the time of the 1.3
release; here, for example, is
.abs():
print(named_imgs.abs().names)
Out:
('N', 'C', 'H', 'W')
Accessors and Reduction¶
One can use dimension names to refer to dimensions instead of the positional
dimension. These operations also propagate names. Indexing (basic and
advanced) has not been implemented yet but is on the roadmap. Using the
named_imgs tensor from above, we can do:
output = named_imgs.sum('C') # Perform a sum over the channel dimension print(output.names) img0 = named_imgs.select('N', 0) # get one image print(img0.names)
Out:
('N', 'H', 'W') ('C', 'H', 'W')
Name inference¶
Names are propagated on operations in a two step process called name inference:
- Check names: an operator may perform automatic checks at runtime that check that certain dimension names must match.
- Propagate names: name inference propagates output names to output tensors.
Let’s go through the very small example of adding 2 one-dim tensors with no broadcasting.
x = torch.randn(3, names=('X',)) y = torch.randn(3) z = torch.randn(3, names=('Z',))
Check names: first, we will check whether the names of these two tensors
match. Two names match if and only if they are equal (string equality) or
at least one is
None (
None is essentially a special wildcard name).
The only one of these three that will error, therefore, is
x + z:
catch_error(lambda: x + z)
Out:
Error when attempting to broadcast dims ['X'] and dims ['Z']: dim 'X' and dim 'Z' are at the same position from the right but do not match.
Propagate names: unify the two names by returning the most refined name
of the two. With
x + y,
X is more refined than
None.
print((x + y).names)
Out:
('X',)
Most name inference rules are straightforward but some of them can have unexpected semantics. Let’s go through a couple you’re likely to encounter: broadcasting and matrix multiply.
Broadcasting¶
Named tensors do not change broadcasting behavior; they still broadcast by position. However, when checking two dimensions for if they can be broadcasted, PyTorch also checks that the names of those dimensions match.
This results in named tensors preventing unintended alignment during
operations that broadcast. In the below example, we apply a
per_batch_scale to
imgs.
imgs = torch.randn(2, 2, 2, 2, names=('N', 'C', 'H', 'W')) per_batch_scale = torch.rand(2, names=('N',)) catch_error(lambda: imgs * per_batch_scale)
Out:
Error when attempting to broadcast dims ['N', 'C', 'H', 'W'] and dims ['N']: dim 'W' and dim 'N' are at the same position from the right but do not match.
Without
names, the
per_batch_scale tensor is aligned with the last
dimension of
imgs, which is not what we intended. We really wanted to
perform the operation by aligning
per_batch_scale with the batch
dimension of
imgs.
See the new “explicit broadcasting by names” functionality for how to
align tensors by name, covered below.
Matrix multiply¶
torch.mm(A, B) performs a dot product between the second dim of
A
and the first dim of
B, returning a tensor with the first dim of
A
and the second dim of
B. (other matmul functions, such as
torch.matmul,
torch.mv, and
torch.dot, behave similarly).
markov_states = torch.randn(128, 5, names=('batch', 'D')) transition_matrix = torch.randn(5, 5, names=('in', 'out')) # Apply one transition new_state = markov_states @ transition_matrix print(new_state.names)
Out:
('batch', 'out')
As you can see, matrix multiply does not check if the contracted dimensions have the same name.
Next, we’ll cover two new behaviors that named tensors enable: explicit broadcasting by names and flattening and unflattening dimensions by names
New behavior: Explicit broadcasting by names¶
One of the main complaints about working with multiple dimensions is the need
to
unsqueeze “dummy” dimensions so that operations can occur.
For example, in our per-batch-scale example before, with unnamed tensors
we’d do the following:
imgs = torch.randn(2, 2, 2, 2) # N, C, H, W per_batch_scale = torch.rand(2) # N correct_result = imgs * per_batch_scale.view(2, 1, 1, 1) # N, C, H, W incorrect_result = imgs * per_batch_scale.expand_as(imgs) assert not torch.allclose(correct_result, incorrect_result)
We can make these operations safer (and easily agnostic to the number of
dimensions) by using names. We provide a new
tensor.align_as(other)
operation that permutes the dimensions of tensor to match the order specified
in
other.names, adding one-sized dimensions where appropriate
(
tensor.align_to(*names) works as well):
imgs = imgs.refine_names('N', 'C', 'H', 'W') per_batch_scale = per_batch_scale.refine_names('N') named_result = imgs * per_batch_scale.align_as(imgs) # note: named tensors do not yet work with allclose assert torch.allclose(named_result.rename(None), correct_result)
New behavior: Flattening and unflattening dimensions by names¶
One common operation is flattening and unflattening dimensions. Right now,
users perform this using either
view,
reshape, or
flatten; use
cases include flattening batch dimensions to send tensors into operators that
must take inputs with a certain number of dimensions (i.e., conv2d takes 4D
input).
To make these operation more semantically meaningful than view or reshape, we
introduce a new
tensor.unflatten(dim, namedshape) method and update
flatten to work with names:
tensor.flatten(dims, new_dim).
flatten can only flatten adjacent dimensions but also works on
non-contiguous dims. One must pass into
unflatten a named shape,
which is a list of
(dim, size) tuples, to specify how to unflatten the
dim. It is possible to save the sizes during a
flatten for
unflatten
but we do not yet do that.
imgs = imgs.flatten(['C', 'H', 'W'], 'features') print(imgs.names) imgs = imgs.unflatten('features', (('C', 2), ('H', 2), ('W', 2))) print(imgs.names)
Out:
('N', 'features') ('N', 'C', 'H', 'W')
Autograd support¶
Autograd currently ignores names on all tensors and just treats them like regular tensors. Gradient computation is correct but we lose the safety that names give us. It is on the roadmap to introduce handling of names to autograd.
x = torch.randn(3, names=('D',)) weight = torch.randn(3, names=('D',), requires_grad=True) loss = (x - weight).abs() grad_loss = torch.randn(3) loss.backward(grad_loss) correct_grad = weight.grad.clone() print(correct_grad) # Unnamed for now. Will be named in the future weight.grad.zero_() grad_loss = grad_loss.refine_names('C') loss = (x - weight).abs() # Ideally we'd check that the names of loss and grad_loss match, but we don't # yet loss.backward(grad_loss) print(weight.grad) # still unnamed assert torch.allclose(weight.grad, correct_grad)
Out:
tensor([ 0.3232, 0.3847, -0.2699]) tensor([ 0.3232, 0.3847, -0.2699])
Other supported (and unsupported) features¶
See here for a detailed breakdown of what is supported with the 1.3 release.
In particular, we want to call out three important features that are not currently supported:
- Saving or loading named tensors via
torch.saveor
torch.load
- Multi-processing via
torch.multiprocessing
- JIT support; for example, the following will error
imgs_named = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W')) @torch.jit.script def fn(x): return x catch_error(lambda: fn(imgs_named))
Out:
NYI: Named tensors are currently unsupported in TorchScript. As a workaround please drop names via `tensor = tensor.rename(None)`.
As a workaround, please drop names via
tensor = tensor.rename(None)
before using anything that does not yet support named tensors.
Longer example: Multi-head attention¶
Now we’ll go through a complete example of implementing a common
PyTorch
nn.Module: multi-head attention. We assume the reader is already
familiar with multi-head attention; for a refresher, check out
this explanation
or
this explanation.
We adapt the implementation of multi-head attention from ParlAI; specifically here. Read through the code at that example; then, compare with the code below, noting that there are four places labeled (I), (II), (III), and (IV), where using named tensors enables more readable code; we will dive into each of these after the code block.
import torch.nn as nn import torch.nn.functional as F import math class MultiHeadAttention(nn.Module): def __init__(self, n_heads, dim, dropout=0): super(MultiHeadAttention, self).__init__() self.n_heads = n_heads self.dim = dim self.attn_dropout = nn.Dropout(p=dropout) self.q_lin = nn.Linear(dim, dim) self.k_lin = nn.Linear(dim, dim) self.v_lin = nn.Linear(dim, dim) nn.init.xavier_normal_(self.q_lin.weight) nn.init.xavier_normal_(self.k_lin.weight) nn.init.xavier_normal_(self.v_lin.weight) self.out_lin = nn.Linear(dim, dim) nn.init.xavier_normal_(self.out_lin.weight) def forward(self, query, key=None, value=None, mask=None): # (I) query = query.refine_names(..., 'T', 'D') self_attn = key is None and value is None if self_attn: mask = mask.refine_names(..., 'T') else: mask = mask.refine_names(..., 'T', 'T_key') # enc attn dim = query.size('D') assert dim == self.dim, \ f'Dimensions do not match: {dim} query vs {self.dim} configured' assert mask is not None, 'Mask is None, please specify a mask' n_heads = self.n_heads dim_per_head = dim // n_heads scale = math.sqrt(dim_per_head) # (II) def prepare_head(tensor): tensor = tensor.refine_names(..., 'T', 'D') return (tensor.unflatten('D', [('H', n_heads), ('D_head', dim_per_head)]) .align_to(..., 'H', 'T', 'D_head')) assert value is None if self_attn: key = value = query elif value is None: # key and value are the same, but query differs key = key.refine_names(..., 'T', 'D') value = key dim = key.size('D') # Distinguish between query_len (T) and key_len (T_key) dims. k = prepare_head(self.k_lin(key)).rename(T='T_key') v = prepare_head(self.v_lin(value)).rename(T='T_key') q = prepare_head(self.q_lin(query)) dot_prod = q.div_(scale).matmul(k.align_to(..., 'D_head', 'T_key')) dot_prod.refine_names(..., 'H', 'T', 'T_key') # just a check # (III) attn_mask = (mask == 0).align_as(dot_prod) dot_prod.masked_fill_(attn_mask, -float(1e20)) attn_weights = self.attn_dropout(F.softmax(dot_prod / scale, dim='T_key')) # (IV) attentioned = ( attn_weights.matmul(v).refine_names(..., 'H', 'T', 'D_head') .align_to(..., 'T', 'H', 'D_head') .flatten(['H', 'D_head'], 'D') ) return self.out_lin(attentioned).refine_names(..., 'T', 'D')
(I) Refining the input tensor dims
def forward(self, query, key=None, value=None, mask=None): # (I) query = query.refine_names(..., 'T', 'D')
The
query = query.refine_names(..., 'T', 'D') serves as enforcable documentation
and lifts input dimensions to being named. It checks that the last two dimensions
can be refined to
['T', 'D'], preventing potentially silent or confusing size
mismatch errors later down the line.
(II) Manipulating dimensions in prepare_head
# (II) def prepare_head(tensor): tensor = tensor.refine_names(..., 'T', 'D') return (tensor.unflatten('D', [('H', n_heads), ('D_head', dim_per_head)]) .align_to(..., 'H', 'T', 'D_head'))
The first thing to note is how the code clearly states the input and
output dimensions: the input tensor must end with the
T and
D dims
and the output tensor ends in
H,
T, and
D_head dims.
The second thing to note is how clearly the code describes what is going on.
prepare_head takes the key, query, and value and splits the embedding dim into
multiple heads, finally rearranging the dim order to be
[..., 'H', 'T', 'D_head'].
ParlAI implements
prepare_head as the following, using
view and
transpose
operations:
def prepare_head(tensor): # input is [batch_size, seq_len, n_heads * dim_per_head] # output is [batch_size * n_heads, seq_len, dim_per_head] batch_size, seq_len, _ = tensor.size() tensor = tensor.view(batch_size, tensor.size(1), n_heads, dim_per_head) tensor = ( tensor.transpose(1, 2) .contiguous() .view(batch_size * n_heads, seq_len, dim_per_head) ) return tensor
Our named tensor variant uses ops that, though more verbose, have more
semantic meaning than
view and
transpose and includes enforcable
documentation in the form of names.
(III) Explicit broadcasting by names
def ignore(): # (III) attn_mask = (mask == 0).align_as(dot_prod) dot_prod.masked_fill_(attn_mask, -float(1e20))
mask usually has dims
[N, T] (in the case of self attention) or
[N, T, T_key] (in the case of encoder attention) while
dot_prod
has dims
[N, H, T, T_key]. To make
mask broadcast correctly with
dot_prod, we would usually unsqueeze dims
1 and
-1 in the case
of self attention or
unsqueeze dim
1 in the case of encoder
attention. Using named tensors, we simply align
attn_mask to
dot_prod
using
align_as and stop worrying about where to
unsqueeze dims.
(IV) More dimension manipulation using align_to and flatten
def ignore(): # (IV) attentioned = ( attn_weights.matmul(v).refine_names(..., 'H', 'T', 'D_head') .align_to(..., 'T', 'H', 'D_head') .flatten(['H', 'D_head'], 'D') )
Here, as in (II),
align_to and
flatten are more semantically
meaningful than
view and
transpose (despite being more verbose).
Running the example¶
n, t, d, h = 7, 5, 2 * 3, 3 query = torch.randn(n, t, d, names=('N', 'T', 'D')) mask = torch.ones(n, t, names=('N', 'T')) attn = MultiHeadAttention(h, d) output = attn(query, mask=mask) # works as expected! print(output.names)
Out:
('N', 'T', 'D')
The above works as expected. Furthermore, note that in the code we
did not mention the name of the batch dimension at all. In fact,
our
MultiHeadAttention module is agnostic to the existence of batch
dimensions.
query = torch.randn(t, d, names=('T', 'D')) mask = torch.ones(t, names=('T',)) output = attn(query, mask=mask) print(output.names)
Out:
('T', 'D')
Conclusion¶
Thank you for reading! Named tensors are still very much in development; if you have feedback and/or suggestions for improvement, please let us know by creating an issue.
Total running time of the script: ( 0 minutes 0.072 seconds)
Gallery generated by Sphinx-Gallery | https://pytorch.org/tutorials/intermediate/named_tensor_tutorial.html | CC-MAIN-2021-39 | refinedweb | 2,776 | 58.99 |
Anatoliy,
> 1. We are able to create volume rendering sessions as shown in the videos
> posted on PyMOL website, but when we save a session as .pse file, and try
> to re-open it again, it does not actually render data until we press a 'Volume' button in the
> external GUI. The problem is that we need to render these images on our
> hyperwall production nodes, where mouse is not available. Is there any way
> to get PyMOL to automatically reproduce a volume rendering session
> from a .pse file ?
Use this terribly ugly code to initialize volumes without the Volume
Editor. We'll be sure to add this functionality to PyMOL soon:
python
import pymol
from pymol import cmd
from pmg_tk.skins.normal.ColorRampModel import ColorRamp
s = cmd.get_session()
c = None
obj_name = None
for x in range(1,len(s["names"])):
obj_name = s["names"][x][0]
if cmd.get_type(obj_name) == "object:volume":
c = s["names"][x][5][2][0][-1]
r = ColorRamp(360)
for x in range(len(c)/5):
r.addColor(int(c[x*5]),
(float(c[x*5+1]),float(c[x*5+2]),float(c[x*5+3]),float(c[x*5+4])))
ramp_colors = r.getRamp()
cmd.volume_color(obj_name, ramp_colors)
cmd.recolor()
python end
> 2. In ideal case, we would also like to run volume rendering from a .pml script.
> Could we possibly get a comprehensive list of all PyMOL commands and
> their usage instructions related to volume rendering?
> Specifically, we would like to have all (or as many as possible) volume
> rendering-related functions that are currently available through the
> external gui ('Volume' option) to be available via .pml scripts.
The main volume functions are:
volume_new -- create a new volume from a map just like isomesh.
volume_color -- assign colors to the volume data.
See the PyMOLWiki () or the Incentive PyMOL
documentation () for more help.
You _must_ have an openGL context for saving images of volumes at this
point. Volumes require shaders which requires GLEW which requires a
context to be initialized.
Cheers,
-- Jason
--
Jason Vertrees, PhD
PyMOL Product Manager
Schrödinger, LLC
(e) Jason.Vertrees@...
(o) +1 (603) 374-7120
Hi,
Just a request to be removed from the PyMOL mailing list, thanks.
Brennen | https://sourceforge.net/p/pymol/mailman/pymol-users/?viewmonth=201203&viewday=6 | CC-MAIN-2018-17 | refinedweb | 367 | 57.16 |
FileStream Class
Assembly: mscorlib (in mscorlib.dll)
Use the FileStream class to read from, write to, open, and close files on a file system, as well as to manipulate other file-related operating system handles including pipes, standard input, and standard output. You can specify read and write operations to be either synchronous or asynchronous. FileStream buffers input and output for better performance.
FileStream.
Although the synchronous methods Read and Write and the asynchronous methods BeginRead, BeginWrite, EndRead, and EndWrite can work in either synchronous or asynchronous mode, the mode affects the performance of these methods. FileStream defaults to opening files synchronously, but provides the FileStream(String,FileMode,FileAccess,FileShare,Int32,Boolean) constructor to open files asynchronously.
If a process terminates with part of a file locked or closes a file that has outstanding locks, the behavior is undefined.
Be sure to call the Dispose method on all FileStream objects, especially in environments with limited disk space. Performing IO operations can raise an exception if there is no disk space available and the Dispose method is not called before the FileStream is finalized.
For directory and other file operations, see the File, Directory, and Path classes. The File class is a utility class with static methods primarily for the creation of FileStream objects based on file paths and the standard input, standard output, and standard error devices. The MemoryStream class creates a stream from a byte array and functions similarly to a FileStream.
The following table lists examples of other typical or related assure()); //Split the output at every 10th character. if (Math.IEEERemainder(Convert.ToDouble(i), 10) == 0) { AddText(fs, "\r\n"); } } } /); } }
import System.*; import System.IO.*; import System.Text.*; class Test { public static void main(String[] args) { String path = "c:\\temp\\MyTest.txt"; // Delete the file if it exists. if (File.Exists(path)) { File.Delete(path); } //Create the file. { FileStream fs = File.Create(path); try {, System.Convert.ToString((char)i)); //Split the output at every 10th character. if (Math.IEEEremainder(Convert.ToDouble(i), 10) == 0) { AddText(fs, "\r\n"); } } } finally { fs.Dispose(); } } //Open the stream and read it back. { FileStream fs = File.OpenRead(path); try { ubyte b[] = new ubyte[1024]; UTF8Encoding temp = new UTF8Encoding(true); while (fs.Read(b, 0, b.length) > 0) { Console.WriteLine(temp.GetString(b)); } } finally { fs.Dispose(); } } } //main private static void AddText(FileStream fs, String value) { ubyte info[] = (new UTF8Encoding(true)).GetBytes(value); fs.Write(info, 0, info.length); } //AddText } //Test
The following example. | https://msdn.microsoft.com/en-us/library/y0bs3w9t(v=vs.80).aspx | CC-MAIN-2015-48 | refinedweb | 410 | 51.04 |
>Write a program that will allow randomized 0..100 numbers
> random_integer = (rand()%100)+1;
That gives you numbers from 1..100.
>Write a program that will allow randomized 0..100 numbers
> random_integer = (rand()%100)+1;
That gives you numbers from 1..100.
>Is it possible? if yes, how do I do it?
Try this.
#include <cstdlib>
#include <iostream>
#include <string>
int main()
{
std::string tag;
> char c;
Declaring c as an int will likely fix your problem. Also a better way to control the loop is:
while((c=fgetc(fp)) != EOF)
{
It depends on what function you use to write each data item or struct. If you use fprintf(), then text mode should be fine. If you use fwrite(), then you need to use binary mode.
This is C, not C++.
Yeah, having two names is probably just for convenience. It makes it easier to read code. It seems like they could have just had one function, but maybe coming up with a good name would be a...
I'd be very surprised if that's the case.
>I did not use the ntohl or htonl because I have to choose which one. This works both ways with the same routine
As long as you don't sometime in the future run your code on a different platform,...
>and it prints the correct value
Just be sure you assign the result to something back in process_cmd().
And you never assigned the return value to anything:
var = BYTESWAP2(ccmd);
Or you could simply print out the value of BYTESWAP2(ccmd).
You have two of your masks incorrect, this one ((ccmd&0x000000FF)<<24) and ((ccmd&0xFF000000)>>24). And for maximum portability, the input to BYTESWAP should be an unsigned int:
int...
>The problem is, what happens when you want to do arithmetic on the stuff that is being put in?
_getch() only inputs a single number per call, so if the user entered 123, you'd have to combine them...
>Why does this tell me i'm missing ")"'s?
>sizeof wxString
I think you need sizeof(wxString), only because wxString is a class, or you could also write sizeof billdat.Cost.
>p.s. Will it work?...
Use tabstop's idea, and <ENTER> will probably compare equal with either '\n' or '\r'.
>void main()
And it's a good habit to return an int from main, since void main() is wrong.
int main(void)
{
.
.
return 0;
}
>Can you please explain me why it wasn't k earlier...coz the pointer also would give the same size of the node.
node is defined as a pointer to struct nodetype:
typedef struct nodetype *node;
...
You might need to use the actual object:
node tmp = malloc(sizeof(*tmp));
>node tmp=(node)malloc(sizeof(node));
Make this:
node tmp = malloc(sizeof(*node));
You should also #include <stdlib.h> for malloc().
I'm not spotting any problem with your code. Try traversing the tree after every insert(), and let us know what it prints for the tree:
insert(&mytree, num);
intrav(mytree);
printf("\nEnter...
How much memory did you allocate for screenDataPnt (or whatever buffer that points to)?
>open it as fopen("rb") instead of ("r")
Correct.
>read the file in text (not binary) mode.
Wouldn't it be the opposite?
Also I'm assuming you set *array to point to resizedarray at some point. And MacGyver makes a good point about & in front of tmp for the scanf().
>Also, shouldn't I free the original array regardless and set it equal to resizedarray?
No, because resizedarray may actually point to the exact same memory.
I may be wrong about this, but I...
>do this mean I need to declare a new char ** and set the realloc equal to it
That's the preferred method, because in case realloc() returns NULL, you can still free the original before returning... | https://cboard.cprogramming.com/search.php?s=125e59e5fb5f669003dd70df551f6a43&searchid=5722188 | CC-MAIN-2020-34 | refinedweb | 641 | 83.76 |
A comparison between AkkaHTTP, Play, and Http4s
This blog post is based on a talk I presented at the first Scala meetup in Porto (Portugal). Standing in front of an audience of developers, my intent was to show how these tools tackle a simple Scala HTTP REST API. This blog post aims to explore which approach may better suit your needs and style.
What is the problem?
Of the many definitions one might find of a software developer, a common trait among them would be someone that solves problems. So we need a problem to solve. Let’s pick a common one: how to build an HTTP REST API in Scala.
trait Controller[R] {
def get(id: Id): R
def post(resource: JSON): R
def put(id: Id, resource: JSON): R
def delete(id: Id): R
}
trait Service[R] {
def create(resource: JSON): Future[Id]
def read(id: Id): Future[R]
def update(id: Id, resource: JSON): Future[Unit]
def delete(id: Id): Future[Unit]
}
trait Repository[R] {
def create(resource: R): Future[Id]
def read(id: Id): Future[R]
def update(id: Id, resource: R): Future[Unit]
def delete(id: Id): Future[Unit]
}
I picked a traditional architecture with a Controller to handle requests, a Service to handle business logic, and a Repository to handle data storage. I will only focus on the Controller part of our problem. Our Controller should handle the common HTTP requests and return HTTP responses, here represented by the type parameter R.
The first question is “why Scala?” The answer is because this is a blog post about Scala. I’m assuming the language was previously chosen and we are at the “what tools to use?” stage. Scala has many ways and styles. It can be more OO more FP or a hybrid. I will try to solve our problem with three different tools. I won’t compare performance, I just want to give you the notion of how these tools tackle our problem and how it feels to work with them. Hopefully, you will get some insight on which one better matches your needs and style.
AkkaHTTP
AkkaHTTP belongs to the Akka toolkit. It treats an HTTP server as a stream of HTTP requests that are transformed into HTTP responses. As such, it uses AkkaStreams to implement HTTP servers. What is AkkaStreams? It is the Akka implementation of Reactive Streams. What is a Reactive Stream? You can get more detailed information on that elsewhere, but in a very succinct way, it is a flow of data that is resilient to failure, i.e. it knows what to do when stuff happens without crashing, and it has backpressure, i.e. there is an attempt to control the flow of data when, for instance, a consumer of data is slower than a provider of data.
Akka implements Reactive Streams using the Actor Model. What is the Actor Model? Again, this isn’t the place to study it, but in a very succinct way, it is a concurrency model based on actors. An actor is a programming entity that can do just a few things:
- Receive messages in a mailbox.
- Send messages.
- Create other actors.
- Execute a behavior based on its state and messages received.
Setup Akka
The first step is to setup akka. We will need something like the following in the code:
implicit val system: ActorSystem = ActorSystem(“amazingActorSystem”)
implicit val materializer: ActorMaterializer = ActorMaterializer()
implicit val executionContext: ExecutionContext = system.dispatcher
Since we are dealing with the actor model, we will need an ActorSystem. This handles the actor stuff behind the scenes. Think of it as one Actor to rule them all. We need an ActorMaterializer. A stream in Akka is just an object that can be passed around as needed. To actually start the flow of data, the stream needs to be “materialized.” That is what the ActorMaterializer does. Finally we need an execution context to handle Futures. These three lines of code are the minimum setup we need to start our server. Additionally, placing an “application.conf” file in the right place in our project allows for tweaking other configurations like parallelism, timeouts, connection pools, and many other details.
Handle the Requests
We need to define how to handle the incoming requests, so let’s implement our Controller:
override def get(id: Id): Future[HttpResponse] =
ResourceService.read(id)
.map(_.name)
.map(resource => HttpResponse(StatusCodes.OK, entity =
HttpEntity(ContentTypes.
application/json, resource)))
override def post(resource: JSON): Future[HttpResponse] =
ResourceService.create(resource.toString)
.map(id =>
HttpResponse(StatusCodes.Created)
.withHeaders(RawHeader(“Location”, s”/$id“)))
override def put(id: Id, resource: JSON): Future[HttpResponse] =
ResourceService.update(id, resource.toString)
.map(_ => HttpResponse(StatusCodes.NoContent))
override def delete(id: Id): Future[HttpResponse] =
ResourceService.delete(id)
.map(_ => HttpResponse(StatusCodes.NoContent))
We define our type parameter R to be Future[HttpResponse]. We have Future because everything is asynchronous and HttpResponse is, surprisingly, the Akka representation of an HTTP response. In each method implementation, we call a service to execute some logic or to get some data, make the desired processing, and construct an HttpResponse. The constructor of this class allows the definition of the many aspects of a response like status code, content types, or headers. It uses implicit Mashallers. These are Akka serializers that send our Scala data structures through the wire. Mashallers for the common content types already exist, like raw string or JSON. Others can easily be implemented.
Now that we have a way of building responses, we need a way of directing incoming requests to the right place. AkkaHTTP has a powerful RoutingDSL. There are two main concepts in this DSL: Route and Directive. A Directive matches an HTTP request based on path, HTTP verb, body, and query and it processes those matched requests. A Route is a composition of Directives forming a road with many paths for an HTTP request to follow. It works like this:
val routes: Route =
path(“resource” / Segment) { id =>
get {
onComplete(get(id)){
case Success(response) => complete(response)
case Failure() => complete(StatusCodes.ServiceUnavailable) }
} ~ put {
entity(as[JsObject]) { json =>
onComplete(put(id, json.toString)) {
case Success(_) => complete(StatusCodes.NoContent)
case Failure(_) => complete(StatusCodes.ServiceUnavailable)
}
}
} ~ ???
}
This Route begins with the “path” Directive. If the HttpRequest has a path “resource” followed by some other string, there is a match and we enter the Directive with this string assigned to the “id” parameter. Next, we find the “get” Directive. Not surprisingly, if the HttpRequest is a GET, then we enter this Directive. The next one is “onComplete.” This Directive takes a Future. In this case it is the “get” method from our Controller called with the “id” parameter. According to the result of this Future, we send the appropriate response back with the “complete” method. What if our HttpRequest is a PUT not a GET? For this, we use the “~” operator. This just means “if the previous Directive does not match, try the next one.” With a PUT we would skip the “get” Directive and enter the “put” one. The following Directive is “entity.” In this case, this Directive tries to parse the request body as a JSON. If successful, we enter the Directive and the parsed JSON is assigned to the “json” parameter. We then complete it appropriately by calling our Controller as before. Akka allows parsing the contents of a request as any data structure we like. We just need to implement the right Unmarshaller, which is just Akka’s name for a deserializer. Similar to Marshallers, Unmashallers for the common content types already exist, like raw string or JSON. Others can easily be implemented.
Because I’m lazy and laziness is a requirement in FP, I left the other two methods of our Controller out of the Route, but check if you can do it yourself.
I’ve shown here just a handful of Directives, but AkkaHTTP comes with tens of them that handle pretty much everything related to HTTP requests, like path segments, query parameters, headers, authentication, cookies, logging, and more. If you can think of anything else you need, implementing new Directives is also possible.
Start the server
Finally, we just need to start the HTTP server. The HTTP class in AkkaHTTP has a few methods to start a server, like this one:
val serverBinding: Future[ServerBinding] = Http().bindAndHandle(routes, “localhost”, 8000)
You just need to provide the “routes” and define the host and port to listen to. The created ServerBinding can later be used to unbind the used port and you should then terminate the actor system.
As you will find repeated in the Akka documentation, AkkaHTTP is not a framework, it is a toolkit. This means they aim to provide an un-opinionated set of tools that we can use as we see fit. They don’t really care how we used them. Either way, AkkaHTTP, in my humble opinion, allows the pretty fast creation of an HTTP server with a considerable set of options to be tweaked to our heart’s desire. It has a powerful RoutingDSL that can easily process requests. Even though we did not need it to solve our problem, it also allows the creation of an HTTP client.
Play
Play has a single purpose, to build web applications using Model-View-controller architecture. As a framework, it has a default way to handle pretty much every aspect of a web application. If you don’t like Play’s opinions on something, you are always free to try your way.
Setup Play
Play follows the convention when it comes to configuration, so using the default configuration doesn’t require you to do anything. If you want to tweak stuff, placing an application.conf file in the resources folder with your configuration overrides is enough.
Taking Action
There are three important concepts in Play: Controller, Action, and Result. A Controller is a class we need to implement that has methods that return Actions. An Action just represents a function that takes an HTTP request and returns a Result. A Result is just what Play calls an HTTP response.
When we implement our Controller trait, we need to also implement a Play Controller.
abstract class ControllerImpl(cc: ControllerComponents, res: Resource)
(implicit ec: ExecutionContext)
extends AbstractController(cc)
with Controller[Future[Result]] {
override def get(id: Id): Future[Result] =
ResourceService(res.name).read(id).map(resource => Ok(resource.name))
override def post(resource: JSON): Future[Result] =
ResourceService(res.name).create(resource)
.map(id => Created(“”).withHeaders(“Location” ->s”/${res.name}/$id“))
override def put(id: Id, resource: JSON): Future[Result] =
ResourceService(res.name).update(id, resource).map(_ => NoContent)
override def delete(id: Id): Future[Result] =
ResourceService(res.name).delete(id).map(_ => NoContent)
def getAction(id: Id): Action[AnyContent] = Action.async(get(id))
def postAction: Action[JsValue] = Action.async(parse.json){
request => post(request.body)
}
def putAction(id: Id): Action[JsValue] = Action.async(parse.json){
request => put(id, request.body.toString)
}
def deleteAction(id: Id): Action[AnyContent] = Action.async(delete(id))
}
Let’s look at the constructor of our class. It implements our Controller with the type parameter R being a Future of Result. This is quite similar to AkkaHTTP, it just has different names. Our class also implements Play’s AbstractController, which takes a ControllerComponents instance. This is just something Play uses to handle requests and there is a default one that Play injects into our classes without any effort on our part except for the use of an @Inject annotation. It is also possible to implement our own ControllerComponents if we need custom request processing. Finally, the Resource parameter just identifies the type of resource we want to handle.
To implement our method, we follow the same approach as for AkkaHTTP. We asynchronously get some data from a service and process them into a Result. There is a collection of methods that can create these, like “Ok” and “NoContent” shown above. As with AkkaHTTP, there are implicit serializers that convert our Scala data structures into something that can go through the wire. Here they are called Writables and it is possible to expand the default ones to handle other specific cases.
Because this is a Play Controller, we need to provide Actions. We use “Action.async” to keep things asynchronous. This method takes a function from Request to Future of Result. This function should process and handle the incoming requests. In case we need to process a request’s body, we need to pass a deserializer. We are using the “parse.json” “BodyParser” that is the default deserializer for JSON bodies. This instance comes from the ControllerComponents instance that a Play Controller requires. Next, we use our Controller methods to create the results in the created Actions.
Let’s create two Play Controllers, one to handle ResourceA and one to handle ResourceB.
class ResourceControllerA @Inject() (cc: ControllerComponents)
(implicit ec: ExecutionContext) extends ControllerImpl(cc, ResourceA)
class ResourceControllerB @Inject() (cc: ControllerComponents)
(implicit ec: ExecutionContext) extends ControllerImpl(cc, ResourceB)
Finally, we need to connect incoming requests to our Actions. This is done in Play by the Router. This is a class that by default is implemented by code that is generated by Play through its sbt plugin. For this code to be generated, we need to create a “routes” file and place it in the resources folder. This file has its own syntax and looks like this:
GET /resourcea/:id scalameetup.controllers.ResourceControllerA.getAction(id: String)
PUT /resourcea/:id scalameetup.controllers.ResourceControllerA.putAction(id: String)
DELETE /resourcea/:id scalameetup.controllers.ResourceControllerA.deleteAction(id: String)
POST /resourcea scalameetup.controllers.ResourceControllerA.postAction
GET /resourceb/:id scalameetup.controllers.ResourceControllerB.getAction(id: String)
PUT /resourceb/:id scalameetup.controllers.ResourceControllerB.putAction(id: String)
DELETE /resourceb/:id scalameetup.controllers.ResourceControllerB.deleteAction(id: String)
POST /resourceb scalameetup.controllers.ResourceControllerB.postAction
The Router syntax is pretty straight forward. You write the HTTP verb followed by the path and the respective Action that handles the request. Each HTTP path segment starting with “:” can be used as a parameter in the Action method. Some other details exist to handle query parameters or default input.
Contrary to Akka, Play is an opinionated framework; It has a default way of handling each aspect of a web application. For instance, AkkaHTTP is its default server, but others can be used. Guice is the default dependency injection solution, but others can be used. There is a default way of handling JSON or XML documents. It also has its own template engine to respond with pretty web pages, which we didn’t need to solve our current problem. If a big web application with lots of endpoints is required, Play is probably a more focused and better solution than AkkaHTTP.
Http4s
Http4s has a different approach than the two previous tools. It is a tool in the Cats environment, so it follows a more functional approach based on this functional programming library. So what is Functional Programming?
This is not the time and place for a detailed answer to this question, but let’s try a very simple one with bullet points:
- Program with functions
- Functions must be total
- Functions must be deterministic
- Functions must be pure
- Code must have referential transparency
Point 1 seems pretty self-explanatory. Point 2 means for every input, we have an output, not a catastrophic event like an Exception. Point 3 means that for the same input we always get the same output; there is no influence from some other code somewhere or from the position of the planets in the sky. Point 4 means that side effects are forbidden; a function only calculates and returns the output without updating some field somewhere, for instance. If these points 1 to 4 are followed, we have Point 5. Referential transparency means that any piece of code can be replaced with its result without changing the meaning of a program.
The Future is no good
The two tools presented so far use Scala’s Future to achieve asynchronicity. Is the Future any good? Does it follow the rules above? Let’s test referential transparency:
for {
_ <- Future { println(“Is the Future Pure?”) }
_ <- Future { println(“Is the Future Pure?”) }
} yield ()
val future = Future { println(“Is the Future Pure?”) }
for {
_ <- future
_ <- future
} yield ()
How many times do we print in the first for-comprehension? How many times do we do it in the second? I’m sure you know it is twice first and once second. However, we just replaced an expression by its result. There is no referential transparency. The reason is that the Future is eager. As soon as it is created, the code inside starts running in some thread.
When following a more functional approach, as the Cats library does, the Future is no good and something else is needed; we require something that is lazy and composable. Let’s make up a completely original name for it: “Monad.”
After this boring introduction to FP, we will solve our problem once again.
F[_] our Controller
Which Monad should we use? Laziness and procrastination are good qualities to have in FP. We will delay this decision as much as possible. Hopefully, someone else will handle it.
class Http4sController[F[]: Effect](res: Resource) extends Controller[F[]]
with Http4sDsl[F]
override def get(id: Id): F[Resource] =
BetterResourceService[F].read(id)
override def post(resource: JSON): F[Id] =
BetterResourceService[F].create(resource)
override def put(id: Id, resource: JSON): F[Unit] =
BetterResourceService[F].update(id, resource)
override def delete(id: Id): F[Unit] =
BetterResourceService[F].delete(id)
Since we are not picking a Monad for our Controller to return yet, we implement it with a Higher Order Kind. This is a fancy name for a type constructor, something that takes a type and returns a type. For instance, List is a Higher Order Kind. It can receive the type Int and return the type List[Int]. Option is a Higher Order King. It can receive the type String and return the type Option[String]. Future is a Higher Order Kind and so on. F[_] is Scala’s syntax for this. Remember, we have some rules to follow so that everything is nice and functional. To guarantee that rules are followed, we use the :Effect syntax. This makes sure that whatever Monad we pick, it follows the rules in the Cats trait Effect. Now each implemented method returns an F of something. We are no longer using the previous service that would return Futures. This new service also uses a Higher Order Kind, F. Since we made sure that F follows the rules, these return values can be composed nicely.
Finally, the Http4sDsl provides the tools to handle requests, like the HttpRoutes.of method:
val routes: HttpRoutes[F] = HttpRoutes.of[F] {
case GET -> Root / Name / id =>
for {
res <- get(id)
response <- Ok(res)
} yield response
case req @ POST -> Root / Name =>
for {
body <- req.as[JSON]
id <- post(body)
response <- Created(Header(“Location”, s”$Name/$id“))
} yield response
case req @ PUT -> Root / Name / id =>
for { body <- req.as[JSON] _ <- put(id, body)
response <- NoContent()
} yield response
case DELETE -> Root / Name / id =>
for { _ <- delete(id)
response <- NoContent()
} yield response
}
This method takes our F and a partial function in which each case clause matches a request and uses the implemented methods to handle it.
It is interesting to notice that every request is handled by a for-comprehension. We guarantee that our F is composable. This makes it easy to reason and work with. Imagine building a new endpoint. What might we need?
- Authenticate the user
- Authorize the user
- Get resource A
- Get resource B
- Build response with resources
This list of logical steps might easily be replaced by:
for {
user <- authenticate(req)
_ <- authorize(user)
resourceA <- getA(id)
resourceB <- getB(id)
response <- Ok(resourceA + resourceB)
} yield response
This nice one to one match only requires that each method returns an F and that F follows our rules.
F[_] a server
Next, we need a server. Http4s uses Blaze. There are a few builder methods to get a server:
def routes[F[_]: Effect]: HttpRoutes[F] =
Http4sController(ResourceA).routes <+> Http4sController(ResourceB).routes
def server[F[_]: ConcurrentEffect: Timer]: Stream[F, ExitCode] =
BlazeServerBuilder[F]
.bindHttp(8080, “localhost”)
.withHttpApp(routes[F].orNotFound)
.serve
The routes method gets the created routes for a resource of type A, the routes for a resource of type B and concatenates them. The server method builds a server with a given port, host, and the created routes. This method returns a Stream. This is not an Akka stream but an fs2 stream. fs2 is another library in the Cats environment to handle streams. Here we have a stream of our Fs that will eventually complete with an ExitCode, our application’s exit code. Blaze has two additional requirements for our F: ConcurrentEffect and Timer. These are just some additional rules to be followed but we keep procrastinating and we don’t pick a Monad yet.
Pick a Monad
If we were writing a software library, we could leave this last step for the user to handle and that would be done. Here it is not the case, and, instead, we reached the end of the world: the main method. We can’t procrastinate anymore and have to pick a Monad:
object Main extends IOApp {
def run(args: List[String]): IO[ExitCode] = server[IO].compile.drain.as(ExitCode.Success)
}
We are using the IO Monad from CatsEffect. This Monad follows all the rules that we require as it was designed to do so.
Is F[_] worth it?
In this post, I tried to compare three tools in the Scala environment and their approach to solve a simple problem. The first two are based on the Actor model since Play uses AkkaHTTP as the default server, and on Scala’s Future. The third follows a more functional approach and uses Effect monads to keep the code pure. This latter approach has a steeper learning curve. Functional programming introduces quite a few new concepts like monad, functor, traverse, and others that might take some time to fully grasp. If this is the chosen approach, a bright developer team is required. However, FP allows using small composable functions pieced together to get what you want, and you can do it with intuitive for-comprehensions, as in the new endpoint example above. This makes pure functional code safer and faster to work with.
To answer the question, FP is definitely worth it, but a bright developer team should also know that if we need a small service with one endpoint, AkkaHTTP might be a quicker way to get there.
Did you find this blog post useful? What topics would you like to read about? Please leave your feedback in the comments section below and help us bring you the most relevant content. | https://www.growin.com/category/blog/ | CC-MAIN-2020-29 | refinedweb | 3,798 | 57.67 |
Guys right now my Jeep (FJ40) is being restored, however, here are the pics of my Son's Jeep. Its a mini vehicle with 4 wheel drive and and 4 wheel auto brakes (moving the foot of the accelarator results in braking mechanisim to activate electrically. The body is made from high impact plastic supported by a real steel frame :). A set of 4 25-watt motors (one at each wheel) drive the vehicle through a gear reduction mechanism. It has two speeds and seems to do pretty well offroad in high gear with one passenger. Drive time is about 40 minutes. Future upgrade will be to move from 12volts (2 6volt sealed lead acid batteries) to 18volts. This bumps ups power and speed goes from the current 7.5km/hr to 10km/hr. This is pretty good for a little machines as full torque is available at zero rpm as well as top speed
Mashallah cute jeep and cute kids.if in isb ur son could have joined IJC for sure
thanks mate.
My son does go offroading on the green belt :D. The little jeep does pretty well in deep sand and gravel and my son gets a real thrill pushing the jeep to its limits. will post some video clips soon. (H)
MashAllah!Glad to see the interest at this age
Shehreyar taking his little cousin for a off-road drive. Here he is driving with switch turned to HIGH (7.5 km/per hour).
Masha Allah
Future Offroader in action
I used to drive them 5 -6 years ago Is it having a seatbelt?
MASHAALLAH, that is a high ride for your kid! Perfect -
This is pretty good for a little machines as full torque is available at zero rpm as well as top speed
Jhatkay waali ride hojaati hay magar ustaad!! 0 RPM pey TQ available nahin hona chaiye, you have to sacrifice comfort! I am not talking about your sons ride though!
Bro how much did it cost you and from where did u get it? i am looking for a decent electric vehicle for my son as well.
its nice to know that you enjoyed these vehicles in the past :).
In pakistan you usually get the two wheel drive or 6 volt ones which do not have much power, speed or ablility. this even in its stock form had 12volts with 24 amps. it has now been upgraded to 18 volts with 36amps. volts push up speed/power and amps make it last / run longer.
In the US and Europe now they have Power Wheels and Peg Preggio branded jeeps with excellent quality with suspension and 24 volts giving it decent torque and speed.
Yep, it has 2 seat belts...one for each passenger.
What a lucky kid.Mashallah.
My first question would be how old is your kid ? If he is 8year old and above then a petrol version is recommended with parental speed control. these petrol powered vehicle are usually in ATV (all terrain vehicle) 4 wheeled bike format with 35cc to 110cc range. Yes, very small atvs also available in 110cc. These type of vehicles are usually tougher than electric ones...especially if you pick a good quality one. Stay away from cheaply built ATV. At Al Fateh they had a really nice mini ATV with disc brakes, full suspension and most importantly a parental speed control. It was powered by a 70cc petrol engine and auto gears. It was of decent quality and on sale for about Rs. 50,000. My son was only 3years old last year so I get him the electric jeep instead.
I would recommend that you go to Al Fateh and see if any are available. Sometimes they get a good stock of electric and gasoline powered vehicles. Beware of low quality and /or 6volt versions. they will be nothing but a waste of money. cheap 10,000 rupee electric vehicles dont perform or last.
My kids jeep was approximatley Rs.24,000 with 4x4 12volt 24 amp batteries and of normal quality. However it also requires routine maintenance from Rasheed & Naveed Electronics (03004516396) to keep it running. I have now upgraded it to 18volts.
Very informative, thanks a lot Salman. I will definately visit Al-Fateh in near future.
My son will be 4 in April, i think a jeep like that of your son will be a perfect gift for his birthday.
yes indeed your son will enjoy the Jeep. You will however have to spend a day or teaching him how to drive (stear)...and how to use the forward and reverse switches. I made a track for my kid and coached him for 2 to 3 days...after that he drove on his own...and after about 2 weeks he was driving like a pro. His mom was pretty upset at me and him as he broke all the flower / plant pots by knocking them over with his jeep Now he drives the jeep outside in the canal green belt.
MashAllah!! very cute kids (Allah nazar sei bachay) and that is a VERY cool 4x4. I hop emy sons don't see this thread or i'll have to buy them one too
Thank you very much salmanchaudhry My son is 3 years old and thats why I want an electric vehicle for him. I will try Alfatah definitely. Also, the information about the volatge and amperage is very useful. I will keep those things in mind too.
woah!!!its really nice (Y)whats under the hood ??(H)
Indeed I will try to teach Ahmad the basics, but, my opinion about the kids of 21st century is that they, on account of the TV/media exposure, are really very smart. <?xml:namespace prefix = o<o:p></o:p> <o:p></o:p>Just an example, my son and I watched Fast and Furious/Gone in 60 Seconds in the last month or so and you know what, my mother told the other day that Ahmad was trying to show his Dado and Dada how to drift on his tricycle. Off course my mother and Ahmad's mother were not amused but my father and I could only laugh at his enthusiasm.<o:p></o:p> <o:p></o:p>Boys are boys and will do this kind of stuff anyway, I used to do it on roads during my teens, risking not only my life and limb but that of others around me, however, with the parental guidance, children can be taught to consider safety as paramount and be taught what their elders learnt the hard way.<o:p></o:p> <o:p></o:p>So, I totally agree with your decision of buying your son his first jeep and teach him how to drive it. Naturally he will discuss his adventures with you in life and you can help him in taking his safety and that of others into account.<o:p></o:p>
I totally agree with Saidhi | https://www.pakwheels.com/forums/t/shehreyar-age-4-with-little-4x4/119581 | CC-MAIN-2017-30 | refinedweb | 1,170 | 82.04 |
AWS Developer Blog
Automatically deploy a Serverless REST API from GitHub with AWS Chalice
AWS Chalice lets you quickly create serverless applications in Python. When you first start using Chalice, you can use the
chalice deploy command to deploy your application to AWS without any additional setup or configuration needed other than AWS credentials. As your application grows and you add additional team members to your project, you’ll want a system that can automatically deploy your application instead of remembering to run
chalice deploy every time you make a change to your app. One way to do this is use AWS CodeBuild and AWS CodePipeline to build and deploy your application whenever changes are pushed to a Git repository.
You can set this process up by using the AWS Console or by creating an AWS CloudFormation template for your deployment pipeline, but Chalice includes functionality to automatically generate a deployment pipeline template for you. In this post, we’ll show you how to set this up.
Walkthrough
To demonstrate this, we’ll create a Chalice application and configure it to automatically deploy whenever we push our changes to GitHub. To follow along you’ll need:
- Python 3.7 or higher
- The AWS CLI installed and configured
- A GitHub account
First we’ll need to create a new development environment and create a new Chalice application.
$ python3 -m venv /tmp/venv37
$ . /tmp/venv37/bin/activate
$ pip install chalice
$ chalice new-project testpipeline
$ cd testpipeline
Next, we’ll configure a local Git repository for our app.
$ git init .
$ git add -A .
$ git commit -m "Initial commit"
[master (root-commit) c35d315] Initial commit
4 files changed, 40 insertions(+)
create mode 100644 .chalice/config.json
create mode 100644 .gitignore
create mode 100644 app.py
create mode 100644 requirements.txt
Now that we’ve done that, we’ll need to create a GitHub repository for our application. You can go to the Create a New Repository page on GitHub and follow the steps to create a new repository. This can be either a public or private repository. Once you’ve created your repository, you’ll need to add GitHub as a remote for your application.
$ git remote add origin git@github.com:YOUR-NAME/YOUR-PROJECT.git
$ git push origin master
Now that we’ve configured GitHub we can configure our deployment pipeline. Instead of writing this pipeline definition by hand, Chalice includes a
generate-pipeline command that can generate a starter template for you. We’ll use this command to create our initial template.
$ mkdir infrastructure
$ chalice generate-pipeline \
--source github \
--buildspec-file buildspec.yml \
--pipeline-version v2 \
infrastructure/pipeline.json
In the command above there’s several arguments we’re providing. The
--source github argument is used to configure our pipeline to deploy from a GitHub repository instead of an AWS CodeCommit repository. The
--buildspec-file argument is used to specify that we want a buildspec file generated instead of being included inline with our CloudFormation template, and the
--pipeline-version specifies that we want to generate a v2 template instead of the default v1 template. The v2 template includes several improvements over the v1 template including using the latest CodeBuild images and the latest Buildspec specification. See the Chalice documentation for more details.
This command will generate two files, a
buildspec.yml file and an
infrastructure/pipeline.json file. We’ll add these files to our Git repository.
$ git add buildspec.yml infrastructure/pipeline.json
$ git commit -m "Add deployment pipeline template"
We’re almost ready to deploy our pipeline. In order for AWS CodePipeline to retrieve changes from our GitHub repository, we must provide an access token that our pipeline can use. To do this, we’ll create a personal access token on GitHub and store this value in AWS Secrets Manager. Our CloudFormation template will then reference this secret so that we don’t hardcode our access token in our template. You can follow the GitHub documentation on how to generate a personal access token. The AWS Secrets Manager documentation has a tutorial on how to store and retrieve secrets. You can also use the AWS CLI to create a new secret. Create a file named
/tmp/secrets.json with the following contents:
{"OAuthToken": "abcdefghhijklmnop"}
Be sure to replace the value with your own personal access token. Next run this command to create a new secret.
$ aws secretsmanager create-secret --name GithubRepoAccess \
--description "Token for Github Repo Access" \
--secret-string
We can now deploy our pipeline. We’ll use the AWS CLI to deploy our CloudFormation template.
$ aws cloudformation deploy --template-file infrastructure/pipeline.json \
--stack-name MyChaliceApp --parameter-overrides \
GithubOwner=repo-owner-name \
GithubRepoName=repo-name \
--capabilities CAPABILITY_IAM
Be sure to replace the
GithubOwner and
GithubRepoName with your own values.
Once our stack is created, we can go to our the CodePipeline page in the AWS Console and we’ll see that our pipeline was created.
Our application will take a few minutes to deploy. Once the last stage in our pipeline was finished running we can now test our application.
To test our application, we’ll need to retrieve the
EndpointURL associated with our application. You can go to the CloudFormation console page and lookup the value of
EndpointURL in the stack outputs tabs.
Now we can make a GET request to our endpoint URL we’ll see our hello world response:
$ curl -w '\n'
{"hello":"world"}
At this point, we have our deployment pipeline set up. Any changes to our GitHub repository will automatically be deployed. To test this, make a change to your
app.py file.
from chalice import Chalice app = Chalice(app_name='testpipeline') @app.route('/') def index(): return {'hello': 'world, these are new changes!'}
We’ll now commit and push our changes.
$ git add app.py
$ git commit -m "Change hello world message"
[master 68cfc91] Change hello world message
1 file changed, 1 insertion(+), 3 deletions(-)
$ git push origin master
After a few minutes, we can see that application has been deployed.
$ curl -w '\n'
{"hello":"world, these are new changes!"}
Next Steps
Now that we have our deployment pipeline in place, there’s a few things we can do from here.
- Continue building out our application by modifying our
app.pyfile.
- Change the
buildspec.ymlfile to modify how our application is built. We can add additional commands to run unit tests, run linters, type checkers, etc before we deploy our application.
- Modify our
infrastructure/pipeline.jsonfile to add new stages to our deployment pipeline. You’ll need to rerun the
aws cloudformation deploycommand if you make changes to your deployment pipeline.
If you’d like to deep dive on deploying your application with Chalice, our documentation goes into more detail and has additional options for you to consider. Let us know what you think! You can give feedback on our GitHub page. | https://aws.amazon.com/blogs/developer/automatically-deploy-a-serverless-rest-api-from-github-with-aws-chalice/ | CC-MAIN-2021-10 | refinedweb | 1,139 | 56.96 |
/* Definitions of dependency data structures for GNU Make. Copyright (C) 1988, 1989, 1991, 1992, 1993, 1996. */ /* Flag bits for the second argument to `read_makefile'. These flags are saved in the `changed' field of each `struct dep' in the chain returned by `read_all_makefiles'. */ #define RM_NO_DEFAULT_GOAL (1 << 0) /* Do not set default goal. */ #define RM_INCLUDED (1 << 1) /* Search makefile search path. */ #define RM_DONTCARE (1 << 2) /* No error if it doesn't exist. */ #define RM_NO_TILDE (1 << 3) /* Don't expand ~ in file name. */ #define RM_NOFLAG 0 /* Structure representing one dependency of a file. Each struct file's `deps' points to a chain of these, chained through the `next'. Note that the first two words of this match a struct nameseq. */ struct dep { struct dep *next; char *name; struct file *file; unsigned int changed : 8; unsigned int ignore_mtime : 1; }; /* Structure used in chains of names, for parsing and globbing. */ struct nameseq { struct nameseq *next; char *name; }; extern struct nameseq *multi_glob PARAMS ((struct nameseq *chain, unsigned int size)); #ifdef VMS extern struct nameseq *parse_file_seq (); #else extern struct nameseq *parse_file_seq PARAMS ((char **stringp, int stopchar, unsigned int size, int strip)); #endif extern char *tilde_expand PARAMS ((char *name)); #ifndef NO_ARCHIVES extern struct nameseq *ar_glob PARAMS ((char *arname, char *member_pattern, unsigned int size)); #endif #ifndef iAPX286 #define dep_name(d) ((d)->name == 0 ? (d)->file->name : (d)->name) #else /* Buggy compiler can't hack this. */ extern char *dep_name (); #endif extern struct dep *copy_dep_chain PARAMS ((struct dep *d)); extern struct dep *read_all_makefiles PARAMS ((char **makefiles)); extern int eval_buffer PARAMS ((char *buffer)); extern int update_goal_chain PARAMS ((struct dep *goals, int makefiles)); extern void uniquize_deps PARAMS ((struct dep *)); | http://opensource.apple.com//source/gnumake/gnumake-110/make/dep.h | CC-MAIN-2016-36 | refinedweb | 266 | 71.04 |
Opened 13 years ago
Closed 12 years ago
#3285 enhancement closed fixed (fixed)
Add ISUPPORT implementation for irc.py
Description
The method irc_RPL_BOUNCE (line 1182) detects the ISUPPORT message and sends it to the API function named isupport() (line 657) as it is. There should be instead a irc_RPL_ISUPPORT method that parses the string before sending it to isupport. Doing this will break the backward compatibility because the isupport() method is supposed to receive a string. The parsing could be done in the isupport() method too, but is not the right place and will led to another problem if the user overrides isupport() without calling the original isupport().
The ISUPPORT messagge is like:
:servername 005 mynick :are available on this server
The ISUPPORT message defines a series of useful informations (like PREFIX and CHANMODES) that should be saved (at class or module level) and used in other functions. For example, the constant named CHANNEL_PREFIX is defined in irc.py (line 61) with the value '&#!+'. This constant is never used, but the value is hardcoded in other functions (in join, leave, kick, lines 825, 832, 839; in topic, say, me, line 855, 877, 941) always in the form "if channel[0] in '&#!+': channel = '#' + channel". This value is defined in the ISUPPORT message with the name CHANTYPES and could vary between servers so it should be saved and used (and also a function that adds the '#' before the channel should be done).
If the parsing is done in the isupport() method and the user overrides it, all these variables won't be set and although default variables can be used this could create problems if they differ from the ones sent by the server.
Another possible solution is to create the irc_RPL_ISUPPORT method that parses and save the informations (at class/module level) and change irc_RPL_BOUNCE to send the string both to this and to the isupport() method, so the informations will be saved and there won't be compatibility problems (unless the user already created an irc_RPL_ISUPPORT method). If the user wants to access this data could do it using directly the variables set at class/module level (isupport() will then be useless and we keep it just for compatibility).
If we implement this there will be to change several functions in order to use these values instead of hardcoded ones. The NAMES method (see ticket #3275) should also use the PREFIX values while parsing the list of users (that looks like @nick1 nick2 +nick3) in order to separate the prefixes from the nicks.
Attachments (12)
Change History (77)
comment:1 Changed 13 years ago by
comment:2 Changed 13 years ago by
comment:3 Changed 13 years ago by
comment:4 Changed 13 years ago by
comment:5 Changed 13 years ago by
For what it's worth, UnrealIRCD appends its ISUPPORT lines with
are supported by this server and not
are available on this server as Twisted currently assumes.
comment:6 Changed 13 years ago by
I'm working on it, as suggested by glyph in #3286 I'll do a instance attribute for IRCClient. A message like
will be parsed to:
self.ISUPPORT = { 'FLAGS': set(['NOQUIT', 'SAFELIST']), 'WATCH': 128, 'MODES': 6, 'PREFIX': '(ohv)@%+', 'CHANMODES': 'b,k,l,cdijmMnOprRsStuU', ...
Some values, like PREFIX and CHANMODES will be parsed and the result will be saved in the same dict (e.g. PREFIX will result in two new keys: USERPREFIX and USERMODES - see #3286, comment 5), I'll do specific functions for that.
comment:7 follow-up: 8 Changed 13 years ago by
I don't think there should be an distinction between FLAGS and the other informations. It it would be non-nested one could use
if 'MODES' in self.ISUPPORT instead of
if 'MODES' in self.ISUPPORT or 'MODES' in self.ISUPPORT['FLAGS'] which seems smarter. Remember the Zen of Python "Flat is better than nested". This would be useful, if there would be any name clashes of flags with keys, but I don't think there are any.
I'd also suggest something that is not
ALL_UPPERCASE.
ISUPPORT is probably not the smartest name, either. I'd love to see
RPL_MYINFO (004) and
RPL_ISUPPORT (005) merged into one dictionary. The distinction between these is as far as I see only historical - no client author would nowadays care whether the informations are from
MYINFO or
ISUPPORT.
comment:8 Changed 13 years ago by
I don't think there should be an distinction between FLAGS and the other informations.
I don't think is a good thing having a key/value pair if we don't actually have any value and we only check if the key exists doing
if mode in self.ISUPPORT, even if it's shorter to type.
... if there would be any name clashes of flags with keys, but I don't think there are any.
A lower-case 'flags' could solve this problem.
I'd also suggest something that is not
ALL_UPPERCASE.
I used it for ISUPPORT because it should be a constant (once set), for the modes I just leave them as they are (and they are supposed to be constants too).
ISUPPORTis probably not the smartest name, either. I'd love to see
RPL_MYINFO(004) and
RPL_ISUPPORT(005) merged into one dictionary.
I agree but I don't know a better name, if you have something better that can describe the type of informations represented by 004 and 005 without being too long let me know.
comment:9 follow-up: 10 Changed 13 years ago by
I attached a patch so you can see if what I've done is ok (tests are not included here).
I did what I proposed the last time, so the dict is still named ISUPPORT (if we find a better name a I'll find&replace it) and the values (except 'flags') are all UPPERCASE. I haven't implemented MYINFO here.
I added a default ISUPPORT dict with some values for 2 reasons:
- If I create the dict when I receive the ISUPPORT message, and the server sends more ISUPPORT messages, only the last will be saved (even if I could check if
hasattr(self, 'ISUPPORT')but I think is fine as it is).
- The methods can read or add keys from/to the dict without having to check if the dict or the keys exist. instead of
self.ISUPPORT['CHANMODES']) if they are widely used by the functions (also things like the network name have no reason to be in the dict).
Is probably better to use
log.msg() instead of
raise Exception because is not fault of the user if the server sends a wrong message (that should be just ignored).
Some parameters like PREFIX, CHANMODES, CHANLIMIT, MAXLIST, LANGUAGE (and possibly others) need further parsing. I only wrote methods to parse PREFIX and CHANMODES, the other methods could be easily added later, if ever someone will need them.
comment:10 follow-up: 11 Changed 13 years ago by
I did what I proposed the last time, so the dict is still named ISUPPORT (if we find a better name a I'll find&replace it) and the values (except 'flags') are all UPPERCASE. I haven't implemented MYINFO here.
How about
server_capab? It is not too long, while it does not imply that it is only
ISUPPORT. I'm still sceptical about splitting the ISUPPORT into flags and non-flags. If they are in one dict, I can just query one dict whether or not some flag/param is set. Splitting it in two result in authors needing to think whether to query the
server_capab dict or the set in
server_capab['flags'] for no reason. I don't see where a dict key with
FLAG : True would be bad.
True implies that it is set. There are even more enhancements possible like adding all possible flags into
server_capab and setting them to
False per default and only setting these which are supported by the server to
True. Not possible with a set, in that case the flags would need to move to the
server_capab dict anyway. So the split is something artificial.instead of
self.ISUPPORT['CHANMODES']) if they are widely used by the functions (also things like the network name have no reason to be in the dict).
I wouldn't like the namespace of the class be cluttered with attributes that might or might not be set, especially when they are ALL UPPERCASE.
comment:11 follow-up: 12 Changed 13 years ago by
How about
server_capab? It is not too long, while it does not imply that it is only
ISUPPORT.
This is surely better than ISUPPORT. I'll also remove the 'flags' key and I'll add the flags in the dict as normal keys, with True as value. (after all, "Special cases aren't special enough to break the rules." is followed by "Although practicality beats purity.")
I wouldn't like the namespace of the class be cluttered with attributes that might or might not be set, especially when they are ALL UPPERCASE.
I think that userprefix, usermodes and channelmodes (lowercase) can be set as instance attributes for two reasons:
- They are not really part of the original ISUPPORT message
- They are widely used by several functions, and it's better having them as handy attributes
(also the original version of irc.py has them as module-level constants, so a couple of instance-level attributes shouldn't be a problem)
comment:12 follow-up: 13 Changed 13 years ago by
I'm not really sure about whether we need an
iSupport method at all, when we save the flags into an instance attribute. But if you really want, you could parse the ISUPPORT message in irc_RPL_ISUPPORT, add the flags to
server_capab and call
self.iSupport(the_parsed_isupport). I don't think
iSupport should ever get the
MYINFO data, otherwise it would be quite pointless to have the
server_capab attribute at all (and I think the attribute is a good idea).
comment:13 Changed 13 years ago by
I'm not really sure about whether we need an
iSupportmethod at all, when we save the flags into an instance attribute.
The user may want to know that we did it.
I'll do
self.server_capab = {} in
connectionMade, and something like
the_parsed_isupport = parseISupport() self.iSupport(the_parsed_isupport) self.server_capab.update(the_parsed_isupport)
in irc_RPL_ISUPPORT, so the user will receive only the data from ISUPPORT (for the MYINFO message we can do something similar, saving all the data in server_capab and sending to myInfo only the parsed data).
comment:14 Changed 13 years ago by
The
isupport method already exists and it was called passing a list of string, passing a dict will break the backward compatibility. A solution could be deprecate
isupport and replace it with
iSupport.
comment:15 Changed 13 years ago by
The isupport method will be deprecated and the new method will be called
serverSupports. I'll use twisted.python.deprecate.deprecated in the
__init__ to mark isupport as deprecated (this will allow to mark the function even if the user has overridden it) and TestCase.callDeprecated instead of assertWarns in the tests.
I won't add here the channelmodes instance attribute (usermodes and userprefixes will be added - see comment 11) because it's not needed anymore in #3296.
comment:16 Changed 13 years ago by
decorating an instance method might have bad side-effects. It would probably be less surprising to just emit a deprecation warning when you're going to call it when you notice it is overridden. The problem with this is that there's no easily-testable way to emit a deprecation warning without using
twisted.python.deprecate.deprecated. We should really address that point though, rather than come up with somewhat gross work-arounds.
comment:17 Changed 13 years ago by
I added the new patch, it now has the tests too and everything works fine. I had some problems whit test_deprecatedISupportMethod and I end up with using assertWarns. As you can see from the commented code there I tried several (hackish) things with no results.
If the patch is ok I'll remove the commented code and submit it for review.
comment:18 Changed 13 years ago by
comment:19 Changed 13 years ago by
comment:20 Changed 13 years ago by
comment:21 Changed 13 years ago by
comment:22 follow-up: 23 Changed 13 years ago by
irc_RPL_ISUPPORTshould dispatch to methods named
prefix_SUFFIX- eg, instead of
_parseISupportArg, dispatch to
_isupport_ARG
server_capabshould be renamed something like
serverCapabilities. Abbreviations should be limited to widely used abbreviations or cases where the shorter name is significantly more readable than the full version; the _ should be omitted in any case, since that's not how attributes are named according to the coding standard.
userprefixand
usermodesshould be documented in the class docstring, not the
connectionMademethod docstring
The deprecation stuff seems right to me now, but it'll be more clear when the commented out dead code is removed. Please submit a patch against the branch to do that, as well as to fix the above issues. Thanks!
comment:23 Changed 13 years ago by
irc_RPL_ISUPPORTshould dispatch to methods named
prefix_SUFFIX- eg, instead of
_parseISupportArg, dispatch to
_isupport_ARG
I left _parseISupportArg as it is and renamed _parseISupportPrefix to parseISupport_PREFIX. irc_RPL_ISUPPORT will dispatch the parameters that require further parsing to methods named parseISupport_PARAMNAMEHERE.
server_capabshould be renamed something like
serverCapabilities.
Done
userprefixand
usermodesshould be documented in the class docstring, not the
connectionMademethod docstring
Done, I also added there the doc for serverCapabilities and removed it from the methods' docstrings
I also removed the commented code, replaced all the
assertEquals() with
assertEqual() in test_irc.py, fixed some docstrings and added a couple of comments.
comment:24 Changed 13 years ago by
ticket3285.3.patch doesn't apply cleanly to the branch. What did you generate it from?
comment:25 Changed 13 years ago by
This one should work.
comment:26 Changed 13 years ago by
comment:27 Changed 13 years ago by
Thanks. A bunch of fairly minor things:
_checkServerSupportsdocstring misspells
serverSupports
_checkPrefixAndServerCapabdocstring misspells
variable
test_serverSupportsWithSingleMessageand
test_serverSupportsWithMultipleMessagesshould explain what correct result means. In general, using correct in a test method docstring is not the best idea: explain what correct means instead.
- The
_delISupportFromClientin
test_deprecatedISupportMethodis probably superfluous - what difference will it make if that doesn't happen?
- The change to
testMultipleLinemeans the two lines following it need to be re-indented.
- @ivar and other epytext markup should have the second and following lines of multiline text indented 4 spaces - the three new @ivars on
IRCClientare missing this
- The deprecation
irc_RPL_BOUNCEemits is good, but it would be best if the message said exactly where the deprecated code was - for example, in this case, it'd be great if it gave the name of the class which has the
isupportattribute.
irc_RPL_ISUPPORTuses
hasattr- use
getattrwith a default instead:
hasattrcan swallow unexpected exceptions.
irc_RPL_BOUNCEalso uses
hasattr, and the
self.isupport.im_func is not IRCClient.isupport.im_funcpart of the check for calling
isupportis untested
-
_parseISupportArgcalls
rstrip('='), but it doesn't need to, if there's an
=, there won't be a
ValueErrorraised. Also, that exception handler is untested.
- All of the log messages/early returns in
parseISupport_PREFIXare untested and there's a
NameErrorin one of them.
- The
^at the beginning of the pattern in
parseISupport_PREFIXis unneeded -
re.matchimplies that.
comment:28 Changed 13 years ago by
comment:29 Changed 12 years ago by
Most ISUPPORT parameters require special parsing, some also have default values. I've attached a patch that introduces an ISUPPORT parser object which follows the guidelines laid out in. The patch was constructed against trunk.
According to that same document, RPL_BOUNCE is so scarcely implemented that they've changed it to numeric
010 and left RPL_ISUPPORT as
005.
comment:30 Changed 12 years ago by
comment:31 Changed 12 years ago by
comment:32 Changed 12 years ago by
I realise there may be a number of changes that were present in previous patches that do not appear in mine; my feeling is that the majority of these changes don't seem like a good way to move forward: raising exceptions during parsing ISUPPORT parameters, introducing
IRCClient.serverSupports (in my opinion, this function (along with the original
IRCClient.isupport function) add little to no value.)
I'm generally always available on IRC (as
k4y) to discuss any of my comments or patches.
comment:33 Changed 12 years ago by
IRC's terribleness boggles my mind. Thank you for attempting this. I will attempt to write a review, but I feel unqualified.
- The naming of
ISupportis unfortunate. It looks like an interface. I'd suggest naming it something a bit more verbose, like
ServerSupportedFeatures.
- I don't like the dict-like-object design. If an object has a set of features which each have distinct semantics, then it seems to me that there should be a set of methods or attributes with documentation that explain their function. This is useful when reasoning about code, and also useful for automated tools; there's a hope that e.g. Eclipse can figure out what type
someServerSupportedFeaturesInstance.channelModesis, but it's pretty close to zero that it will figure out what
maybeThisIsADictIDunno["CHANMODES"]is.
CommandDispatcherMixinhas a problem with inheritance; you can't mix it in twice for dispatching different prefixes. That's fine as a limitation, but (A) perhaps it shouldn't be public, and (B) the limitation should be documented.
- I don't see that
_dispatchadds any value; why have two methods where one will suffice?
_intOrDefaultneeds a docstring.
- There's an XXX in
isupport_CHANMODES. I don't think I understand the protocol involved here well enough to recommendation one way or another, but implement something rather than leave an XXX there :).
IRCClient.isupportedneeds an
@ivar. Also consider a name parallel with
ISupport's new class name (
serverSupportedFeatures?)
test_irc.Dispatcherand its methods need docstrings.
As far as I can tell, this is otherwise in good shape...
comment:34 Changed 12 years ago by
- I've removed the dict-like-object design and implemented
getFeatureand
hasFeatureinstead, attributes are tricky when you start changing
CHANMODESto
channelModes. I think it's better to let everyone address the features using their original names.
- 1. I don't have any use case right now to back this up, so I've reduced it to a single function.
I've addressed all the other points too.
comment:35 Changed 12 years ago by
comment:36 Changed 12 years ago by
- Some of
_intOrDefaultis not covered by any tests
- The first check in
ServerSupportedFeatures._parsePrefixParamis not covered by any tests
- The last line of
ServerSupportedFeatures._parsePrefixParamseems overly complex. How about
dict(zip(modes, symbols))?
ServerSupportedFeatures.isupport_CHANMODESis entirely untested
ServerSupportedFeatures.isupport_IDCHANis entirely untested
ServerSupportedFeatures.isupport_MAXLISTis entirely untested
ServerSupportedFeatures.isupport_NETWORKis entirely untested
ServerSupportedFeatures.isupport_SAFELISTis entirely untested
ServerSupportedFeatures.isupport_STATUSMSGis entirely untested
ServerSupportedFeatures.isupport_TARGMAXis entirely untested
hasFeatureisn't really any different from
__contains__(or whatever the dict-like API glyph complained about was). A good high-level API would indeed things like
channelModesand such. However, a good low-level API doesn't need to do this. So perhaps this is only the low-level API. I don't think this point needs to block this ticket.
comment:37 Changed 12 years ago by
I've addressed points 1 to 10.
Regarding point 11: My only concern about introducing attributes like
channelModes is that there's no real way to figure out that what is specified as
CHANMODES in the RFC is now named
channelModes (this particular example is relatively obvious, but there are some more esoteric names in the RFC), apart from trawling through the documentation or source.
Things get trickier when you consider that every IRC server implementation introduces their own ISUPPORT parameters, so maybe
SPAMEGGS is supported but there's no
spamEggs attribute, so now do you have to fish around in a dictionary of unknowns? What happens when Twisted does support
SPAMEGGS? Does the
SPAMEGGS value in the dictionary of unknowns go away suddenly?
ISUPPORT is low-level, anything higher level should almost certainly be handled (or exposed?) by
IRCClient.
comment:38 Changed 12 years ago by
comment:39 Changed 12 years ago by
comment:40 Changed 12 years ago by
comment:41 Changed 12 years ago by
- The test docstrings are written in a somewhat strange style. They're generally missing a subject and in a funny tense. Preferred style is along the lines of foo does bar.
assertEqualsis preferred over
assertEqualnow.
- Generally we don't vertically align things (as the patch does in, eg,
test_support_CHANMODES)
- IDCHANs seems to be a list, conceptually, not a tuple. Also, the test for this has a docstring which doesn't really go into sufficient detail. In general, if you're describing something as "correct" in a test docstring, you're not going into enough detail.
test_support_MAXLISTdocstring talks about a mapping, but the test does things with tuples.
test_support_NETWORKdocstring isn't detailed enough. Ditto for
test_support_SAFELIST,
test_support_STATUSMSG.
isupport_NICKLENis implemented with an
_intOrDefaultcall which passes in a default value, but no test exercises this default value feature.
comment:42 Changed 12 years ago by
Changed 12 years ago by
Created against isupport-3285-2 branch
comment:43 Changed 12 years ago by
- I think that what you said about IDCHANS makes sense for just about every use of
_splitParameterArgs, so I changed it return a list and modified the relevant code.
- Some
isupport_methods were reported not to be covered by tests, so I added tests for these (even though they're pretty trivial) and
isupport_NICKLENfalls into this category.
I improved the docstrings for the methods in points 5 and 6.
As I mentioned on IRC, the "PREFIX" support parameter's arguments appear in a significant order (which the
dict implementation loses.) I followed your suggestion to introduce a priority value and updated the tests to both know about this value and to make sure it is correct.
comment:44 Changed 12 years ago by
comment:45 Changed 12 years ago by
- As discussed on IRC, let's make
isupportedget called after the
supportedthingy is made consistent with the message being dispatched.
- I don't think there is test coverage for the default features/feature values
- I don't think there is test coverage for the
or 'e'in
isupport_EXCEPTS
Changed 12 years ago by
Created against isupport-parser-4.diff applied to isupport-3285-2 branch.
comment:46 Changed 12 years ago by
I addressed all of your queries. Implementing some of these tests exposed some bugs, which I (hopefully) patched.
comment:47 Changed 12 years ago by
comment:48 Changed 12 years ago by
Some lingering questions (some of which we might have discussed on IRC a while ago), and other stuff:
- Does
ServerSupportedFeaturesneed to be part of the public API (specifically, the ability for application code to instantiate them, subclass it, etc)?
- Does the
parsemethod need to allow multiple ISUPPORT strings to be parsed into the same
ServerSupportedFeaturesinstance?
isupport_INVEXis incompletely tested (its single line is only executed once by the whole test suite, and there are at least two paths through it...)
comment:49 Changed 12 years ago by
- It probably doesn't, but as you mentioned on IRC, making it private does hamper documentation somewhat. I think maybe leaving it public might be the better option.
- As discussed, I've added some comments / documentation indicating why we need to be able to call parse multiple times to mutate
ServerSupportedFeatures.
- Added a test for this.
comment:50 Changed 12 years ago by
(For clarity, I committed to the branch, instead of attaching another patch.)
comment:51 follow-up: 60 Changed 12 years ago by
Noticed one more thing. Previously,
IRCClient had no
__init__. This means that a subclass which overrode
__init__ could not invoke the base implementation. However, with this change, subclasses must invoke the base implementation, or they'll get an
AttributeError raised as soon as ISUPPORT is received. This would be an unfortunate incompatibility. Please add a test for this scenario and make it pass (should be easy -
connectionMade is a good place for protocol initialization). Perhaps it would be good for
NoticingClient to go back to something like its previous behavior, with a note about how it's important that it doesn't call
IRCClient.__init__ for test coverage purposes.
Hopefully this is the last bit of feedback I have.
comment:52 so in parse after
self._features[key] = self.dispatch(key, value)
i put
if self._features[key] is None:
self._features2[key] = value
i'm not sure if that's right, if value there is what i want, and i know _features2 isn't a good name, but just to show you what my idea is. since the features dicts are internal (they have an underscore) i guess i would ideally also provide a separate getFeature function for _features2 so the client would do
if self.getFeature('TARGMAX') is None:
#do something with getFeature2('TARGMAX')
i figured that's more elegant than just doing
if self._features[key] is None:
self._features[key] = value
because then the irc client doesnt have to do
if not isinstance(getFeature('TARGMAX'), int):
...
anyway just an idea, there could be a better way
comment:53 Changed 12 years ago by
[when i say 'i change' i mean in my own personal copy of irc.py. i don't do commits and such.]
comment:54 Changed 12 years ago by
What does dalnet supply for TARGMAX? And why is it not a protocol error for it to supply something that's not an integer?
comment:55 Changed 12 years ago by
I'm not sure it's all of dalnet. I think it was just some particular server. While I'm not an IRC expert, my impression is that a lot of "standards" in the irc protocol are only standards by convention and IRC servers are highly prone to individually expand and defy standards and conventions.
Especially with isupport, i would think, where the entire list of isupport keys falls under one IRC command which was added later, and some key values are expected to be ints, some strings, and some, perhaps, either ints or strings, and the keys you might receive are arbitrary enough that irc.py's isupport was programmed to process and remember any key/value pair the server sends whether the key is recognized as standard or not.
so basically i wouldnt consider it a 'protocol error' in such a way that i'd justify throwing an exception over it. it seems the module should be more flexible than that. also, as long as you have _intOrDefault, it seems to just make sense to at least use it instead of int, it even makes me wonder if not doing so was an oversight.
comment:56 Changed 12 years ago by
here's something else i just added to my own personal copy of irc.py. an irclower() function. feel free to take my code and convert it to reasonable programming practices.
[somewhere near the top..]
irclowertranslations = {
"ascii": string.maketrans(string.uppercase, string.lowercase), "rfc1549": string.maketrans(string.uppercase + "\x7B\x7C\x7D\x7E",
string.lowercase + "\x5B\x5C\x5E\x5F"),
"strict-rfc1549": string.maketrans(string.uppercase + "\x7B\x7C\x7D",
string.lowercase + "\x5B\x5C\x5E")
}
[somewhere in the IRCClient class..]
def irclower(self, text):
try:
trans = irclowertranslations[self.supported.getFeature("CASEMAPPING")]
except:
trans = irclowertranslationsrcf1549?
return text.translate(trans)
information gleaned from
and
the code is untested, so if it doesn't work hopefully the basic idea is clear enough..:)
comment:57 Changed 12 years ago by
comment:58 Changed 12 years ago by
comment:59.
This is probably a good idea.
This is a potential problem, however it is also an extreme edge case. The chances of the value of TARGMAX not being an integer but still being useful must be pretty close to zero. If this is a real and pressing issue, I would suggest filing another ticket for introducing a new API to look up values that could not be parsed.
here's something else i just added to my own personal copy of irc.py. an irclower() function.
I don't think that the old RFC1459 casemapping is an issue worth considering. Even if it is, it probably belongs in a new ticket.
comment:60 Changed 12 years ago by
Noticed one more thing. Previously,
IRCClienthad no
__init__. This means that a subclass which overrode
__init__could not invoke the base implementation. However, with this change, subclasses must invoke the base implementation, or they'll get an
AttributeErrorraised as soon as ISUPPORT is received. This would be an unfortunate incompatibility. Please add a test for this scenario and make it pass (should be easy -
connectionMadeis a good place for protocol initialization). Perhaps it would be good for
NoticingClientto go back to something like its previous behavior, with a note about how it's important that it doesn't call
IRCClient.__init__for test coverage purposes.
Yes, this is a good point. One that would probably have broken every existing use of IRCClient. Nice catch.
comment:61 Changed 12 years ago by
There's a conflict in the
IRCClient class docstring. It looks like a simple one to resolve - keep both pieces - so I'll do the review anyway. Please merge forward and resolve the conflict before merging to trunk. Also, you'll have to merge forward to get a clean test run on buildbot anyway, as this old branch includes unrelated failing tests.
Aside from that, I think you just need to add tests for the
_intOrDefault uses in
isupport_MAXLIST and
isupport_TARGMAX.
#3286 also needs the PREFIX value to parse the users. | https://twistedmatrix.com/trac/ticket/3285 | CC-MAIN-2021-39 | refinedweb | 4,945 | 62.98 |
I.
We will need to choose something to bring to the front. I like to use Notepad for testing as I know it will be on every Windows desktop in existence. Open up Notepad and then put some other application's window in front of it.
Now we're ready to look at some code:
import win32gui def windowEnumerationHandler(hwnd, top_windows): top_windows.append((hwnd, win32gui.GetWindowText(hwnd))) if __name__ == "__main__": results = [] top_windows = [] win32gui.EnumWindows(windowEnumerationHandler, top_windows) for i in top_windows: if "notepad" in i[1].lower(): print i win32gui.ShowWindow(i[0],5) win32gui.SetForegroundWindow(i[0]) break
We only need PyWin32's win32gui module for this little script. We write a little function that takes a window handle and a Python list. Then we call win32gui's EnumWindows method, which takes a callback and an extra argument that is a Python object. According to the documentation, the EnumWindows method "Enumerates all top-level windows on the screen by passing the handle to each window, in turn, to an application-defined callback function". So we pass it our method and it enumerates the windows, passing a handle of each window plus our Python list to our function. It works kind of like a messed up decorator.
Once that's done, your top_windows list will be full of lots of items, most of which you didn't even know were running. You can print that our and inspect your results if you like. It's really quite intereting. But for our purposes, we will skip that and just loop over the list, looking for the word "Notepad". Once we find it, we use win32gui's ShowWindow and SetForegroundWindow methods to bring the application to the foreground.
Note that really need to look for a unique string so that you bring up the right window. What would happen if you had multiple Notepad instance running with different files open? With the current code, you would bring the first Notepad instance that it found forward, which might not be what you want.
You may be wondering why anyone would even want to go to the trouble of doing this in the first place. In my case, I once had a project where I had to bring a certain window to the foreground and enter automate it using SendKeys. It was an ugly piece of brittle code that I wouldn't wish on anyone. Fortunately, there are better tools for that sort of thing nowadays such as pywinauto, but you still might find this code helpful in something esoteric that is thrown your way. Have fun!
Note: This code was tested using Python 2.7.8 and PyWin32 219 on Windows 7. | https://www.blog.pythonlibrary.org/2014/10/20/pywin32-how-to-bring-a-window-to-front/ | CC-MAIN-2022-27 | refinedweb | 448 | 72.66 |
#include <xdgforeign.h>
Detailed Description
Wrapper for the zxdg_exporter_v2 interface.
This class provides a convenient wrapper for the zxdg_exp_exporter_v2 pointer as it provides matching cast operators.
Definition at line 53 of file xdgforeign.h.
Member Function Documentation
Destroys the data held by this .
This method is supposed to be used when the connection to the Wayland server goes away. If the connection is not valid anymore, it's not possible to call release anymore as that calls into the Wayland connection and the call would fail. This method cleans up the data, so that the instance can be deleted or set up to a new zxdg_exporter_v2 interface once there is a new connection available.
It is suggested to connect this method to ConnectionThread::connectionDied:
Definition at line 49 of file xdgforeign.cpp.
- Returns
- The event queue to use for creating objects with this .
Definition at line 72 of file xdgforeign.cpp.
The export request exports the passed surface so that it can later be imported via XdgImporter::importTopLevel.
A surface may be exported multiple times, and each exported handle may be used to create an XdgImported multiple times.
- Parameters
-
Definition at line 77 of file xdgforeign.cpp.
- Returns
trueif managing a zxdg_exporter_v2.
Definition at line 62 of file xdgforeign.cpp.
Releases the zxdg_exporter_v2 interface.
After the interface has been released the instance is no longer valid and can be setup with another zxdg_exporter_v2 interface.
Definition at line 44 of file xdgforeign.cpp.
The corresponding global for this interface on the Registry got removed.
This signal gets only emitted if the got created by Registry::create
Sets the
queue to use for creating objects with this .
Definition at line 67 of file xdgforeign.cpp.
Setup this to manage the
.
When using Registry::create there is no need to call this method.
Definition at line 39 of file xdgforeign.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sat Mar 28 2020 08:22:44 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1XdgExporter.html | CC-MAIN-2020-16 | refinedweb | 354 | 50.53 |
On Wed, Apr 01, 2015 at 10:33:14AM +0000, Wang Nan wrote:> This patch introduces a --map-adjustment argument for perf report. The> goal of this option is to deal with private dynamic loader used in some> special program.> SNIP> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c> index 051883a..dc9e91e 100644> --- a/tools/perf/util/machine.c> +++ b/tools/perf/util/machine.c> @@ -1155,21 +1155,291 @@ out_problem:> return -1;> }> > +/*> + * Users are allowed to provide map adjustment setting for the case> + * that an address range is actually privatly mapped but known to be> + * ELF object file backended. Like this:> + *> + * |<- copied from libx.so ->| |<- copied from liby.so ->|> + * |<-------------------- MMAP area --------------------->|> + *> + * When dealing with such mmap events, try to obey user adjustment.> + * Such adjustment settings are not allowed overlapping.> + * Adjustments won't be considered as valid code until real MMAP events> + * take place. Therefore, users are allowed to provide adjustments which> + * cover never mapped areas, like:> + *> + * |<- libx.so ->| |<- liby.so ->|> + * |<-- MMAP area -->|> + *> + * This feature is useful when dealing with private dynamic linkers,> + * which assemble code piece from different ELF objects.> + *> + * map_adj_list is an ordered linked list. Order of two adjustments is> + * first defined by their pid, and then by their start address.> + * Therefore, adjustments for specific pids are groupped together> + * naturally.> + */> +static LIST_HEAD(map_adj_list);we dont like global stuff ;-)I think this belongs to the machine object, which is createdwithin the perf_session__new, so after options parsing.. humperhaps you could stash stash 'struct map_adj' objects andadd some interface to init perf_session::machines::hostonce it's created?> +struct map_adj {IMHO 'struct map_adjust' suits better.. using 'adjust' insteadof 'adj' is not such a waste of space and it's more readable(for all 'adj' instances in the patch)> + u32 pid;> + u64 start;> + u64 len;> + u64 pgoff;> + struct list_head list;> + char filename[PATH_MAX];> +};> +> +enum map_adj_cross {'enum map_adjust' ?> + MAP_ADJ_LEFT_PID,> + MAP_ADJ_LEFT,> + MAP_ADJ_CROSS,> + MAP_ADJ_RIGHT,> + MAP_ADJ_RIGHT_PID,> +};> +> +/*> + * Check whether two map_adj cross over each other. This function is> + * used for comparing adjustments. For overlapping adjustments, it> + * reports different between two start address and the length of> + * overlapping area. Signess of pgoff_diff can be used to determine> + * which one is the left one.> + *> + * If anyone in r and l has pid set as -1, don't consider pid.> + */SNIP> static int machine_map_new(struct machine *machine, u64 start, u64 len,> u64 pgoff, u32 pid, u32 d_maj, u32 d_min, u64 ino,> u64 ino_gen, u32 prot, u32 flags, char *filename,> enum map_type type, struct thread *thread)> {> + struct map_adj *pos;> struct map *map;> > - map = map__new(machine, start, len, pgoff, pid, d_maj, d_min,> - ino, ino_gen, prot, flags, filename, type, thread);could you please loop below into separate function?> + list_for_each_entry(pos, &map_adj_list, list) {> + u64 adj_start, adj_len, adj_pgoff, cross_len;> + enum map_adj_cross cross;> + struct map_adj tmp;> + int pgoff_diff;just curious.. how many --map-adjust entries do you normaly use?maybe if it's bigger number then a) using rb_tree might be fasterand b) using some sort of config file could be better way forinput might be easier> +> +again:> + if (len == 0)> + break;> +> + tmp.pid = pid;> + tmp.start = start;> + tmp.len = len;> +> + cross = check_map_adj_cross(&tmp,> + pos, &pgoff_diff, &cross_len);> +> + if (cross < MAP_ADJ_CROSS)> + break;> + if (cross > MAP_ADJ_CROSS)> + continue;> +> + if (pgoff_diff <= 0) {> + /*> + * |<----- tmp ----->|> + * |<----- pos ----->|> + */> +> + adj_start = tmp.start;SNIP> +int parse_map_adjustment(const struct option *opt, const char *arg, int unset);> +> #endif /* __PERF_MACHINE_H */> -- > 1.8.3.4> | https://lkml.org/lkml/2015/4/1/316 | CC-MAIN-2021-25 | refinedweb | 549 | 64.81 |
django-athumb 2.0
A simple, S3-backed thumbnailer field.# and making the user wait, we get that
out of the way from the beginning. This leads to a few big benefits:
* We never check for the existence of a file, after the first save/upload. We
assume it exists, and skip a whole lot of Disk I/O trying to determine that.
This was horrendously slow on sorl + S3, as it had to hit a remote service
every time it wanted to know if a thumbnail needed generating.
* Since we define every possible thumbnail in advance via models.py, we have
a defined set of possible values. They can also be more intelligently named
than other packages. It is also possible to later add more sizes/thumbs.
* This may be ran on your own hardware with decent speed. Running it on EC2
makes it just that much faster.
All code is under a BSD-style license, see LICENSE for details.
Source:
## Requirements
- python >= 2.5
- django >= 1.0
- boto
- PIL
## Installation
To install run
python setup.py install
which will install the application into python's site-packages directory.
##.
## Template Tags
When referring to media in HTML templates you can use custom template tags.
These tags can by accessed by loading the athumb template tag collection.
{% load thumbnail %}
If you'd like to make the athumb tags global, you can add the following to
your master urls.py file:
from django.template import add_to_builtins
add_to_builtins('athumb.templatetags.thumbnail')
Some backends (S3) support https URLs when the requesting page is secure.
In order for the https to be detected, the request must be placed in the
template context with the key 'request'. This can be done automatically by adding
'django.core.context_processors.request' to __TEMPLATE\_CONTEXT\_PROCESSORS__
in settings.py
#### thumbnail
Returns the URL for the specified thumbnail size (as per the object's
models.py Model class):
{% thumbnail some_obj.image '50x50_cropped' %}
or, to save the value in a template context variable:
{% thumbnail some_obj.image 'front_page' as 'some_var' %}
As long as you've got Django's request context processor in, the thumbnail tag
will detect when the current view is being served over SSL, and automatically
convert any http to https in the thumbnail URL. If you want to always force
SSL for a thumbnail, add it as an argument like this:
{% thumbnail some_obj.image '60x60' force_ssl=True %}
To put the thumbnail URL on the context instead of just rendering
it, finish the tag with `as [context_var_name]`:
{% thumbnail image '60x60' as 'thumb' %}
<img src="{{thumb}}"/>
## To-Do
* See the issue tracker for a list of outstanding things needing doing.
## Change Log
### 2.0
* Complete re-work of the way thumbnails are specified in models.py.
* Removal of the attribute-based image field size retrieval, since we no
longer are just limited to dimensions.
* Further misc. improvements.
### 1.0
* Initial release.
- Downloads (All Versions):
- 0 downloads in the last day
- 54 downloads in the last week
- 235 downloads in the last month
- Author: Gregory Taylor
- License: BSD License
- Platform: any
- Requires django, boto, pil
- Categories
- Package Index Owner: gtaylor
- DOAP record: django-athumb-2.0.xml | https://pypi.python.org/pypi/django-athumb | CC-MAIN-2015-40 | refinedweb | 525 | 66.33 |
I'm trying to create a program with main frame and three buttons that opens new frame. I was able to get URL .gif image as background for main frame but I'm having difficulty changing URL .gif image for new frame when it is loaded.
I been trying to figure it out but there is not much information regarding new frame window with URL .gif background. Can any one give me a hand? Thank you
from Tkinter import*
import urllib
import base64
import Tkinter
def epl_Window():
epl = Tk()
epl.title("E")
URL = "h"
epl.a = urllib.urlopen(URL)
raw_input = epl.a.read()
epl.a.close()
c = base64.encodestring(raw_input)
image = PhotoImage(data=c)
label = Label(image=image)
label.pack()
Your program does not work for 2 reasons related to
epl_Window() method:
Tk()
labelto
epl
You can fix those 2 problems respectively by:
Toplevel()to change the line
epl = Tk()to
epl = Tkinter.Toplevel()
label = Label(image = image)to
label = Label(epl, image = image)
Once you apply the modifications above, you will get this (I clicked on the 3 buttons):
import Tkinter as Tk
label = Tk.Label(...) | https://codedump.io/share/uxgaFR96s9wl/1/setting-url-gif-background-but-not-working-with-new-frame-on-python-2 | CC-MAIN-2018-05 | refinedweb | 187 | 58.79 |
Adrian Bunk wrote:> On Fri, Nov 05, 2004 at 09:10:08PM -0500, Len Brown wrote:> >>On Fri, 2004-11-05 at 16:50, Adrian Bunk wrote:>>>>>The patch below completely removes 7 functions that were>>>EXPORT_SYMBOL'ed but had exactly zero users in the kernel and makes>>>another one that was previously EXPORT_SYMBOL'ed static.>>>>>>It also removes another unused global function to completely remove>>>drivers/acpi/hardware/hwtimer.c which contained no function used>>>anywhere in the kernel.>>>>>>Please comment on whether this patch is correct or whether in-kernel>>>users of these functions are pending.>>>>>>>>>diffstat output:>>> drivers/acpi/acpi_ksyms.c | 8 ->>> drivers/acpi/events/evxfevnt.c | 191 ----------------------------->>> drivers/acpi/hardware/Makefile | 2>>> drivers/acpi/hardware/hwtimer.c | 200>>>------------------------------->>> drivers/acpi/resources/rsxface.c | 52 -------->>> drivers/acpi/scan.c | 6>>> drivers/acpi/utilities/utxface.c | 89 ------------->>> include/acpi/achware.h | 17 -->>> include/acpi/acpi_bus.h | 1>>> include/acpi/acpixf.h | 24 --->>> 10 files changed, 6 insertions(+), 584 deletions(-)>>>>No, I can't apply this one as-is.>>Some of these routines are not called now>>simply because Linux/ACPI is evolving and we don't>>yet take advantage of some of the things supported>>by ACPICA core we use.> > > I understand this, that's why I asked for comments on this patch.> > But it seems a bit strange for me that e.g. the file hwtimer.c was added > nearly three years ago and exports functions - but currently has exactly > zero users. One effect is a needless code bloat for every single user > with CONFIG_ACPI_INTERPRETER=y.> > Removing unused global functions is a pretty cheap way to get the kernel > smaller without any loss of functionality. Please check which of the > functions touched in my patch will actually be used in the foreseeable > future (if it would e.g. take another three years until hwtimer.c will > be used, it might be better to re-add it when it will actually be used).Suggestion that satisfies both of you, I think:#undef ACPI_FUTURE_USAGE#ifdef ACPI_FUTURE_USAGEtons of unused exported functions#endif /* ACPU_FUTURE_USAGE */This is what is being done in at least one case in the kernel networksubsystem, incremental patches adds new functions, to be used byfuture patches, but sometimes Real Life (tm) gets in the way and theprogrammer stalls development for some time, no problem, just ifdef it.When, in the future, some functions start being used, hey, very easyto remove the #ifdef.Even for people trying to debug such subsystems eventually to getsomething working its _nice_ to know at first glance what is reallybeing used, speeding up the process for the benefit or everybody.Best Regards,- Arnaldo> > BTW: ACPI has tons of other unused global functions.> And other areas as well, keep up the good work Adrian.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2004/11/6/81 | CC-MAIN-2017-43 | refinedweb | 487 | 58.28 |
a particular donut shop) you’d have to call your friend and ask for more information. Because this would be confusing and inefficient (particularly for your mailman), in most countries, all street names and house addresses within a city are required to be unique.
Similarly, C++ requires that all identifiers (variable and/or function names) be non-ambiguous. If two identifiers are introduced into the same program in a way that the compiler can’t tell them apart, the compiler or linker will produce an error. This error is generally referred to as a naming collision (or naming conflict).
An example of a naming collision
a.cpp:
b.cpp:
main.cpp:
Files a.cpp, b.cpp, and main.cpp will all compile just fine, since individually there’s no problem. However, when a.cpp and b.cpp are put in the same project together, a naming conflict will occur, since the function doSomething() is defined in both. This will cause a linker error.
Most naming collisions occur in two cases:
1) Two files are added into the same project that have a function (or global variable) with the same name (linker error).
2) A code file includes a header file that contains an identifier that conflicts with something else (compile error). We’ll discuss header files in the next lesson.
As programs get larger and use more identifiers, the odds of a naming collision being introduced increases significantly. The good news is that C++ provides plenty of mechanisms for avoiding naming collisions (such as local scope, which keeps variables inside functions from conflicting with each other, and namespaces, which we’ll introduce shortly), so most of the time you won’t need to worry about this.
The std namespace
When C++ was originally designed, all of the identifiers in the C++ standard library (such as cin and cout) were available to be used directly. However, this meant that any identifier in the standard library could potentially conflict with a name you picked for your own identifiers. Code that was working might suddenly have a naming conflict when you #included a new file from the standard library. Or worse, programs that would compile under one version of C++ might not compile under a future version of C++, as new functionality introduced into the standard library could conflict. So C++ moved all of the functionality in the standard library into a special area called a namespace.
Much like a city guarantees that all roads within the city have unique names, a namespace guarantees that identifiers within the namespace are unique. This prevents the identifiers in a namespace from conflicting with other identifiers.
It turns out that std::cout’s name isn’t really “std::cout”. It’s actually just “cout”, and “std” is the name of the namespace it lives inside. All of the functionality in the C++ standard library is defined inside a namespace named std (short for standard). In this way, we don’t have to worry about the functionality of the standard library having a naming conflict with our own identifiers.
We’ll talk more about namespaces in a future lesson and also teach you how to create your own. For now, the only thing you really need to know about namespaces is that whenever we use an identifier (like std::cout) that is part of the standard library, we need to tell the compiler that that identifier lives inside the std namespace.
Rule: When you use an identifier in a namespace, you always have to identify the namespace along with the identifier
Explicit namespace qualifier std::
The most straightforward way to tell the compiler that cout lives in the std namespace is by using the “std::” prefix. For example:
This is the safest way to use cout, because there’s no ambiguity about where cout lives (it’s clearly in the std namespace).
C++ provides other shortcuts for indicating what namespace an identifier is part of (via using statements). We cover those in lesson 4.3c -- Using statements. | http://www.learncpp.com/cpp-tutorial/1-8a-naming-conflicts-and-the-std-namespace/ | CC-MAIN-2017-26 | refinedweb | 670 | 61.77 |
- kubectl
- Helm
- Next steps
- Additional information
Required tools
Before deploying GitLab to your Kubernetes cluster, there are some tools you must have installed locally.
kubectl
kubectl is the tool that talks to the Kubernetes API. kubectl 1.12 or higher is required and it needs to be compatible with your cluster (+/- 1 minor release from your cluster).
> Install kubectl locally by following the Kubernetes documentation.
The server version of kubectl cannot be obtained until we connect to a cluster. Proceed with setting up Helm.
Helm
Helm is the package manager for Kubernetes. The
gitlab chart is tested and
supported with Helm v2 (2.12 or higher required, excluding 2.15).
Starting with version
v3.0.0 of the chart, Helm v3 (3.0.2 or higher required)
is also fully supported.
[Helm3].
helm(client) installed locally, and
tiller(server) installed inside Kubernetes.
Getting Helm
You can get Helm from the project’s releases page, or follow other options under the official documentation of installing Helm.
Tiller is deployed into the cluster and interacts with the Kubernetes API to deploy your applications. If role based access control (RBAC) is enabled, Tiller will need to be granted permissions to allow it to talk to the Kubernetes API.
If RBAC is not enabled, skip to initializing Helm.
If you are not sure whether RBAC is enabled in your cluster, or to learn more, read through our RBAC documentation.
Preparing for Helm with RBAC
kubectlinstalled and it’s up to date. Older versions do not have support for RBAC and will generate errors.
Helm v3.0 does not install Tiller in the cluster and as such uses the user’s RBAC permissions to peform the deployment of the chart.
Prior versions of Helm do install Tiller on the cluster and will need to be granted permissions to perform operations. These instructions grant cluster wide permissions, however for more advanced deployments permissions can be restricted to a single namespace.
To grant access to the cluster, we will create a new
tiller service account
and bind it to the
cluster-admin role:
For ease of use, these instructions will utilize the sample YAML file in this repository. To apply the configuration, we first need to connect to the cluster.
Connecting to the GKE cluster
The command to connect to the cluster can be obtained from the Google Cloud Platform Console by the individual cluster, by looking for the Connect button in the clusters list page.
Alternatively, use the command below, filling in your cluster’s information:
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
Connecting to an EKS cluster
For the most up to date instructions, follow the Amazon EKS documentation on connecting to a cluster.
Connect to.
Upload the RBAC config
Upload the RBAC config in GKE
For GKE, you need to grab the admin credentials:
gcloud container clusters describe <cluster-name> --zone <zone> --project <project-id> --format='value(masterAuth.password)'
This command will output the admin password. We need the password to authenticate
with
kubectl and create the role.
We will also create an admin user for this cluster. Use a name you prefer but for this example we will include the cluster’s name in it:
CLUSTER_NAME=name-of-cluster kubectl config set-credentials $CLUSTER_NAME-admin-user --username=admin --password=xxxxxxxxxxxxxx kubectl --user=$CLUSTER_NAME-admin-user create -f
Upload the RBAC config in non-GKE clusters
For other clusters like Amazon EKS, you can directly upload the RBAC configuration:
kubectl create -f
Initializing Helm
If Helm v3 is being used, there no longer is an
init sub command and the
command is ready to be used once it is installed. Otherwise if Helm v2 is
being used, then Helm needs to deploy Tiller with a service account:
helm init --service-account tiller
If your cluster previously had Helm/Tiller installed, run the following command to ensure that the deployed version of Tiller matches the local Helm version:
helm init --upgrade --service-account tiller
Next steps
Once kubectl and Helm are configured, you can continue to configuring your Kubernetes cluster.
Additional information
The Distribution Team has a training presentation for Helm Charts.
Templates
Templating in Helm is done via golang’s text/template and sprig.
Some information on how all the inner workings behave:
Tips and tricks
Helm repository has some additional information on developing with Helm in it’s tips and tricks section.
Local Tiller
If you are using Helm v2 and not able to run Tiller in your cluster, a script is included that should allow you to use Helm with running Tiller in your cluster. The script uses your personal Kubernetes credentials and configuration to apply the chart.
To use the script, skip this entire section about initializing Helm. Instead, make sure you have Docker installed locally and run:
bin/localtiller-helm --client-only
After that, you can substitute
bin/localtiller-helm anywhere these
instructions direct you to run | https://docs.gitlab.com/12.10/charts/installation/tools.html | CC-MAIN-2020-50 | refinedweb | 822 | 51.78 |
Announcing .NET Multi-platform App UI Preview 3
With .NET 6 Preview 3 we are shipping the latest progress for mobile and desktop development with .NET Multi-platform App UI. This release adds the Windows platform with WinUI 3, improves the base application and startup builder, adds native lifecycle events, and continues to add more UI controls and layout capabilities. We are also introducing new semantic properties for accessibility. As we explore each of these in a bit more detail, we invite you to
dotnet new along with us and share your feedback.
Windows Desktop Now Supported
Project Reunion 0.5 has shipped! Now Windows joins Android, iOS, and macOS as target platforms you can reach with .NET MAUI! To get started, follow the Project Reunion installation instructions. For this release we have created a sample project that you can explore and run from the 16.10 preview of Visual Studio 2019.
Once we have the necessary .NET 6 build infrastructure for Project Reunion, we will add Windows to our single project templates.
Getting Started
As we are still in the early stages of preview, the process of installing all the dependencies you need for mobile and desktop development is a bit manual. To help ourselves, and you, Jonathan Dick has put together a useful
dotnet tool that evaluates your system and gathers as many of the required pieces as it can. To get started, install
maui-check globally from the command line:
dotnet tool install -g Redth.Net.Maui.Check
Source:
Now run
> maui-check and follow the instructions. Once you succeed, you’re ready to create your first app:
dotnet new maui -n HelloMaui
Step-by-step instructions for installing and getting started may also be found at
Your First Application
.NET MAUI starts every application using the Microsoft.Extensions HostBuilder. This provides a consistent pattern for application developers as well as library maintainers to quickly develop native applications. Each platform has a different starting point, and the consistent point of entry for your application is
Startup.cs. Here is the most basic example:
public class Startup : IStartup { public void Configure(IAppHostBuilder appBuilder) { appBuilder .UseMauiApp<App>(); } }
This is where you can do such things as register fonts and register compatibility for Xamarin.Forms renderers or your own custom renderers. This is also where you introduce your
App, an implementation of
Application which is responsible for (at least) creating a new
Window:
public partial class App : Application { public override IWindow CreateWindow(IActivationState activationState) { return new MainWindow(); } }
To complete the path to your content, a view is added to the
MainWindow:
public class MainWindow : IWindow { public MainWindow() { Page = new MainPage(); } public IPage Page { get; set; } public IMauiContext MauiContext { get; set; } }
And that’s it! You now have content in a window.
Native Lifecycle Events
Expanding on the startup extensions, Preview 3 introduces
ConfigureLifecycleEvents for easily hooking into native platform lifecycle events. This is an important introduction, especially for the single project experience, to simplify initialization and configuration needed by many libraries.
As a basic example, you can hook to the Android back button event and handle it as needed:
public class Startup : IStartup { public void Configure(IAppHostBuilder appBuilder) { appBuilder .UseMauiApp<App>() .ConfigureLifecycleEvents(lifecycle => { #if ANDROID lifecycle.AddAndroid(d => { d.OnBackPressed(activity => { System.Diagnostics.Debug.WriteLine("Back button pressed!"); }); }); #endif }); } }
Now let’s look at how libraries can use these methods to streamline their platform initialization work. Essentials (Microsoft.Maui.Essentials), a library for cross-platform non-UI services that is now a part of .NET MAUI, makes use of this to configure everything needed for all platforms in a single location:
public class Startup : IStartup { public void Configure(IAppHostBuilder appBuilder) { appBuilder .UseMauiApp<App>() .ConfigureEssentials(essentials => { essentials .UseVersionTracking() .UseMapServiceToken("YOUR-KEY-HERE"); }); } }
Within the Essentials code, you can see how the
ConfigureEssentials extension method is created and hooks into the platform lifecycle events to greatly streamline cross-platform native configuration.
public static IAppHostBuilder ConfigureEssentials(this IAppHostBuilder builder, Action<HostBuilderContext, IEssentialsBuilder> configureDelegate = null) { builder.ConfigureLifecycleEvents(life => { #if __ANDROID__ Platform.Init(MauiApplication.Current); life.AddAndroid(android => android .OnRequestPermissionsResult((activity, requestCode, permissions, grantResults) => { Platform.OnRequestPermissionsResult(requestCode, permissions, grantResults); }) .OnNewIntent((activity, intent) => { Platform.OnNewIntent(intent); }) .OnResume((activity) => { Platform.OnResume(); })); #elif __IOS__ life.AddiOS(ios => ios .ContinueUserActivity((application, userActivity, completionHandler) => { return Platform.ContinueUserActivity(application, userActivity, completionHandler); }) .OpenUrl((application, url, options) => { return Platform.OpenUrl(application, url, options); }) .PerformActionForShortcutItem((application, shortcutItem, completionHandler) => { Platform.PerformActionForShortcutItem(application, shortcutItem, completionHandler); })); #elif WINDOWS life.AddWindows(windows => windows .OnLaunched((application, args) => { Platform.OnLaunched(args); })); #endif }); if (configureDelegate != null) builder.ConfigureServices<EssentialsBuilder>(configureDelegate); return builder; }
You can see the full class on dotnet/maui. We are excited to see more libraries take advantage of this pattern to streamline their usage.
Updates to Controls and Layouts
Work continues enabling more controls, properties, and layout options in .NET MAUI, in addition to the existing compatibility renderers brought in from Xamarin.Forms. If you begin your application with a startup like the code above, then you’ll be using only the handlers currently implemented. To see what’s currently implemented, you can review the Handlers folder at dotnet/maui.
To track incoming work we have a Project Board setup for all the handlers we are accepting pull requests for. Several developers have already contributed, and early feedback indicates this architecture provides a much improved experience for ease of contribution.
“Porting handlers are fun. Any mid-senior developer can handle if they had a little bit of a proper understanding of how Xamarin Forms renderers and how Xamarin in general work.” – Burak
Special thanks to these community contributors:
- almirvuk
- AmrAlSayed0
- bkaankose
- brunck
- hevey
- pictos
- rogihee
Layouts have also received some updates in Preview 3. The
Grid now supports absolute sizes and auto (sizes to the content). LayoutAlignment options are also now available for
Grid and
StackLayout so you can begin positioning views with
HorizontalLayoutAlignment and
VerticalLayoutAlignment properties.
Semantic Properties for Accessibility
We have been working with many customers to better understand common difficulties around implementing accessibility across multiple native platforms, and how we might make this easier in .NET MAUI. One of the initiatives to come from this is adding new semantic properties to map cross-platform properties to native accessibility properties.
<Label Text="Welcome to .NET MAUI!" SemanticProperties. <Label Style="{DynamicResource Glyph}" Text="" SemanticProperties. <Label Text="Click the button. You know you want to!" FontSize="18" x: <Button Text="Click Me!" Clicked="OnButtonClicked" SemanticProperties.
For more information, the original specification and discussion is on this dotnet/maui issue.
Share Your Feedback
We are excited for this release, and look forward to your feedback. Join us at dotnet/maui and let us know what you think of these improvements.
With each new preview of .NET 6 you can expect more and more capabilities to “light up” in .NET MAUI. For now, please focus on the “dotnet new” experience. Xamarin.Forms developers should look forward to a mid-year release when we’ll encourage more migration exploration.
Great work from the team 💪🏽
Hopefully this technology will leave beta by .NET 8.0.
It’s amazing, but I feel that leaving mac and linux out of this feature renders the whole thing useless or at least half-baked :/
Anyway great work, I’m sure it’s not an easy feat 🙂
Thx Lucas!
Mac is in there. You can use Mac Catalyst which is what .NET MAUI uses by default, or you can use the Cocoa/AppKit Mac SDK that also ships with .NET 6.
For Linux, if you’re interested and or wish to contribute, please join this conversation.
Thanks for the answer David, I’ll have a look!
This is awesome.Thanks
this comment has been deleted.
Hey David! Thankful
Does this also run on Windows 7? like WPF
This is running with Project Reunion 0.5 and WinUI 3 which works down to Windows 10 October 2018 Update (Version 1809, OS build 17763).
so, that is why dotnet is not popular !!!!
I will never use MAUI/WinUI 3.0 if I am the leader of a company, because customer is there, I will not give up million’s of customer.
(and there’s tons of the 3rd party company support issue as an un-popularly technology)
That’s also why flutter / electron / qt is way popular than .net winform/wpf.
xamarin is amazing / also the unity3d, but the nature of drop customer/ drop support on there OS/Software in .net team, eventually make no one want to follow.
Windows 7 is from 2009 and is not supported anymore.
Let it go.
Windows 7, Windows 8 and Windows 8.1 still have ~20% of market for today (among all Windows versions) –
Many companies cannot ignore such a big customer group.
That is the real state of customer expectations for a software, not the end of support for Windows 7.
P. S. Windows 7 is still supported via Extended Security Updates (ESU) till Jan 2023
P. P. S. Project UNO (another cross-platform UI framework) supports Windows 7
we appreciate that MS and DNF bring this great lang to us, but we want to “support” the platform that customers in using , no matter MS support or not. Beside, do u notice that the first language/platform that doping customer is MS’s dotnet (Do you notice that MS is ongoing to Golang, Rust and Pyhon that will support every OS that customer still in using). In China, there is a wise word “枪打出头鸟” , I really hope dotnet team have this world in mind. Look at Java, Golang, Rust, and Python , Do their “support win,linux,mac” has a condition on the OS support from Os’s builder company ? no, they just say that they support win,liux,and mac, and the fact is their new version only drop the OS that customer truly not used.
I feel like that Ms is using .Net as a weapon , to kill the old customer who not pay them for new OS.
and because this, windows and UWP is already in the dead circle: win force dev to create new app for new OS => customer like you and me do not want wast money to buy new computer => dev’s company need to support their customer who cared by MS or Not => dev’s company looking for lanauge/platform that can really “crossplatform” => dev’s company out of MS’s ecosystem => dev’s gone => no app then no customer => MS’s new platfom/technology/lang dead => MS then create another “un-crossplatform” crossplatform-platfrom => …… => MS lost thire leadership. => and the circle is still continue (eg. WinUI)
Why, you can use dotnet with Windows 7.
There’s WinForms, WPF and the .NET Framework all ready for you.
The newer .NET Core (resp. now .NET 5 and .NET 6) and MAUI are new tech stacks for non-legacy environments.
If you belong to the minority who still has to support 10 year old operating systems, you’re covered. You just can’t use the brand new stuff.
I’m glad that the majority can get clean, efficient new tech that isn’t burdened with backwards compatibility.
Also, you are comparing native UI solutions with the likes of Electron, which is essentially a container in which your app runs (based on Chromium), i.e. non-native UI. Electron doesn’t support anything older than Windows 7 either and it’s likely that at some point, newer Electron versions will drop Windows 7 support too.
Why, you can use dotnet with Windows 7. There’s WinForms, WPF and the .NET Framework all ready for you.
We are talking not about using dotnet on Windows 7, but about cross-platform UI frameworks which supports Windows 7 among many other OS versions. WinForms, WPF and the .NET Framework cannot be used on Linux and Mac for example.
The newer .NET Core (resp. now .NET 5 and .NET 6) and MAUI are new tech stacks for non-legacy environments.
.NET 5 and upcoming .NET 6 still supports Windows 7. Ony MAUI drops supporting this OS.
you are comparing native UI solutions with the likes of Electron, which is essentially a container in which your app runs (based on Chromium), i.e. non-native UI.
Other examples of cross-platform UI frameworks that support Windows 7 are Uno platform and Qml.Net
Electron doesn’t support anything older than Windows 7 either
We are not talking about supporting Windows XP and Vista (that has less that 1% of market share), we need Windows 7, Windows 8 and Windows 8.1 support (that still have ~21% of market for today among all Windows versions) –
it’s likely that at some point, newer Electron versions will drop Windows 7 support
There are no official announcements about it..)
Exactly , I agree with You
Hi
Can I developed read only production app with this release?
No, I recommend waiting a few more releases. Some new API design may yet change, and we need to especially enable the rest of the layout work.
For the moment, please clone the samples and explore the fundamentals that are there. Let us know if you have any feedback on that. Then look forward to another few releases before you decide to start a prod app.
David I hope things like SemanticProperties.Hint get simplified to Semantic.Hint etc. Simpler and reads better in the code.
Hey is it going to be possible to render MAUI inside android for example? As in Xamarin.Forms sometimes great UI components exist but you have a native app and there is no documentation if this is possible. Since now it uses the builder pattern would it become easier to render for example some button or part of the app UI from a MAUI control while you still have your native android app with its UI for example?
aaaa
Yes, this same embedding scenario you have today in Xamarin.Forms will be supported in .NET MAUI.
Well done, keep it up.
@David Ortinau – there are a lot of requirements to be installed with MAUI, namely those relating to Android and Mac/iOS… is/will there be an option to do development with MAUI on Widows without installing other OSs dependencies? This is just for development, not for producing deployment/release builds.
Still waiting for the new WPF hires. UWP/Reunion is a waste of time, lip service. Do you truly expect me to trudge along with a framework that has no public source code and no actual desktop controls like menubar/MDI/docking? Surely you jest. I learned during the MFC days why app developers need any and all framework source code and that hasn’t changed in 20 years. I’m not turning my powerful desktop apps into bland transplatform apps that don’t visually scale to match the user workload AT ALL, merely to justify the political aftermath of Windows 8. Let WinRT die already, even the .NET team doesn’t respect it anymore. Managed code won, get over it.
Line of business developers have been left now for over 15+ years with buggy WPF, unfortunately you will be waiting another 20 years if you expect any change as Microsoft will be too busy trying to copy the next shiny thing Apple does. Even open sourcing WPF didn’t help as they employed nobody to manage the commits.
Microsoft cannot seem to grasp the wild crazy idea of multi window enterprise applications controlled via mouse and keyboard. WOW!! Wild eh!
Guess what! At a huge IT company, we are just starting new medium-sized commercial PC app project (for Win platform) . We chose .NET Forms! Well tested, well known, very simple. Cons are considered and known. Still. | https://devblogs.microsoft.com/dotnet/announcing-net-multi-platform-app-ui-preview-3/?WT.mc_id=helloworld-17228-cxa | CC-MAIN-2022-21 | refinedweb | 2,604 | 56.96 |
by Alfarhan Zahedi
How to schedule jobs in a Django application using Heroku Scheduler
Recently, I published my first Django application on Heroku.
The application is fairly simple — it lists the score associated with every classical problem on SPOJ.
SPOJ — Sphere Online Judge — is a problemset archive, online judge and contest hosting service accepting solutions in many languages.
You can find the application live here.
The application uses the Python libraries
bs4 and
requests to scrape the contents of the aforementioned website, obtain the required details for every problem (namely — problem code, problem name, users and score), and store them in a database.
Now, the score associated with the problems on SPOJ is dynamic. It is calculated using the following formula:
80 / (40 + number_of_people_who_have_solved_the_problem)
So, the score associated with the problems on SPOJ changes as number_of_people_who_have_solved_the_problem changes.
Hence, the data collected by my application will be rendered useless after a certain interval of time. I need to set up a scheduler to keep my database updated.
Now, it’s a dead simple application. So I wanted to set up the scheduler with the least amount of configuration and code possible.
Custom Django Management Commands and Heroku Scheduler to the rescue!
Let us understand our two saviors.
1. Custom Django Management Commands
Custom Django Management Commands are structured as Python classes that inherit their properties and behavior from
django.core.management.base.BaseCommand class.
They are used to add a
manage.py action for a Django app.
runserver or
migrate are two such actions.
A typical example of such a class would be:
from django.core.management.base import BaseCommand
class Command(BaseCommand): help = "<appropriate help text here>" def handle(self, *args, **options): self.stdout.write("Hello, World!")
The class must be named
Command, and subclass
BaseCommand.
help should hold a short description of the command, which will be printed in help messages.
handle(self, *args, **options) defines the actual logic of the command. In this case, we are just writing the string
Hello, World! into the standard output. In my case,
handle(self, *args, **options) performs the task of scraping the website — spoj.com and updating the database if the score associated with any of the problem changes.
handle(self, *args, **options) is automatically run whenever the following command is used:
python manage.py <name of the python script containing the management class>
If the name of the script is, say,
script.py, then the command would be:
python manage.py script
Notice the handle method declares three input argument:
self to reference the class instance,
*args to reference arguments of the method itself, and
**option to reference arguments passed as part of the management command.
Where in the project structure does this
script.py go?
(Here,
script.py refers to the name of the script containing the custom Django management command.)
It’s quite simple. The official documentation explains it well:
Just add a
management/commandsdirectory to the application. Django will register a
manage.pycommand for each Python module in that directory whose name doesn’t begin with an underscore.
For example:
polls/ __init__.py models.py management/ __init__.py commands/ __init__.py _private.py closepoll.py tests.py views.py
In this example, the
closepollcommand will be made available to any project that includes the
pollsapplication in
INSTALLED_APPS.
The
_private.pymodule will not be available as a management command.
The
closepoll.pymodule has only one requirement – it must define a class
Commandthat extends
BaseCommandor one of its subclasses.
And so, if we run the following command in our terminal:
python manage.py closepoll,
handle(self, *args, **options)inside
closepoll.pywill be run, and any logic/tasks contained inside the aforementioned function will be executed.
My project structure is as follows:
spojscore│ .gitignore│ manage.py│ Procfile│ README.md│ requirements.txt│ runtime.txt│├───core│ │ admin.py│ │ apps.py│ │ models.py│ │ tests.py│ │ views.py│ │ __init__.py│ ││ ├───management│ │ │ __init__.py│ │ ││ │ ├───commands│ │ script.py│ │ __init__.py│ │ │ ││ ├───static│ │ └───core│ │ ├───css│ │ │ style.css│ │ ││ │ └───img│ │ favicon.png│ │ logo.png│ ││ ├───templates│ └───core│ core.html│└───spojscore settings.py urls.py wsgi.py __init__.py
Here,
script.py contains the custom management command — Python code to scrape spoj.com, collect details of all the classical problems, and update the database accordingly.
If you see, it’s situated inside
core\management\commands.
If you are interested, you can find
script.py here.
I think its clear now that I can scrape spoj.com and obtain the desired data by simply running
python manage.py script from the terminal.
So, to keep my database updated, I just need to run the above command at least once a day.
2. Heroku Scheduler
As per Heroku’s website:
Scheduler is a free add-on for running jobs on your app at scheduled time intervals, much like
cronin a traditional server environment.
A dashboard allows you to configure jobs to run every 10 minutes, every hour, or every day, at a specified time. When invoked, these jobs will run as one-off dynos and show up in your logs as a dyno named like
scheduler.X.
Once you’ve deploying the application, install the Heroku Scheduler add-on.
To schedule a frequency and time for a job, open the Heroku Scheduler dashboard by finding the app in My Apps, clicking “Overview”, then selecting “Heroku Scheduler” from the Installed add-ons list.
On the Scheduler Dashboard, click “Add Job…”, enter a task, select a frequency, dyno size, and next run time.
In my case, the task is
python manage.py script, which is to be executed daily (frequency) using my free dynos (dyno size) at 00:00 UTC (next run time).
That’s it!
My database will be updated at 00:00 UTC every day, and I didn’t have to install any extra Python libraries, or write any extra pieces of code. Yay!
If you get stuck anywhere, drop a comment and I will try my best to help you.
Some final notes:
- Heroku’s official website says that — “Scheduler job execution is expected but not guaranteed. Scheduler is known to occasionally (but rarely) miss the execution of scheduled jobs. If scheduled jobs are a critical component of your application, it is recommended to run a custom clock process instead for more reliability, control, and visibility.” This point should be kept in mind while using Heroku Scheduler.
Mine is dead simple application which uses the Heroku Scheduler to run a simple script just once a day. So, I guess it would do a great job!
- My application, I suppose, is useful for competitive programmers. Why? I have explained it in great detail here.
- You can find the source code of my application here.
A piece from my personal musings:
I am just another self-taught programmer.
I have being writing code for a couple of years now, and have always wanted to write about my experiences, endeavors, failures and successes.
But alas, I could not.
I thought that my endeavors were not exciting enough or that my experiences were not going to help anyone. And so I restrained myself from writing about them.
To be honest, I think the same now.
So, how come I wrote this article?
Well, this is going to be my first of many articles.
And the reason for the change, you ask?
A newsletter.
Last week, as usual, I received the weekly newsletter from CSS-Tricks — “This week in Web Design and Development”.
Here, is an excerpt from the same:
It’s buck wild to have so many helpful resources available to help us at any moment: from blog posts and books to random node.js conference talks that only have 8 views and 7 of them are now mine. So I think this weekend has reinforced my faith in blogging and sharing what you know, where random notes left on some developer’s old blog have helped me tremendously.
Anywho, on a similar note, I’ve been thinking a bunch about how social networks prioritize fame over value. If you publish something on Medium for example and it only gets a single clap then it makes you feel like, why bother? What’s the point if no-one’s reading this thing? But I think we have to fight that inclination to be woo’d with fame and social-network notoriety because I wonder how many helpful blog posts and videos weren’t made simply because someone thought they weren’t going to get half a million likes or retweets from it.
My advice after learning from so many helpful people this weekend is this: if you’re thinking of writing something that explains something you struggled with, do it! Don’t worry about the views and likes and Internet hugs. If you’ve struggled with figuring out this one weird thing then jot it down, even if it’s unedited and it uses too many commas and you don’t like the tone of it.
That’s because someone like me is bound to find what you’ve written and it’ll make their whole weekend a lot less stressful than it could’ve been.
That’s it. Those few lines inspired me to write about my endeavors and experiences.
Maybe, you should too. | https://www.freecodecamp.org/news/scheduling-jobs-in-a-django-application-using-heroku-scheduler-13c971a22979/ | CC-MAIN-2019-43 | refinedweb | 1,538 | 65.83 |
Georgia 🇬🇪
Get import and export customs regulation before travelling to Georgia. Items allowed to import are Goods weighing less than 30 kilograms and with a value less than GEL3000. Prohibited items are Counterfeit and pirated goods. Some of restricted items are Travellers may import up to five animals not for commercial use. Georgia is part of Asia with main city at
Tbilisi. Its Developing country with a
population of 4M people. The main currency is Lari. The languages
spoken are Georgian.
👍 Developing 👨👩👦👦 4M people
chevron_left
import
export
Useful Information
Find other useful infromation when you are travelling to other country like visa details, embasssies, customs, health regulations and so on. | https://visalist.io/georgia/customs | CC-MAIN-2020-24 | refinedweb | 110 | 59.09 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
Spreadsort combines generic implementations of multiple high-speed sorting algorithms that outperform those in the C++ standard in both average and worst case performance when there are over 1000 elements in the list to sort.
They fall back to STL std::sort on small data sets. example directory.
string_sort
Unlike many radix-based algorithms, the underlying
algorithm is designed around worst-case performance.
It performs better on chunky data (where it is not widely distributed),
so that on real data it can perform substantially better than on random
data. Conceptually,
spreadsort
can sort any data for which an absolute ordering can be determined, and
spreadsort
is sufficiently flexible that this should be possible.
string_sort
Situations where
is fastest relative to std::sort:
spreadsort
has an optimization to quit early in this case.has an optimization to quit early in this case.
spreadsort
Situations where
is slower than std::sort:
spreadsort
are faster on reverse-ordered data than randomized data, but std::sort speeds up more in this special case.are faster on reverse-ordered data than randomized data, but std::sort speeds up more in this special case.
spreadsort
to std::sort if the input size is less than 1000, so performance is identical for small amounts of data in practice.to std::sort if the input size is less than 1000, so performance is identical for small amounts of data in practice.
spreadsort
These functions are defined in
namespace boost::sort::spreadsort.
Each of
,
integer_sort
,
and
float_sort
have 3 main versions: The base version, which takes a first iterator
and a last iterator, just like std::sort:
string_sort
integer_sort(array.begin(), array.end()); float_sort(array.begin(), array.end()); string_sort(array.begin(), array.end());
The version with an overridden shift functor, providing flexibility in
case the
operator>> already does something other than
a bitshift. The rightshift functor takes two args, first the data type,
and second a natural number of bits to shift right.
For
this variant is slightly different; it needs a bracket functor equivalent
to
string_sort
operator[], taking a number corresponding to the character
offset, along with a second
getlength functor to get the
length of the string in characters. In all cases, this operator must
return an integer type that compares with the
operator<
to provide the intended order (integers can be negated to reverse their
order).
In other words (aside from negative floats, which are inverted as ints):
rightshift(A, n) < rightshift(B, n) -> A < B A < B -> rightshift(A, 0) < rightshift(B, 0)
integer_sort(array.begin(), array.end(), rightshift());
string_sort(array.begin(), array.end(), bracket(), getsize());
See rightshiftsample.cpp for a working example of integer sorting with a rightshift functor.
And a version with a comparison functor for maximum flexibility. This functor must provide the same sorting order as the integers returned by the rightshift (aside from negative floats):
rightshift(A, n) < rightshift(B, n) -> compare(A, B) compare(A, B) -> rightshift(A, 0) < rightshift(B, 0)
integer_sort(array.begin(), array.end(), negrightshift(), std::greater<DATA_TYPE>());
Examples of functors are:
struct lessthan { inline bool operator()(const DATA_TYPE &x, const DATA_TYPE &y) const { return x.a < y.a; } };
struct bracket { inline unsigned char operator()(const DATA_TYPE &x, size_t offset) const { return x.a[offset]; } };
struct getsize { inline size_t operator()(const DATA_TYPE &x) const{ return x.a.size(); } };
and these functors are used thus:
string_sort(array.begin(), array.end(), bracket(), getsize(), lessthan());
See stringfunctorsample.cpp for a working example of sorting strings with all functors.
The
algorithm is a hybrid algorithm; when the number of elements being sorted
is below a certain number, comparison-based sorting is used. Above it,
radix sorting is used. The radix-based algorithm will thus cut up the
problem into small pieces, and either completely sort the data based
upon its radix if the data is clustered, or finish sorting the cut-down
pieces with comparison-based sorting.
spreadsort
The Spreadsort algorithm dynamically chooses either comparison-based or radix-based sorting when recursing, whichever provides better worst-case performance. This way worst-case performance is guaranteed to be the better of 𝑶(N⋅log2(N)) comparisons and 𝑶(N⋅log2(K/S + S)) operations where
This results in substantially improved performance for large N;
tends to be 50% to 2X faster than std::sort,
while
integer_sort
and _string_sort are roughly 2X faster than std::sort.
float_sort
Performance graphs are provided for
,
integer_sort
,
and
float_sort
in their description.
string_sort
Runtime Performance comparisons and graphs were made on a Core 2 Duo laptop running Windows Vista 64 with MSVC 8.0, and an old G4 laptop running Mac OSX with gcc. Boost bjam/b2 was used to control compilation.
Direct performance comparisons on a newer x86 system running Ubuntu, with the fallback to std::sort at lower input sizes disabled are below.
starts to become faster than std::sort
at about 1000 integers (4000 bytes), and
integer_sort
becomes faster than std::sort
at slightly fewer bytes (as few as 30 strings).
string_sort
times are very similar to
float_sort
times.
integer_sort
Histogramming with a fixed maximum number of splits is used because it reduces the number of cache misses, thus improving performance relative to the approach described in detail in the original SpreadSort publication.
The importance of cache-friendly histogramming is described in Arne Maus, Adaptive Left Reflex, though without the worst-case handling described below.
The time taken per radix iteration is:
𝑶(N) iterations over the data
𝑶(N) integer-type comparisons (even for _float_sort
and
)
string_sort
𝑶(N) swaps
𝑶(2S) bin operations.
To obtain 𝑶(N) worst-case performance per iteration, the restriction S <= log2(N) is applied, and 𝑶(2S) becomes 𝑶(N). For each such iteration, the number of unsorted bits log2(range) (referred to as K) per element is reduced by S. As S decreases depending upon the amount of elements being sorted, it can drop from a maximum of Smax to the minimum of Smin.
Assumption: std::sort is assumed to be 𝑶(N*log2(N)), as introsort exists and is commonly used. (If you have a quibble with this please take it up with the implementor of your std::sort; you're welcome to replace the recursive calls to std::sort to calls to introsort if your std::sort library call is poorly implemented).
Introsort is not included with this algorithm for simplicity and because the implementor of the std::sort call is assumed to know what they're doing.
To maintain a minimum value for S (Smin), comparison-based
sorting has to be used to sort when n <= log2(meanbinsize),
where log2(meanbinsize) (lbs) is a small constant,
usually between 0 and 4, used to minimize bin overhead per element. There
is a small corner-case where if K < Smin and
n >= 2^K, then the data can be sorted in a single
radix-based iteration with an S = K (this bucketsorting
special case is by default only applied to
).
So for the final recursion, worst-case performance is:
float_sort
1 radix-based iteration if K <= Smin,
or Smin + lbs comparison-based iterations if K > Smin but n <= 2(Smin + lbs).
So for the final iteration, worst-case runtime is 𝑶(N*(Smin + lbs)) but if K > Smin and N > 2(Smin + lbs) then more than 1 radix recursion will be required.
For the second to last iteration, K <= Smin * 2 + 1 can be handled, (if the data is divided into 2(Smin + 1) pieces) or if N < 2(Smin + lbs + 1), then it is faster to fallback to std::sort.
In the case of a radix-based sort plus recursion, it will take 𝑶(N*(Smin + lbs)) + 𝑶(N) = 𝑶(N*(Smin + lbs + 1)) worst-case time, as K_remaining = K_start - (Smin + 1), and K_start <= Smin * 2 + 1.
Alternatively, comparison-based sorting is used if N < 2(Smin + lbs + 1), which will take 𝑶(N*(Smin + lbs + 1)) time.
So either way 𝑶(N*(Smin + lbs + 1)) is the worst-case time for the second to last iteration, which occurs if K <= Smin * 2 + 1 or N < 2(Smin + lbs + 1).
This continues as long as Smin <= S <= Smax, so that for K_m <= K_(m-1) + Smin + m where m is the maximum number of iterations after this one has finished, or where N < 2(Smin + lbs + m), then the worst-case runtime is 𝑶(N*(Smin + lbs + m)).
K_m at m <= (Smax - Smin) works out to:
K_1 <= (Smin) + Smin + 1 <= 2Smin + 1
K_2 <= (2Smin + 1) + Smin + 2
as the sum from 0 to m is m(m + 1)/2
K_m <= (m + 1)Smin + m(m + 1)/2 <= (Smin + m/2)(m + 1)
substituting in Smax - Smin for m
K_(Smax - Smin) <= (Smin + (Smax - Smin)/2)*(Smax - Smin + 1)
K_(Smax - Smin) <= (Smin + Smax) * (Smax - Smin + 1)/2
Since this involves Smax - Smin + 1 iterations, this works out to dividing K into an average (Smin + Smax)/2 pieces per iteration.
To finish the problem from this point takes 𝑶(N * (Smax - Smin)) for m iterations, plus the worst-case of 𝑶(N*(Smin + lbs)) for the last iteration, for a total of 𝑶(N *(Smax + lbs)) time.
When m > Smax - Smin, the problem is divided into Smax pieces per iteration, or std::sort is called if N < 2^(m + Smin + lbs). For this range:
K_m <= K_(m - 1) + Smax, providing runtime of
𝑶(N *((K - K_(Smax - Smin))/Smax + Smax + lbs)) if recursive,
or 𝑶(N * log(2^(m + Smin + lbs))) if comparison-based,
which simplifies to 𝑶(N * (m + Smin + lbs)), which substitutes to 𝑶(N * ((m - (Smax - Smin)) + Smax + lbs)), which given that m - (Smax - Smin) <= (K - K_(Smax - Smin))/Smax (otherwise a lesser number of radix-based iterations would be used)
also comes out to 𝑶(N *((K - K_(Smax - Smin))/Smax + Smax + lbs)).
Asymptotically, for large N and large K, this simplifies to:
𝑶(N * (K/Smax + Smax + lbs)),
simplifying out the constants related to the Smax - Smin range, providing an additional 𝑶(N * (Smax + lbs)) runtime on top of the 𝑶(N * (K/S)) performance of LSD radix sort, but without the 𝑶(N) memory overhead. For simplicity, because lbs is a small constant (0 can be used, and performs reasonably), it is ignored when summarizing the performance in further discussions. By checking whether comparison-based sorting is better, Spreadsort is also 𝑶(N*log(N)), whichever is better, and unlike LSD radix sort, can perform much better than the worst-case if the data is either evenly distributed or highly clustered.
This analysis was for
and
integer_sort
.
float_sort
differs in that Smin = Smax = sizeof(Char_type) * 8,
lbs is 0, and that std::sort's
comparison is not a constant-time operation, so strictly speaking
string_sort
runtime is
string_sort
𝑶(N * (K/Smax + (Smax comparisons))).
Worst-case, this ends up being 𝑶(N * K) (where K is the mean string length in bytes), as described for American flag sort, which is better than the
𝑶(N * K * log(N))
worst-case for comparison-based sorting.
and
integer_sort
have tuning constants that control how the radix-sorting portion of those
algorithms work. The ideal constant values for
float_sort
and
integer_sort
vary depending on the platform, compiler, and data being sorted. By far
the most important constant is max_splits, which
defines how many pieces the radix-sorting portion splits the data into
per iteration.
float_sort
The ideal value of max_splits depends upon the size of the L1 processor cache, and is between 10 and 13 on many systems. A default value of 11 is used. For mostly-sorted data, a much larger value is better, as swaps (and thus cache misses) are rare, but this hurts runtime severely for unsorted data, so is not recommended.
On some x86 systems, when the total number of elements being sorted is small ( less than 1 million or so), the ideal max_splits can be substantially larger, such as 17. This is suspected to be because all the data fits into the L2 cache, and misses from L1 cache to L2 cache do not impact performance as severely as misses to main memory. Modifying tuning constants other than max_splits is not recommended, as the performance improvement for changing other constants is usually minor.
If you can afford to let it run for a day, and have at least 1GB of free
memory, the perl command:
./tune.pl -large -tune (UNIX)
or
perl tune.pl -large -tune -windows (Windows) can be used
to automatically tune these constants. This should be run from the
libs/sort
directory inside the boost home directory. This will work to identify
the
ideal constants.hpp settings for your system, testing
on various distributions in a 20 million element (80MB) file, and additionally
verifies that all sorting routines sort correctly across various data
distributions. Alternatively, you can test with the file size you're
most concerned with
./tune.pl number -tune (UNIX) or
perl
tune.pl number -tune -windows (Windows). Substitute the number
of elements you want to test with for
number. Otherwise,
just use the options it comes with, they're decent. With default settings
./tune.pl -tune (UNIX)
perl tune.pl -tune -windows
(Windows), the script will take hours to run (less than a day), but may
not pick the correct max_splits if it is over 10.
Alternatively, you can add the
-small option to make it
take just a few minutes, tuning for smaller vector sizes (one hundred
thousand elements), but the resulting constants may not be good for large
files (see above note about max_splits on Windows).
The tuning script can also be used just to verify that sorting works
correctly on your system, and see how much of a speedup it gets, by omiting
the "-tune" option. This runs at the end of tuning runs. Default
args will take about an hour to run and give accurate results on decent-sized
test vectors.
./tune.pl -small (UNIX)
perl tune.pl
-small -windows (Windows) is a faster option, that tests on smaller
vectors and isn't as accurate.
If any differences are encountered during tuning, please call
tune.pl
with
-debug > log_file_name. If the resulting log file
contains compilation or permissions issues, it is likely an issue with
your setup. If some other type of error is encountered (or result differences),
please send them to the library author at spreadsort@gmail.com. Including
the zipped
input.txt that was being used is also helpful. | https://www.boost.org/doc/libs/develop/libs/sort/doc/html/sort/single_thread/spreadsort.html | CC-MAIN-2021-43 | refinedweb | 2,427 | 50.16 |
Ext.Direct and HTTP sessions
Hi,
I hope I'm going mad, but I suspect not.
I'm calling Ext.Direct methods, and getting different session ids each time!
Using IIS7, and a tweaked version of Evan's Ext.Direct router.
Each action class implements IReadOnlySessionState.
Server-side (simplifed):
Code:
[DirectAction()] public class MyHandler : DirectHandler, IReadOnlySessionState { [DirectMethod("getSessionId")] public string GetSessionId() { return HttpContext.Current.Session.SessionID; } }
Code:
var repeatCount = 20; var func = function(counter) { MyHandler.getSessionId( function(response, e) { if (e.status) { me.update(Ext.String.format('{0}<br/>SessionId{1}: {2}', me.html, counter, Ext.isEmpty(response) ? '<empty>' : response)); } else { me.update(Ext.String.format('{0}<br/>Exception{1}: {2}', me.html, counter, e.message)); } if (counter < repeatCount) { func(++counter); } } ); } func(1);
Code:
SessionId1: rbfqlb55iyowcp45oly11w55 SessionId2: cy0gwvu0sivxax555jochdfm SessionId3: mmxizyia2crhla45aowtf4ft SessionId4: y5lqv345m2fanp554v0batrf SessionId5: cshgms5503i4hs55yzi0t155 SessionId6: zwmaxfjvu2qkueuxpds1bem4 SessionId7: vxwh2dynkdwpkeyt3e5j4y45 SessionId8: omsbawv042eju155ebt2gr2i SessionId9: dsa31dz5eys2gd45qrmtmn55 SessionId10: 3ac1lv55te0odk55ii0poz45 SessionId11: sekf1w452rzbq52zji4hrn45 SessionId12: trabhsugqt3rsf55w55arn2s SessionId13: z4tk1y3wiqonftejpa0nddri SessionId14: cq1vqaznudfclq553q10syq1 SessionId15: ksylrcawjx23j3555es3qrnq SessionId16: oiju1h452uywvy45cecju245 SessionId17: lkfljwnk0rvgt52f1ikemorm SessionId18: a453sinrbasaws55yiwbc545 SessionId19: fghs0p55zymu1vywh4vg5a55 SessionId20: j0sa05efxicc40bo3qwoyl2y
Please help, session data is crucial to our authentication mechanism!
Cheers,
Westy
Hmm, from extensive googling it seems that all calls to a IHttpHandler get a new session.
There must be a way around this!
Can't help think that it's something to do with the ASP.NET_SessionId cookie.
Does Ext store the session id returned by the server, and pass it back in with subsequent requests?
Does the same thing happen with standard Ajax requests (that'll be my next test I think)?
It certainly didn't when I used Ext 2 and standard HTTP post/get asmx web services or JayRock services.
Seems that response cookie is set to pass back the aspnet session but something is then striping it.
I'll keep looking...
Sorted it, eventually.
Went through process of creating my own cookies, verifying they work, using them to restore my session etc.
Lots of head scratching and experimentation later and I've come to the conclusion that the IReadOnlySessionState implementation is bugged, you need to use IRequiresSessionState if you require a persistent session id.
Odd, because sure that was one of the the first things I tried...
Oh, also spotted that if you have an IIS application within another one you cannot change the stateServer settings in the child.
Hope this helps someone else, since has been doing my head in
Cheers,
Westy
LOL, just broke it again whilst preparing to check-in!
It seems that if you have no Global.asax for your web service project (even an empty one) then the session cookie is not sent back.
Grrr!
PS: Heh, damn caching. You also have to write something to session data in Session_Start, e.g. HttpContext.Current.Session[Guid.NewGuid().ToString()] = 0;
Haha, yeah, this is kind of like my own private blog at times
The point is that I use the session id to lookup a authentication token in a database. I need the session id to persist between calls to avoid the user having to authenticate for every call made.
The way ASP.Net handles sessions is to send a cookie in the initial response, which is echoed back to the server on subsequent calls so the session can be restored. Each hit extends the life of the session.
I've got it working fine now, and as I say I only need the session id itself. I don't need to store anything in session data, and always think that the need to do so indicates a flawed design somewhat.
Thanks for chipping in
Westy
Westy, i think this is dangerous, i never would rely on this. It gives a possible attacker the possibility to use these id to hack in without auth.
i have the same problem in TYPO3 where each request is expensive because of the auth process (each request does a complete init process of the BE). We additionally used ExtDirect to use a securityToken with CSRF to ensure noone can catch the session. The token itself gets stored in the user session.
For ExtJS we generate one token per instance and render it as variable. This token has to be added to each request and this validates.
Again i never would use it for auth
I don't intend on going into the complete detail of our authentication mechanism suffice to say it's very much like Kerberos, with multiple short-lived tokens that are exchanged over HTTPS; one to prove who the user is, and another to get access to a service given a valid user token.
It is very unlikely anyone could steal a users session given that they'd need the session id and IP.
Thanks again for your input.
PS: I also believe ASP.Net has protection around its session, meaning they are bound to the caller.
Similar Threads
Adding HTTP Headers to Direct RequestsBy dancablam in forum Ext.DirectReplies: 2Last Post: 10 Jun 2013, 5:24 AM
Ext.Direct call sending an HTTP OPTIONS cmdBy ykoehler in forum Ext.DirectReplies: 5Last Post: 5 Jun 2013, 10:18 PM
Sessions problemBy Estefan in forum Ext 2.x: Help & DiscussionReplies: 0Last Post: 23 Mar 2010, 8:05 AM
sessions: from php to extjs & phpBy sicher in forum Ext 2.x: Help & DiscussionReplies: 1Last Post: 9 Apr 2009, 7:16 AM
Handling SessionsBy acontreras in forum Ext 2.x: Help & DiscussionReplies: 0Last Post: 9 Jan 2008, 11:25 AM | https://www.sencha.com/forum/showthread.php?128763-Ext.Direct-and-HTTP-sessions&p=586674&viewfull=1 | CC-MAIN-2015-35 | refinedweb | 900 | 56.96 |
error after running v0.15
Hi I need help with the newest version when I command quasar dev and i encountered this error.
X:\some\directories\client\src\pages\404.vue 1:1 error Definition for rule 'import/named' was not found import/named 1:1 error Definition for rule 'import/namespace' was not found import/namespace 1:1 error Definition for rule 'import/default' was not found import/default 1:1 error Definition for rule 'import/export' was not found import/export
can anyone help me its my first time using this new version thank you!
- benoitranque last edited by benoitranque
Make sure you have node version 8 +
Also what options did you choose during project init for linting? Should maybe try the defaults
hi, @benoitranque thanks for the response. here is what I choose during the project init:
? Pick an ESLint preset none ? Cordova id (disregard if not building mobile apps) org.cordova.shopsellah.app ? Use Vuex? (recommended for complex apps/websites) Yes ? Use Axios for Ajax calls? Yes ? Use Vue-i18n? (recommended if you support multiple languages) Yes ? Support IE11? Yes ? Should we run `npm install` for you after the project has been created? (recommended) NPM
my node version is 8.9.x
@jeimz173 These are errors raised by ESlint. While you chose not to take any ESLint preset, the ESlint plugin
eslint-plugin-importhas not been installed.
So the solution is …
Well, at the end the solution was to install
eslint-plugin-import, but also all the dependencies eslint-standard has and the standard plugin itself:
npm install --save-dev eslint-config-standard eslint-plugin-standard eslint-plugin-promise eslint-plugin-import eslint-plugin-node
Then on the eslint config you have to add standard to the list of extends
extends: ['plugin:vue/essential','standard'],
To be honest, it is a bit confusing that answering no to an optional question totally breaks your startup experience. I spent almost one hour on this.
Regards
Hi all, I ended up reinstalling new project :)… anyway thank you Ill take this to my note for reference.
To be honest, it is a bit confusing that answering no to an optional question totally breaks your startup experience. I spent almost one hour on this.
@danielo515 If you answered no to “Use ESLint to lint your code?” and the build had issues, it’s legit to complaign and It probably requires a new ticket on
quasar-clirepository.
But It seems @jeimz173 answered
noneonly to
Pick an ESLint preset. Then, It looks legit for application build to break while application uses es6+ rules and ESLint needs definition for them.
Hello @Akaryatrh, I have picked eslint as an option for linting, and I picked none as preset, which I think is exactly the same as @jeimz173 did. Why I did such thing ? Because I didn’t liked any of the provided presets and I wanted to install one myself. In that situation is not legit to break the application. I don’t know how quasar-cli works internally, but if the user answers that he wants to use eslint, then all required dependencies should be installed. Allowing the user to shoot himself on the foot is never a good option.
I think my use case is legit, and should be supported too.
Regards | https://forum.quasar-framework.org/topic/1843/error-after-running-v0-15 | CC-MAIN-2018-22 | refinedweb | 549 | 55.13 |
45 -34 8 7 48 -30
45 -34 8 7 48 <- max 55
45 -34 8 7 <- here I find the 63
45 -34 8 <- max 45
45 -34 <-max 74
45 <- max 74
45 11 19 26 74 44
(apply
max
(flatten
(for
[i (range (count v))]
[(apply + (take i v)) (apply + (drop i v))]))))
(apply
max
(flatten
(for
[i (range (count v))]
[(apply + (take i v)) (apply + (drop i v))]))))
int rstart = 0;
int rend = 0;
int maxsum = 0;
int csum = 0;
int start = 0;
for(int current = 0; current < size; ++current){
csum += data[current];
if(csum < 1){
start = current+1;
csum = 0;
}else if(csum > maxsum){
rstart = start;
rend = current;
maxsum = csum;
}
}
# Isolate regions of positive and negative numbers
def partition (v)
sets = []
left = 0
i = 1
v.each_cons(2) do |l, r|
# are the signs different?
if (l >= 0) != (r >= 0)
sets.push [(left…i), v[left…i].reduce {|a, n| a + n}]
left = i
end
i += 1
end
sets.push [(left…i), v[left…i].reduce {|a, n| a + n}]
end
# Remove negative regions from the front and back - they'll never be included.
def trim! (sets)
sets.shift if sets.first[1] < 0
sets.pop if sets.last[1] < 0
end
# Merge adjacent regions of positive over a lesser region of negative.
def merge! (sets)
original_length = sets.length
(sets.length-3).step(0, -2) do |i|
left, neg, right = sets[i], sets[i+1], sets[i+2]
if [left[1], right[1]].min + neg[1] > 0
sets[i] = [(left[0].first…right[0].last), left[1] + neg[1] + right[1]]
sets.delete_at(i+2)
sets.delete_at(i+1)
end
end
sets.length != original_length
end
def calculate (v)
# Partition the vector into regions by sign.
sets = partition v
# If there's only one entry, return it, whether it's positive or negative.
return sets[0] if sets.length == 1
# Remove negatives from the front and back.
trim! sets
# If there's only one now, it's the only positive island, so return it.
return sets[0] if sets.length == 1
# Iteratively merge adjacent regions to create larger regions
nil while merge! sets
# We have our maxima. Return the largest of our remaining regions.
sets.reduce {|a, x| x[1] > a[1] ? x : a}
end
Given a vector of floating-point numbers, find the largest sum among the contiguous subvectors.
So for example, in the vector x
[45 -34 8 7 48 -30]
The contiguous subvector with the largest sum is x[2…4] totaling 63.
Now this problem would not be so interesting but for the story behind it. The author tells a story that the original algorithm the programmer came up with was going to take 15 days to process a vector of 100,000 numbers (note that this book is relatively old).
However, redesigning the algorithm brought the time down to 5 milliseconds.
So, your challenge, if you choose to accept it, is to try your hand at solving this problem.
At the end of this post you'll find a vector of 5000 random numbers to use. We can make it bigger if necessary.
Time your tests and post the results in the thread. Any language is acceptable. I encourage multiple attempts to make it faster. If you wish, refrain from reading the thread until you post your solution, but if you want to see other people's attempts first feel free.
Good luck! | http://mudbytes.net/forum/topic/3692/ | CC-MAIN-2020-29 | refinedweb | 572 | 75.4 |
Programs and Units (Delphi)
Go Up to Programs and Units Index
This topic covers the overall structure of a Delphi application: the program header, unit declaration syntax, and the uses clause.
- Divide large programs into modules that can be edited separately.
- Create libraries that you can share among programs.
- Distribute libraries to other developers without making the source code available.
Program Structure and Syntax
A complete, executable Delphi application consists of multiple unit modules, all tied together by a single source code module called a project file. In traditional Pascal programming, all source code, including the main program, is stored in .pas files. Embarcadero tools use the file extension .dpr to designate the main program source module, while most other source code resides in unit files having the traditional .pas extension. To build a project, the compiler needs the project source file, and either a source file or a compiled unit file for each unit.
Note: Strictly speaking, you need not explicitly use any units in a project, but all programs automatically use the System unit and the SysInit unit.
The source code file for an executable Delphi application contains:
- a program heading,
- a uses clause (optional), and
- a block of declarations and executable statements.
The compiler, and hence the IDE, expect to find these three elements in a single project (.dpr) file.
The Program Heading
The program heading specifies a name for the executable program. It consists of the reserved word program, followed by a valid identifier, followed by a semicolon. For applications developed using Embarcadero tools, the identifier must match the project source file name.
The following example shows the project source file for a program called Editor. Since the program is called Editor, this project file is called Editor.dpr.
program Editor; uses Forms, REAbout, // An "About" box REMain; // Main form {$R *.res} begin Application.Title := 'Text Editor'; Application.CreateForm(TMainForm, MainForm); Application.Run; end.
The first line contains the program heading. The uses clause in this example specifies a dependency on three additional units: Forms, REAbout, and REMain. The $R compiler directive links the project's resource file into the program. Finally, the block of statements between the begin and end keywords are executed when the program runs. The project file, like all Delphi source files, ends with a period (not a semicolon).
Delphi project files are usually short, since most of a program's logic resides in its unit files. A Delphi project file typically contains only enough code to launch the application's main window, and start the event processing loop. Project files are generated and maintained automatically by the IDE, and it is seldom necessary to edit them manually.
In standard Pascal, a program heading can include parameters after the program name:
program Calc(input, output);
Embarcadero's Delphi ignores these parameters.
In RAD Studio, a the program heading introduces its own namespace, which is called the project default namespace.
The Program Uses Clause
The uses clause lists those units that are incorporated into the program. These units may in turn have uses clauses of their own. For more information on the uses clause within a unit source file, see Unit References and the Uses Clause, below.
The uses clause consists of the keyword uses, followed by a comma delimited list of units the project file directly depends on.
The Block
The block contains a simple or structured statement that is executed when the program runs. In most program files, the block consists of a compound statement bracketed between the reserved words begin and end, whose component statements are simply method calls to the project's Application object. Most projects have a global Application variable that holds an instance of Vcl.Forms.TApplication, Web.WebBroker.TWebApplication, or Vcl.SvcMgr.TServiceApplication. The block can also contain declarations of constants, types, variables, procedures, and functions; these declarations must precede the statement part of the block. Note that the end that represents the end of the program source must be followed by a period (.):
begin . . . end.
Unit Structure and Syntax
A unit consists of types (including classes), constants, variables, and routines (functions and procedures). Each unit is defined in its own source (.pas) file.
A unit file begins with a unit heading, which is followed by the interface keyword. Following the interface keyword, the uses clause specifies a list of unit dependencies. Next comes the implementation section, followed by the optional initialization, and finalization sections. A skeleton unit source file looks like this:
unit Unit1; interface uses // List of unit dependencies goes here... // Interface section goes here implementation uses // List of unit dependencies goes here... // Implementation of class methods, procedures, and functions goes here... initialization // Unit initialization code goes here... finalization // Unit finalization code goes here... end.
The unit must conclude with the reserved word end followed by a period.
The Unit Heading
The unit heading specifies the unit's name. It consists of the reserved word unit, followed by a valid identifier, followed by a semicolon. For applications developed using Embarcadero tools, the identifier must match the unit file name. Thus, the unit heading:
unit MainForm;
would occur in a source file called MainForm.pas, and the file containing the compiled unit would be MainForm.dcu. Unit names must be unique within a project. Even if their unit files are in different directories, two units with the same name cannot be used in a single program.
The Interface Section
The interface section of a unit begins with the reserved word interface and continues until the beginning of the implementation section. The interface section declares constants, types, variables, procedures, and functions that are available to clients. That is, to other units or programs that wish to use elements from this unit. These entities are called public because code in other units can access them as if they were declared in the unit itself.
The interface declaration of a procedure or function includes only the routine's signature. That is, the routine's name, parameters, and return type (for functions). The block containing executable code for the procedure or function follows in the implementation section. Thus procedure and function declarations in the interface section work like forward declarations.
The interface declaration for a class must include declarations for all class members: fields, properties, procedures, and functions.
The interface section can include its own uses clause, which must appear immediately after the keyword interface.
The Implementation Section
The implementation section of a unit begins with the reserved word implementation and continues until the beginning of the initialization section or, if there is no initialization section, until the end of the unit. The implementation section defines procedures and functions that are declared in the interface section. Within the implementation section, these procedures and functions may be defined and called in any order. You can omit parameter lists from public procedure and function headings when you define them in the implementation section; but if you include a parameter list, it must match the declaration in the interface section exactly.
In addition to definitions of public procedures and functions, the implementation section can declare constants, types (including classes), variables, procedures, and functions that are private to the unit. That is, unlike the interface section, entities declared in the implementation section are inaccessible to other units.
The implementation section can include its own uses clause, which must appear immediately after the keyword implementation. The identifiers declared within units specified in the implementation section are only available for use within the implementation section itself. You cannot refer to such identifiers in the interface section.
The Initialization Section
The initialization section is optional. It begins with the reserved word initialization and continues until the beginning of the finalization section or, if there is no finalization section, until the end of the unit. The initialization section contains statements that are executed, in the order in which they appear, on program start-up. So, for example, if you have defined data structures that need to be initialized, you can do this in the initialization section.
For units in the interface uses list, the initialization sections of the units used by a client are executed in the order in which the units appear in the client's uses clause.
The older "begin ... end." syntax still functions. Basically, the reserved word "begin" can be used in place of initialization followed by zero or more execution statements. Code using the older "begin ... end." syntax cannot specify a finalization section. In this case, finalization is accomplished by providing a procedure to the ExitProc variable. This method is not recommended for code going forward, but you might see it used in older source code.
The Finalization Section
The finalization section is optional and can appear only in units that have an initialization section. The finalization section begins with the reserved word finalization and continues until the end of the unit. It contains statements that are executed when the main program terminates (unless the Halt procedure is used to terminate the program). Use the finalization section to free resources that are allocated in the initialization section.
Finalization sections are executed in the opposite order from initialization sections. For example, if your application initializes units A, B, and C, in that order, it will finalize them in the order C, B, and A.
Once a unit's initialization code starts to execute, the corresponding finalization section is guaranteed to execute when the application shuts down. The finalization section must therefore be able to handle incompletely initialized data, since, if a runtime error occurs, the initialization code might not execute completely.
Unit References and the Uses Clause
A uses clause lists units used by the program, library, or unit in which the clause appears. A uses clause can occur in
- the project file for a program, or library
- the interface section of a unit
- the implementation section of a unit
Most project files contain a uses clause, as do the interface sections of most units. The implementation section of a unit can contain its own uses clause as well.
The System unit and the SysInit unit are used automatically by every application and cannot be listed explicitly in the uses clause. (System implements routines for file I/O, string handling, floating point operations, dynamic memory allocation, and so forth.) Other standard library units, such as SysUtils, must be explicitly included in the uses clause. In most cases, all necessary units are placed in the uses clause by the IDE, as you add and remove units from your project.
Case Sensitivity: In unit declarations and uses clauses, unit names must match the file names in case. In other contexts (such as qualified identifiers), unit names are case insensitive. To avoid problems with unit references, refer to the unit source file explicitly:
uses MyUnit in "myunit.pas";
If such an explicit reference appears in the project file, other source files can refer to the unit with a simple uses clause that does not need to match case:
uses Myunit;
The Syntax of a Uses Clause
A uses clause consists of the reserved word uses, followed by one or more comma delimited unit names, followed by a semicolon. Examples:
uses Forms, Main; uses Forms, Main; uses Windows, Messages, SysUtils, Strings, Classes, Unit2, MyUnit;
In the uses clause of a program or library, any unit name may be followed by the reserved word in and the name of a source file, with or without a directory path, in single quotation marks; directory paths can be absolute or relative. Examples:
uses Windows, Messages, SysUtils, Strings in 'C:\Classes\Strings.pas', Classes;
Use the keyword in after a unit name when you need to specify the unit's source file. Since the IDE expects unit names to match the names of the source files in which they reside, there is usually no reason to do this. Using in is necessary only when the location of the source file is unclear, for example when:
- You have used a source file that is in a different directory from the project file, and that directory is not in the compiler's search path.
- Different directories in the compiler's search path have identically named units.
- You are compiling a console application from the command line, and you have named a unit with an identifier that doesn't match the name of its source file.
The compiler also relies on the in ... construction to determine which units are part of a project. Only units that appear in a project (.dpr) file's uses clause followed by in and a file name are considered to be part of the project; other units in the uses clause are used by the project without belonging to it. This distinction has no effect on compilation, but it affects IDE tools like the Project Manager.
In the uses clause of a unit, you cannot use in to tell the compiler where to find a source file. Every unit must be in the compiler's search path. Moreover, unit names must match the names of their source files.
Multiple and Indirect Unit References
The order in which units appear in the uses clause determines the order of their initialization and affects the way identifiers are located by the compiler. If two units declare a variable, constant, type, procedure, or function with the same name, the compiler uses the one from the unit listed last in the uses clause. (To access the identifier from the other unit, you would have to add a qualifier: UnitName.Identifier.)
A uses clause need include only units used directly by the program or unit in which the clause appears. That is, if unit A references constants, types, variables, procedures, or functions that are declared in unit B, then A must use B explicitly. If B in turn references identifiers from unit C, then A is indirectly dependent on C; in this case, C needn't be included in a uses clause in A, but the compiler must still be able to find both B and C in order to process A.
The following example illustrates indirect dependency:
program Prog; uses Unit2; const a = b; // ... unit Unit2; interface uses Unit1; const b = c; // ... unit Unit1; interface const c = 1; // ...
In this example, Prog depends directly on Unit2, which depends directly on Unit1. Hence Prog is indirectly dependent on Unit1. Because Unit1 does not appear in Prog's uses clause, identifiers declared in Unit1 are not available to Prog.
To compile a client module, the compiler needs to locate all units that the client depends on, directly or indirectly. Unless the source code for these units has changed, however, the compiler needs only their .dcu files, not their source (.pas) files.
When a change is made in the interface section of a unit, other units that depend on the change must be recompiled. But when changes are made only in the implementation or other sections of a unit, dependent units don't have to be recompiled. The compiler tracks these dependencies automatically and recompiles units only when necessary.
Circular Unit References
When units reference each other directly or indirectly, the units are said to be mutually dependent. Mutual dependencies are allowed as long as there are no circular paths connecting the uses clause of one interface section to the uses clause of another. In other words, starting from the interface section of a unit, it must never be possible to return to that unit by following references through interface sections of other units. For a pattern of mutual dependencies to be valid, each circular reference path must lead through the uses clause of at least one implementation section.
In the simplest case of two mutually dependent units, this means that the units cannot list each other in their interface uses clauses. So the following example leads to a compilation error:
unit Unit1; interface uses Unit2; // ... unit Unit2; interface uses Unit1; // ...
However, the two units can legally reference each other if one of the references is moved to the implementation section:
unit Unit1; interface uses Unit2; // ... unit Unit2; interface //... implementation uses Unit1; // ...
To reduce the chance of circular references, it's a good idea to list units in the implementation uses clause whenever possible. Only when identifiers from another unit are used in the interface section is it necessary to list that unit in the interface uses clause. | http://docwiki.embarcadero.com/RADStudio/XE3/en/Programs_and_Units | CC-MAIN-2014-35 | refinedweb | 2,730 | 53.71 |
Hello.
Does anyone know how to export layers as individual files via script?
I have several dozens layers that I need to export as individual obj/fbx files, for easier import/update into the rendering software.
Thanks
Hello.
Does anyone know how to export layers as individual files via script?
I have several dozens layers that I need to export as individual obj/fbx files, for easier import/update into the rendering software.
Thanks
I did this the other day, just for meshes in stl format but if this helps you get started try changing ‘.stl’ for ‘.obj’
You will have to set the default options first manually by saving a single object with the parameters you want, then it should retain these for the others exported
Sorry it’s half in French !
import rhinoscriptsyntax as rs import os def export_mesh_by_layer(maillages): export_path = rs.BrowseForFolder(rs.WorkingFolder(), 'Exporter vers quel dossier?', 'Export stl',) mesh_dict = {} for mesh in maillages: obj_layer = rs.ObjectLayer(mesh) print obj_layer if obj_layer in mesh_dict: mesh_dict[obj_layer].append(mesh) else: mesh_dict[obj_layer] = [mesh, ] for layer, meshes in mesh_dict.iteritems(): layer_export_name = layer.replace(':', '_') if meshes: meshids = '' for mesh in meshes: meshids += "_SelId " + str(mesh) + " " filename = os.path.join(export_path, layer_export_name + ".stl") command_str = "-_Export " + meshids + " _Enter " + chr(34) + filename + chr(34) + " _Enter " + "_Enter" rs.Command(command_str) return None if __name__ == '__main__': export_mesh_by_layer(rs.GetObjects(filter = 32))
Also this discussion may help
Here are a couple of scripts to try out - they work in Windows, but not sure on Mac.
One file per object - named after the file name and sequentially numbered.
Objects/layers need to be visible and unlocked.
BatchExportOBJByObject.py (2.2 KB)
One file per layer - objects/layers need to be visible and unlocked.
BatchExportOBJByLayer.py (2.1 KB)
Let me know how either one of them works… They use standard meshing settings for non-mesh objects, which you can modify in the strings at the top of the script.
–Mitch
Unfortunately none of the scripts above worked on Rhino for Mac…
OK, did you get any error messages?
All comments turn to errors.
Command lines, starting with “import”, like, “import rhinoscriptsyntax as rs” prompt the Import file command.
Hello - to run the script, use RunPythonScript - these are not command line macros.
To use a Python script use
RunPythonScript, or a macro:
_-RunPythonScript "Full path to py file inside double-quotes"
-Pascal
Thanks this workedm with @Helvetosaur BatchExportOBJByLayer.py
Although the path in which the files was exported did not work on the mac.
My Test file is saved at the desktop, but the exported layers output was "/Users/MyUserName"and the file name came in as “Desk-00//2D_lns_01.obj”
SOLVED IT, by replacing:
#e_file_name = “{}-{}.obj”.format(filename[:-4], layer_name)
e_file_name = “{}-{}.obj”.format(filename, layer_name)
OK, that’s interesting, this works as expected on the Windows platform, so something is different on Mac with how filenames/paths are set up (not surprising). I’ll need to drag my Mac out and do some testing, I haven’t had much time these last couple of weeks… I can probably put in a switch that will make it work correctly on both platforms. | https://discourse.mcneel.com/t/export-layers-script/76073 | CC-MAIN-2020-40 | refinedweb | 527 | 55.95 |
When it comes to programming, I have a belt and suspenders philosophy. Anything that can help me avoid errors early is worth looking into.
The type annotation support that's been gradually added to Python is a good example. Here's how it works and how it can be helpful.
Introduction
The first important point is that the new type annotation support has no effect at runtime. Adding type annotations in your code has no risk of causing new runtime errors: Python is not going to do any additional type-checking while running.
Instead, you'll be running separate tools to type-check your programs statically during development. I say "separate tools" because there's no official Python type checking tool, but there are several third-party tools available.
So, if you chose to use the mypy tool, you might run:
$ mypy my_code.py
and it might warn you that a function that was annotated as expecting string arguments was going to be called with an integer.
Of course, for this to work, you have to be able to add information to your code to let the tools know what types are expected. We do this by adding "annotations" to our code.
One approach is to put the annotations in specially-formatted comments. The obvious advantage is that you can do this in any version of Python, since it doesn't require any changes to the Python syntax. The disadvantages are the difficulties in writing these things correctly, and the coincident difficulties in parsing them for the tools.
To help with this, Python 3.0 added support for adding annotations to functions (PEP-3107), though without specifying any semantics for the annotations. Python 3.6 adds support for annotations on variables (PEP-526).
Two additional PEPs, PEP-483 and PEP-484, define how annotations can be used for type-checking.
Since I try to write all new code in Python 3, I won't say any more about putting annotations in comments.
Getting started
Enough background, let's see what all this looks like.
Python 3.6 was just released, so I’ll be using it. I'll start with a new virtual environment, and install the type-checking tool mypy (whose package name is mypy-lang).:
$ virtualenv -p $(which python3.6) try_types $ . try_types/bin/activate $ pip install mypy-lang
Let's see how we might use this when writing some basic string functions. Suppose we're looking for a substring inside a longer string. We might start with:
def search_for(needle, haystack): offset = haystack.find(needle) return offset
If we were to call this with anything that's not text, we'd consider it an error. To help us avoid that, let's annotate the arguments:
def search_for(needle: str, haystack: str): offset = haystack.find(needle) return offset
Does Python care about this?:
$ python search1.py $
Python is happy with it. There's not much yet for mypy to check, but let's try it:
$ mypy search1.py $
In both cases, no output means everything is okay.
(Aside: mypy uses information from the files and directories on its command line plus all packages they import, but it only does type-checking on the files and directories on its command line.)
So far, so good. Now, let's call our function with a bad argument by adding this at the end:
If we tried to run this, it wouldn't work:
$ python search2.py Traceback (most recent call last): File "search2.py", line 4, in <module> search_for(12, "my string") File "search2.py", line 2, in search_for offset = haystack.find(needle) TypeError: must be str, not int
In a more complicated program, we might not have run that line of code until sometime when it would be a real problem, and so wouldn't have known it was going to fail. Instead, let's check the code immediately:
$ mypy search2.py search2.py:4: error: Argument 1 to "search_for" has incompatible type "int"; expected "str"
Mypy spotted the problem for us and explained exactly what was wrong and where.
We can also indicate the return type of our function:
def search_for(needle: str, haystack: str) -> str: offset = haystack.find(needle) return offset
and ask mypy to check it:
$ mypy search3.py search3.py: note: In function "search_for": search3.py:3: error: Incompatible return value type (got "int", expected "str")
Oops, we're actually returning an integer but we said we were going to return a string, and mypy was smart enough to work that out. Let's fix that:
def search_for(needle: str, haystack: str) -> int: offset = haystack.find(needle) return offset
And see if it checks out:
$ mypy search4.py $
Now, maybe later on we forget just how our function works, and try to use the return value as a string:
x = len(search_for('the', 'in the string'))
Mypy will catch this for us:
$ mypy search5.py search5.py:5: error: Argument 1 to "len" has incompatible type "int"; expected "Sized"
We can't call len() on an integer. Mypy wants something of type Sized -- what's that?
More complicated types
The built-in types will only take us so far, so Python 3.5 added the typing module, which both gives us a bunch of new names for types, and tools to build our own types.
In this case, typing.Sized represents anything with a __len__ method, which is the only kind of thing we can call len() on.
Let's write a new function that'll return a list of the offsets of all of the instances of some string in another string. Here it is:
from typing import List def multisearch(needle: str, haystack: str) -> List[int]: # Not necessarily the most efficient implementation offset = haystack.find(needle) if offset == -1: return [] return [offset] + multisearch(needle, haystack[offset+1:])
Look at the return type: List[int]. You can define a new type, a list of a particular type of elements, by saying List and then adding the element type in square brackets.
There are a number of these - e.g. Dict[keytype, valuetype] - but I'll let you read the documentation to find these as you need them.
mypy passed the code above, but suppose we had accidentally had it return None when there were no matches:
def multisearch(needle: str, haystack: str) -> List[int]: # Not necessarily the most efficient implementation offset = haystack.find(needle) if offset == -1: return None return [offset] + multisearch(needle, haystack[offset+1:])
mypy should spot that there's a case where we don't return a list of integers, like this:
$ mypy search6.py $
Uh-oh - why didn't it spot the problem here? It turns out that by default, mypy considers None compatible with everything. To my mind, that's wrong, but luckily there's an option to change that behavior:
$ mypy --strict-optional search6.py search6.py: note: In function "multisearch": search6.py:7: error: Incompatible return value type (got None, expected List[int])
I shouldn't have to remember to add that to the command line every time, though, so let's put it in a configuration file just once. Create mypy.ini in the current directory and put in:
[mypy] strict_optional = True
And now:
$ mypy search6.py search6.py: note: In function "multisearch": search6.py:7: error: Incompatible return value type (got None, expected List[int])
But speaking of None, it's not uncommon to have functions that can either return a value or None. We might change our search_for method to return None if it doesn't find the string, instead of -1:
def search_for(needle: str, haystack: str) -> int: offset = haystack.find(needle) if offset == -1: return None else: return offset
But now we don't always return an int and mypy will rightly complain:
$ mypy search7.py search7.py: note: In function "search_for": search7.py:4: error: Incompatible return value type (got None, expected "int")
When a method can return different types, we can annotate it with a Union type:
from typing import Union def search_for(needle: str, haystack: str) -> Union[int, None]: offset = haystack.find(needle) if offset == -1: return None else: return offset
There's also a shortcut, Optional, for the common case of a value being either some type or None:
from typing import Optional def search_for(needle: str, haystack: str) -> Optional[int]: offset = haystack.find(needle) if offset == -1: return None else: return offset
Wrapping up
I've barely touched the surface, but you get the idea.
One nice thing is that the Python libraries are all annotated for us already. You might have noticed above that mypy knew that calling find on a str returns an int - that's because str.find is already annotated. So you can get some benefit just by calling mypy on your code without annotating anything at all -- mypy might spot some misuses of the libraries for you.
For more reading: | https://www.caktusgroup.com/blog/2017/02/22/python-type-annotations/ | CC-MAIN-2018-39 | refinedweb | 1,486 | 63.29 |
September 21 2011 17:16:09 Kyle Gorman wrote:
> I use Python for all kinds of high-level projects, but also often
> write C for fast low-level stuff. Unlike the rest of Python, the C
> integration is not so well documented, so I was hoping I could use
> SWIG (this is my first attempt) to quickly create a Python module
> wrapping around a C function. The function is:
>
> <swipe.c>
> vector swipe(char wav[], double min, double max, double st, double dt);
> </swipe.c>
>
> , where vector just is:
>
> <vector.h>
> typedef struct { int x; double* v; } vector;
> </vector.h>
>
> What I'd like is to be able to call this function from Python and get
> back some kind of ordered list/container of floating-point numbers.
> But I can't even get to the point of dealing with that.
>
> First, I set up swipe.i and ran SWIG.
>
> <swipe.i>
> %module swipe
> %{
> #define SWIG_FILE_WITH_INIT
> #include "swipe.h"
> %}
> vector swipe(char wav[], double min, double max, double st, double dt);
> </swipe.i>
>
> (I'm not convinced I understand the #define, but I just went at it).
>
> $ swig -python swipe.i
>
> Then I created setup.py and compiled.
Try this file:
<setup.py>
#!/usr/bin/env python
from distutils.core import setup, Extension
setup(name='swipe', version='1.2', author='Kyle Gorman',
description="""SWIPE' pitch estimator""", py_modules=['swipe'],
ext_modules=[Extension('_swipe', sources=['swipe.c', 'swipe_wrap.c'],
libraries=['sndfile'])])
</setup.py>
> $ python setup.py build_ext --inplace
>
> None of that generated any errors. But I can't import the module.
>
> $ python -m swipe
> ...[traceback omitted]...
> ImportError: /home/kgorman/Dropbox/Code/swipe/_swipe.so: undefined
> symbol: sf_close
>
> "sf_close" is a C function from libsndfile, and swipe.c #includes
> <sndfile.h>. The docs say such."
>
> Well, I wasn't really worried about the linker finding sndfile.h: it's
> in /usr/include/. So, I thought, perhaps if I added "#include
> <sndfile.h>" to swipe.i, it would find what I wanted, but I got the
> same error. Can anyone direct me how to get sf_close (and the rest of
> sndfile.h) into the resulting _swipe.so? Presumably there's something
> different I need to do to satisfy the doc's "make sure"s, but I simply
> don't know what.
The compiler probably finds sndfile.h but the linker needs the library.
Johan
> Thanks all,
> K
>
> PS: The same outcome happened when I compiled by hand and with
> "-I/usr/include".
>
> ---------------------------------------------------------------------------
> --- All the data continuously generated in your IT infrastructure contains
> a definitive record of customers, application performance, security
> threats, fraudulent activity and more. Splunk takes this data and makes
> sense of it. Business sense. IT sense. Common sense.
>
> _______________________________________________
> Swig-user mailing list
> Swig-user@...
> | https://sourceforge.net/p/swig/mailman/message/28123699/ | CC-MAIN-2018-05 | refinedweb | 451 | 70.6 |
i am trying to read/write to/from two separate files, my code works as long as i execute it from the Visual C++ platform. when i double click on the exe file in the debug folder it doesn't write to the specified file. i think it has to do with the path names. my question is how to i define the path name completely? as in "H:\C2plus\monte_carlo\results.txt" ?
the portion of my code is as follows:
any help would be great!any help would be great!Code:#include <iostream> #include <cmath> #include <cstdlib> #include <fstream> #include <time.h> using namespace std; const char * filei="simMC1.txt"; const char * fileo="resultsMC.txt"; int N,call_put; double X, TP, T, t1, z, r, So, Pavg, volatility; double randaud(); int main () { //read in file (call_put,N,TP,So,X,r,volatility) ifstream infile(filei,ios::in); ofstream outfile(fileo,ios: : out); if(! infile.is_open()) { cout << "error"; exit(1); } .... outfile << Pavg << endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/24637-changing-directory-io-files.html | CC-MAIN-2014-41 | refinedweb | 165 | 69.28 |
Mac OS X. This will require 530 Mb of free space. Please ensure you have that much room available before you begin. Other posts in the series concentrate on installation with:
Installing Anaconda on Mac OS X
Open up your web browser and head to the following address: The website should determine the correct download for your system.
Click the green download button. Your download should begin immediately. Once the download is complete the installer will open and and you will be taken to the Anaconda setup wizard.
You will need to agree to the license agreement to continue the install
Select how you wish to install Anaconda, we recommend installing just for the current user, and click next. You will then be asked to confirm your download location. We recommend leaving this as the default. The download will take 530Mb of space.
Once completed you should see a screen providing you with a link to download PyCharm, an Python IDE. You can download this at any time if you wish but it is not essential. Clicking next brings you to the final screen where you can click finish to complete the setup.
In order to check Python has been correctly installed we will open up the terminal. If you are unfamiliar with this application go to the search function and type terminal. This should bring up the following options.
Select the Terminal. This will bring up the command line prompt or shell. Type
python into the prompt and press enter. You should see a couple of lines of text telling you which version of Python you are running followed by three chevrons (>>>), this is the Python prompt, it indicates that you can enter Python commands. You are now in a Python console and can begin coding in Python.
Try typing
import pandas as pd into the prompt. After pressing enter you will see that nothing has changed. you will be presented with a new line contianing the Python prompt. If you type
pd.__version__and press enter you can find the version of the Pandas library you have just imported into the Anaconda prompt.
To exit python and return to the Anaconda prompt you can simply type
exit(). You are now back in the command line prompt. versions of Python and pipenv or virtualenv to manage virtual environments. A good tutorial on this can be found here.
In the prompt you will notice that there is a prefix in brackets before the directory information about the user. This appears as
(base) and indicates that you are in the base anaconda environment. Here anaconda prompt into the terminal to activate the environment.
conda activate py3.8
You will notice that the prefix in brackets has changed to display
(py3.8) We can now begin to add our dependencies.
conda install pandas pandas-datareader matplotlib
We will also be using datetime from the standard library (which is supplied with Python). Type
python into the terminal to open a python console. You should see the familiar three chevrons. We can now import our libraries into our namespace and begin. | https://quantstart.com/articles/installing-an-algorithmic-trading-research-environment-with-python-on-mac/ | CC-MAIN-2022-33 | refinedweb | 516 | 75.71 |
<:
In the target folder a file with name like "oleg@example.com_ZendMail_1276322086_1871539247.tmp" appears and the content is
12 Commentscomments.show.hide
Jun 13, 2010
Jeroen Keppens
<p>Looks very, very usefull for development!</p>
Jun 13, 2010
Dmitry Belyakov
<p>It would be nice to have an option to set output message format to XML in order to be able to do assertions against sent messages.</p>
<p>Good luck.</p>
Jun 14, 2010
Marc Bennewitz (private)
<p>Sounds interesting!</p>
<p>What I'm asking me is: Why not implement a different method to store/read mails (not send) and handle different formats like the following:</p>
<ac:macro ac:<ac:plain-text-body><![CDATA[
namespace Zend\Mail;
$mail = new Mail();
// ...
// store a mail
$fileFormat = FileFormat::factory('eml'); // mbox, xml, emlx (Apple Mail), msg (Outlook) ...
$fileFormat->save('/tmp/my_mail.eml', $mail, $flags, $context);
// or
$mail->save('/tmp/my_mail.eml', $fileFormat, $flags, $context);
// $fileFormat could be optional by defining a default Mail::setDefaultFileFormat($fileFormat)
// read a mail
$mail = $fileFormat->read('/tmp/my_mail.eml', $flags, $context);
]]></ac:plain-text-body></ac:macro>
<p>Now you can simply define your own format.</p>
Jun 14, 2010
Dmitry Belyakov
<p>Hi Mark,</p>
<p>The general purpose of this transport is to substitute regular mail transport when you do testing and don't need to actually send the messages. Instead you get them dumped in a directory and can do assertions agains messges and their content (like sender, recepient, body etc).</p>
Jun 14, 2010
Alexander Steshenko
<p>Hi Mark, thanks for the comment.</p>
<p>If I understand you correctly, what you're asking is essentially what Zend_Mail storages are for. Check them out in \Zend\Mail\Storage* and in the manual.</p>
<p>The question whether this transport needs to be able to work with different storages as well or not is the biggest question now. I'm waiting for some feedback from Dolf (Freeaqingme). He's going to discuss this question with Mathew.</p>
<p>Check my note about that in 5. Theory of Operation >> Adapters</p>
Jun 15, 2010
Marc Bennewitz (private)
<p><strong>\Zend\Mail\Storage</strong> is a storage for handling more than one mails.<br />
What I mean is only a reader/writer for one mail.<br />
-> For example the MBox storage uses the mbox format to read/write one mail within the storage</p>
<p><strong>different formats</strong><br />
It would be useful not only for debugging<br />
-> e.g: upload/download a mail to/from a web mailer</p>
<p><strong>Adapters</strong><br />
The same - adaptable formats like \Zend\Mail\FileFormat\<Format> - nothing todo with storages</p>
<p><strong>Transport</strong><br />
In my mind it's confusing to define a transport to format a mail.<br />
For debugging we can define a file transport as a wrapper for formats.</p>
Jul 28, 2010
Alexander Steshenko
<p>Thanks everyone for your comments.</p>
<p>Mark, I see your point, it does make sense. However I am not sure what direction will be chosen by CR team and/or zf core team (they did tell me it needed to be discussed seriously) and, what's the most important - what is more desired for the community.</p>
<p>I did get different suggestions from people but nothing from the people "who decide" (like the Zend_Mail's maintainer). I fully understand it's a minor feature and everyone's busy etc, but don't want it to be another 'orphaned',dead proposal, so I have decided to move it further as much as I can to get more attention finally. No urgency here at all ofc, but still...</p>
<p>So, as nobody but me is willing to think about this one so far I think that "minimally" what is done is enough (and worth it) to have it as a part of the framework.</p>
Jul 28, 2010
Dolf Schimmel (Freeaqingme)
<p>As far as I can remember I did give some response to this proposal? If there's anything specific you'd like to hear from me being Zend_Mail's maintainer, just let me know.</p>
Jul 29, 2010
Alexander Steshenko
<p>You were going to discuss the issue with adapters with Mathew i.e. should we or should we not add some kind of adapters to the transport or utilize Writable storages interface in the transport/adapters.</p>
Aug 03, 2010
Ryan Mauger
<ac:macro ac:<ac:rich-text-body><p><strong>Community Review Team Recommendation</strong></p>
<p>The CR Team advises accepting this proposal as-is.</p></ac:rich-text-body></ac:macro>
Feb 06, 2011
Michael Kliewe
<p>I would love to see this component. I'm writing a webmailer which has to save a copy of the sent mail to the IMAPs "Sent" folder, so I need the source code of the mail as string to append it to the IMAP folder.<br />
Writing it to a file and reading it again would not be perfect, the ability to optionally get it as a string would be cool (at the moment I put the<br />
into a class attribute and then use a getter.</p>
Feb 22, 2011
Benoît Durand
<p>Zend_Mail_Transport_File is released into the stable branch. Maybe we can archive this proposal <ac:emoticon ac:</p> | http://framework.zend.com/wiki/pages/viewpage.action?pageId=22642954&focusedCommentId=31982214 | CC-MAIN-2014-15 | refinedweb | 891 | 62.48 |
Introduction: How to Make Your First Virtual Reality Game for Beginners
This Tutorial will walk you through making your first virtual reality app:
Step 1: Let's Set Up the Scene.
NOTE*****When installing Unity make sure to install the Android or IOS modules depending on what you need.
Once you have everything downloaded from the intro, create a new Unity project, and call it whatever you want.
Drag in the GoogleVR Unity package, the game assets folder, and the gun 3d model into the assets folder.
Go to file, build settings, and switch the build platform to either Android or IOS.
Delete the directional light from the scene.
Right click in the hierarchy (off to the left, where the light was) and create a 3d object, plane.
Drag the silver texture from the game assets folder onto the plane and off to the right where it says normal map choose the floor normal map. This adds not only a texture to our plane, but a normal map so Unity's lighting system knows how to reflect light off of our plane, giving the illusion of depth.
Right click in the hierarchy again and add a light, point light, and position it over top of our plane.
Now copy and paste this plane 4 times and position them as shown in the second image above.
Step 2: Complete the Scene.
Create another plane and add the brick texture to it as well as the brick normal map, the same as we did before. You will have to use the transform controls on the top right to rotate the plane until the brick pattern is facing the right direction.
Copy and paste this to make all the walls.
Select all of the floor planes and move them all up at the same time, rotate all of them so they are facing down to make the ceiling.
Right click in the hierarchy again and create a 3D object, cube. Add the wooden texture to it and the wooden normal map.
Copy and paste this a scatter a few around the scene wherever you would like.
Step 3: Bake Navigation Mesh.
Now the our scene is setup we can add our zombies. Before we do that though we need to consider how they are going to move around. Unity has built in pathfinding that we can use but first we need to bake a navigation mesh so our enemies know what areas that can and cannot move through.
Go back to the hierarchy and create an empty game object. Rename it scene and drag all your planes and cubes on top of it making them children. This will organize our scene. With that new scene object selected check the static box in the top right of the screen, to make all the children static.
Go to window, navigation, and a new window will appear to the right. Click bake in this window.
Now you will see our scene has a blue area overlaying all the walkable areas. This is our navigation mesh, our zombies are going to be the nav mesh agents, and our camera is going to be the nav mesh agent's destination.
This means that where ever we put a zombie in the scene it will walk across the navigation mesh towards its destination, avoiding all the areas not included in the mesh along the way.
Step 4: Lets Add Some Zombies.
With that out of the way go to the asset store and search free zombie character. Import the one from the picture above.
Go to the model folder and click on "z@walk." Under rig change the animation to legacy. Do the same thing for "z@fall_back."
Drag "z@walk" into the scene, rename it "zombie."
Remove the Animator component and add an Animation component.
Expand the "z@walk" in the assets folder and find the walk animation. Drag that into the animation slot of our zombie in the scene. Do the same for "z@fall_back" and drag the fall back animation into the same slot.
Add a nav mesh agent component to our zombie in the scene and change the speed to 1 and it's stopping distance to 3.
Finally, add a capsule collider component and position it so it surrounds the zombies head. This way the only way we can kill a zombie is with a head shot, to make it a little more difficult. Check is trigger on.
Step 5: Add Some Code to Your Zombie.
Now add a new script component to your zombie and call it zombieScript.cs. Copy and paste in this code: (replacing everything that is already there)
using UnityEngine; using System.Collections; public class zombieScript : MonoBehaviour { //declare the transform of our goal (where the navmesh agent will move towards) and our navmesh agent (in this case our zombie) private Transform goal; private NavMeshAgent agent; // Use this for initialization void Start () { //create references goal = Camera.main.transform; agent = GetComponent(); //set the navmesh agent's desination equal to the main camera's position (our first person character) agent.destination = goal.position; //start the walking animation GetComponent().Play ("walk"); } //for this to work both need colliders, one must have rigid body, and the zombie must have is trigger checked. void OnTriggerEnter (Collider col) { //first disable the zombie's collider so multiple collisions cannot occur GetComponent().enabled = false; //destroy the bullet Destroy(col.gameObject); //stop the zombie from moving forward by setting its destination to it's current position agent.destination = gameObject.transform.position; //stop the walking animation and play the falling back animation GetComponent().Stop (); GetComponent().Play ("back_fall"); //destroy this zombie in six seconds. Destroy (gameObject, 6); //instantiate a new zombie GameObject zombie = Instantiate(Resources.Load("zombie", typeof(GameObject))) as GameObject; //set the coordinates for a new vector 3 float randomX = UnityEngine.Random.Range (-12f,12f); float constantY = .01f; float randomZ = UnityEngine.Random.Range (-13f,13f); //set the zombies position equal to these new coordinates zombie.transform.position = new Vector3 (randomX, constantY, randomZ); //if the zombie gets positioned less than or equal to 3 scene units away from the camera we won't be able to shoot it //so keep repositioning the zombie until it is greater than 3 scene units away. while (Vector3.Distance (zombie.transform.position, Camera.main.transform.position) <= 3) { randomX = UnityEngine.Random.Range (-12f,12f); randomZ = UnityEngine.Random.Range (-13f,13f); zombie.transform.position = new Vector3 (randomX, constantY, randomZ); } } }
Step 6: Almost There!
Create a folder inside your assets folder and call it "Resources"
Drag your zombie into that folder. Now copy and paste that same zombie and add however many you would like anywhere in the scene.
Now we have to create a bullet.
Create a new sphere and scale it down to .01 across the board.
Rename it bullet and drag that into the resources folder as well. You can delete the one in the scene.
Drag in the gun model into the scene. Expand it and find one of its children called n3r-v. This is a smaller model of the gun. Drag it outside of its parent game object because this is the one we are going to use, so delete everything else.
Now re-position and rotate your gun so that it is pointing to the center of the screen and you can see it in play mode. Drag it on top of your camera making it a child.
Right click the gun and create a 3d object, sphere. This will make it a child. Rename it "spawnPoint"
Scale it down to .001 across the board and move it to the very tip of the gun barrel, this is where the bullets are going to spawn from.
Step 7: Add Some Code to Your Camera
Add a new script component to the camera and name is playerScript.cs. Copy and paste in this code:
using UnityEngine; using System.Collections;
public class playerScript : MonoBehaviour {
//declare GameObjects and create isShooting boolean. private GameObject gun; private GameObject spawnPoint; private bool isShooting;
// Use this for initialization void Start () {
//only needed for IOS Application.targetFrameRate = 60;
//create references to gun and bullet spawnPoint objects gun = gameObject.transform.GetChild (0).gameObject; spawnPoint = gun.transform.GetChild (0).gameObject;
//set isShooting bool to default of false isShooting = false; }
//Shoot function is IEnumerator so we can delay for seconds IEnumerator Shoot() { //set is shooting to true so we can't shoot continuosly isShooting = true; //instantiate the bullet GameObject bullet = Instantiate(Resources.Load("bullet", typeof(GameObject))) as GameObject; //Get the bullet's rigid body component and set its position and rotation equal to that of the spawnPoint Rigidbody rb = bullet.GetComponent(); bullet.transform.rotation = spawnPoint.transform.rotation; bullet.transform.position = spawnPoint.transform.position; //add force to the bullet in the direction of the spawnPoint's forward vector rb.AddForce(spawnPoint.transform.forward * 500f); //play the gun shot sound and gun animation GetComponent().Play (); gun.GetComponent().Play (); //destroy the bullet after 1 second Destroy (bullet, 1); //wait for 1 second and set isShooting to false so we can shoot again yield return new WaitForSeconds (1f); isShooting = false; } // Update is called once per frame void Update () { //declare a new RayCastHit RaycastHit hit; //draw the ray for debuging purposes (will only show up in scene view) Debug.DrawRay(spawnPoint.transform.position, spawnPoint.transform.forward, Color.green);
//cast a ray from the spawnpoint in the direction of its forward vector if (Physics.Raycast(spawnPoint.transform.position, spawnPoint.transform.forward, out hit, 100)){
//if the raycast hits any game object where its name contains "zombie" and we aren't already shooting we will start the shooting coroutine if (hit.collider.name.Contains("zombie")) { if (!isShooting) { StartCoroutine ("Shoot"); }
} } } }
Step 8: Click Play!
Click play and go back to your scene view.
We are casting a ray from the spawn point in order to detect if the gun is pointing at a zombie. So, you will see a green line coming from the tip of the gun. Rotate the spawn point until that line is pointing straight out of the gun barrel. Click the top right of the spawn points transform box to copy the component values. Click play again to exist play mode. Click that same area on the spawn point's transform again and paste the component values. We have to do this because changes don't get saved in play mode.
At this point everything should be working!
If you want to create a laser sight, right click in the asset folder and create a new material, change the albedo and emission to red. Create a new cube and make it a child of the gun. Scale its z to 1000 and everything else to .1.
This should make a long line that you can then drag your red material onto.
Reposition the sight so it is coming straight out of the end of the gun barrel.
Step 9: Transfer the Game to Your Phone!
Plug your phone in via usb to your computer.
For IOS go to build settings, player settings and change resolution to landscape left.
Change the bundle identifier to something like com.YourName.YourAppName!
Recommendations
We have a be nice policy.
Please be positive and constructive.
15 Comments
I screwed up, apparently you have to have blender (a free 3d modeling software) installed on your computer to be able to drag it into Unity. If you don't want to download that you could just go to the asset store and search for a free gun. Everything should work the same.
Very interesting!
We both work on VR right now, but on very different approaches. I am fighting really hard on Rpi to keep my FPS high every time I add an object. How is it on Unity/phone? Can you create a complex game running at 60 FPS? Or are you quickly limited even on recent phone? Thanks!
Yeah, pretty limited unfortunately, you have to keep it fairly simple. Some people on the app store have had success making complex games that run well but it is difficult from what I understand.
voted
Thanks!
Excellent KB article - nicely articulated, can't wait to give it a go! Thx for sharing...
Thanks for checking it out, but yeah do try it, it's actually pretty fun to play.
hello sir...actually we would like to built this as a project..so can i expect some kind of assisitance regarding this
VOTED! | http://www.instructables.com/id/How-to-Make-a-Virtual-Reality-Game-for-Beginners/ | CC-MAIN-2018-09 | refinedweb | 2,068 | 65.32 |
How to style React components is a controversial subject. There has been debate about whether styles should be defined in JavaScript using one of the new CSS in JS solutions, or if a more traditional method of styling using external stylesheets is the best approach.
Let's look at the different ways of styling React components and discuss their respective pros and cons. By the end of the article you should have a good idea of all the different techniques available to you, and be more informed to choose which one is right for your project.
The tried and tested way of styling your web app is to use external CSS stylesheets. When you're working in React, styling using plain CSS will look something like this:
import React from "react";
import "./box.css";
const Box = () => (
<div className="box">
<p className="box-text">Hello, World</p>
</div>
);
export default Box;
The styles are imported from a separate CSS file box.css:
box.css
.box {
border: 1px solid #f7f7f7;
border-radius: 5px;
padding: 20px;
}
.box-text {
font-size: 15px;
text-align: center;
}
Your build process will gather all of your imported CSS files and create a separate stylesheet, which will be linked to from your HTML file (projects like create-react-app will handle this for you).
create-react-app
Because you're just using Vanilla CSS you have access to all the great features in css that you are used to:
:before
:after
:hover
:nth-child
You're using plain CSS, one of the fundamental building blocks of the web. This means you don't have to add any dependencies to your project. You also don't have to worry about CSS becoming obsolete in the near future, unlike with some of the other styling solutions.
If you like, you can also use your favourite CSS preprocessor like Sass or Stylus; giving you access to powerful mixins, variables and everything else that these include. You will have to add a build step for this, however.
First of all you need to make sure that your JavaScript build process is set up to handle CSS imports. Luckily, if you are using create-react-app, this is handled out of the box.
Another downside to plain CSS when compared with other solutions is that you can't use JavaScript to directly style a component based on some variable or prop. With Vanilla CSS you have to take a quite meandering approach to this and and apply different classes to the component in response to conditions. For example:
const Button = props => {
const classNames = ["button"];
if (props.large) classNames.push("button-large");
if (props.rounded) classNames.push("button-rounded");
if (props.color) classNames.push(`button-${props.color}`);
return <button className={classNames.join(" ")} />;
};
In this example I have had to add many conditions to change the components className attribute according to the props that are passed in. We will come back to this example later to show how this is easier with some other React styling solutions.
className
The most important problem however, is that CSS was not designed to be used in a component-based architecture. CSS was designed to style documents and webpages, in these environments CSS' global namespace and cascade are powerful tools. However, in a component-based app the global namespace is a hindrance. It gets in the way.
In non-trivial React applications using CSS you are bound to run into some kind of class-name collision or see unexpected behaviour because some global styles have leaked where you didn't expect them to. Naming conventions like BEM can help to alleviate this problem within your project, but you have no guarantee that 3rd party code won't interfere with your styles.
A better solution would have styles which can be scoped directly to a component. Not only would this solve the global namespace issue, but it would allow better separation of concerns, as we could neatly bundle up all the code a component needs without having to worry about how it will affect all our other components.
The next way you can style your React components is by using inline styles. In React you can add styles directly to a component via the style prop using the JavaScript camelCased version of a CSS property:
style
import React from "react";
const boxStyle = {
border: "1px solid #f7f7f7",
borderRadius: "5px",
padding: "20px"
};
const boxTextStyle = {
fontSize: "15px",
textAlign: "center"
};
const Box = () => (
<div style={boxStyle}>
<p style={boxTextStyle}>Hello, World</p>
</div>
);
export default Box;
The good thing about this method is that you don't need any extra dependencies or build steps, you're just using React. It's quick to get started, and great for very small projects or proof of concept examples. Your styles are declared locally so you don't need to worry about global CSS collisions or be mindful of the cascade. Everything is neatly scoped to the component.
Because we are just using JavaScript we can make use of functions to add logic to our styles. We can take the example of the button with lots of classes that we looked at in the previous section on Vanilla CSS, and rewrite it using inline styles:
const styles = props => ({
fontSize: props.large ? "20px" : "14px",
borderRadius: props.rounded ? "10px" : "0",
background: props.color
});
const Button = props => {
return <button style={styles(props)} />;
};
You can see how this is much more declarative, instead of applying certain styles to the component in a round-about way using classes, the needed properties can be applied to the component directly.
In practice, for any projects that aren't just hobby projects, I would never recommend using inline styles. This is because we lose so many of the best things you get in CSS by taking a JS only approach:
mouseover
mouseleave
Missing out on any one of these is a deal-breaker for a lot of apps. You would also lose anything that you have transforming your CSS in a build step, for example vendor auto-prefixing. Subjectively, I also find writing CSS and JavaScript objects very clunky and unenjoyable.
The next method we're going to look at is CSS Modules. CSS Modules are like an evolution of Vanilla CSS where all the class and animation names are scoped locally. This means you avoid all of the problems that arise from the global namespace of CSS.
import React from "react";
import styles from "./box.css";
const Box = () => (
<div className={styles.box}>
<p className={styles.boxText}>Hello, World</p>
</div>
);
export default Box;
You import a box.css CSS file just like in the Vanilla CSS method, but this time it's a named import which will have normal JavaScript properties with your class-names as the keys. The corresponding CSS file here would just look like normal CSS:
.box {
border: 1px solid #f7f7f7;
border-radius: 5px;
padding: 20px;
}
.boxText {
font-size: 15px;
text-align: center;
}
CSS Modules will alter the class names in the resulting css files to be unique, which will effectively scope your styles, making them more appropriate for a component-based web app. By using CSS Modules you completely negate the main downside of using Vanilla CSS (the global namespace), while keeping all of the benefits. You still have access to everything in CSS, and you can still use a CSS preprocessor if you like (it will still require a build step).
Your codebase will still be made up of plain CSS files, so even though you have to add a build step and slightly change the way you import and apply your styles, the footprint left on your app is small. This means that if you ever need to pivot away from CSS Modules in the future, the transition should be quite painless.
CSS modules solve the global nature of CSS by scoping classes, but other than that it has the same problems as plain CSS stylesheets.
Styled Components is a CSS in JS library for React that lets you add local component-scoped styles in JS, but unlike with inlined styles, styled components compiles the JavaScript into real CSS. The library uses the template tag literal syntax to apply style to your components. It looks like this:
import React from "react";
import styled from "styled-components";
const BoxWrapper = styled.div`
border: 1px solid #f7f7f7;
border-radius: 5px;
padding: 20px;
`;
const BoxText = styled.p`
border: 1px solid #f7f7f7;
border-radius: 5px;
padding: 20px;
`;
const Box = () => (
<BoxWrapper>
<BoxText>Hello, World</BoxText>
</BoxWrapper>
);
export default Box;
Because the JavaScript is turned into actual CSS you can still do everything in styled components that you can in CSS. This includes media queries, pseudo-selectors, animations etc. Rather than using the clunky camel-case properties that we saw with inline styles, with styled components you can just write normal CSS. It's easy to pick up if you already know CSS.
Because we are using a CSS in JS solution we also have full access to the JavaScript language to alter and apply different styles based on props. A component's props are actually passed into calls to styled, which is very powerful:
styled
import React from "react";
import styled from "styled-components";
const Button = styled.button`
background: ${props => props.color};
font-size: ${props => props.large ? "18px" : "14px"};
border-radius: ${props => props.rounded ? "10px" : "0"};
`;
export default props => <Button {...props} />;
Unlike with Vanilla CSS your styles are tightly scoped to the component that you create when you declare them, solving the global namespace problem.
If you like the look of Styled Components it's worth mentioning here that there are alternative CSS in JS options. Check out Glamorous as well and see which you prefer.
Bringing Styled Components into your project does add an extra dependency. The CSS in JS world is pretty turbulent at the moment. It's possible that in a year there will be a better library and by using styled components you have given yourself technical debt.
It's also another thing you'll have to get new hires up to speed with when they join the team. As well as this, it's worth considering the debate I alluded to at the start of the article, why might using CSS in JS not be a good idea?
After reading the summaries I've given above you're probably leaning towards using either Styled Components or CSS modules to style your React apps. Inline styles just aren't powerful enough for most apps, and if you want to go the JS route, Styled Components are a better option. Conversely if you want to go the classic CSS way, CSS Modules offer you something that Vanilla CSS can't - scoped selectors.
As I touched on at the start of the article the use of CSS in JS is quite a controversial subject at the moment, and the choice of Styled Components or CSS Modules really brings this debate down to it's essence. It's out of the scope of this article to discuss but if you want to know the argument against styling with JavaScript try this article: Stop using CSS in JavaScript for web development (click-bait alert). Just remember there are 2 sides to every story, and we've only touched on some of the benefits of using Styled Components in this post.
The aim of this article was to give a brief overview of the key different ways we can style React components. As such, I haven't dug deep into the details of each method and encourage you to do more research if any of these have interested you!
From now on in my own projects I am going to be using styled components, I just feel it offers more flexibility and encourages the componentization of my code in a way which keeps my codebase as neat and tidy as possible.
Let me know what you think about styling React components in the comments. Did I overlook anything? Do you have anything to add? Let us know! 😊 | https://www.bhnywl.com/blog/styling-react-components/ | CC-MAIN-2018-51 | refinedweb | 2,001 | 60.14 |
In this post, we are going to build a simple app using React and Redux. This app is going to perform a complex task (pun intended!) of changing the number that is already displayed. The final app will look something like the one shown in the image below, but I believe this will be a great example to start building apps with React.
You may refer to my previous article Getting started with Redux for some of the references.
If we were using only React and not Redux, the development of this application will be very simple. We might create two components one that will have the button to change the number and the other that actually displays the number. And our parent component will render both these components.
Let us name the component to display the number as
DisplayNumber and the component that has the button to change the number as
ChangeNumber and our parent component as
App.
So our <
App/> component will render the two components and the code may look something like this:
class App extends React.Component { constructor() { super(); this.state = { number: "12" }; this.changeNumber = this.changeNumber.bind(this); } changeNum(newNumber) { this.setState({ number: newNumber }); } render() { return ( <div> <ChangeNumber changeNumber={this.changeNum}/> <DisplayNumber numberDisplayed={this.state.number}/> </div> ); } }
The
App component has a constructor that holds the initial state, which is given as
"12" to “
number“. This state is passed as props to the <
DisplayNumber/> Component and it displays this number.
The method
changeNum is passed as props to the <
ChangeNumber/> component. So when the button is clicked in the
ChangeNumber component, it changes the state of “
number” to the new number passed by the
onClick function. The
changeNum method takes an argument which will be the new number.
The
<ChangeNumber> component will look like follows:
<button className="btn btn-primary" onClick={() => this.props.changeNumber('25')}> Change Number </button>
Also, the
<DisplayNumber/> component will be created as follows. It receives the number as props and displays it.
The number is: {this.props.number}
With Redux we will try to perform the similar function. But we won’t create a constructor in the
App component to hold the state.
Using Redux
You can use the officially supported create-react-app boilerplate to build this application.
To install Redux:
npm install redux --save
As mentioned in my earlier post, redux can be used independently without using react. So React is independent and Redux is independent. Therefore we need to bind React and Redux together. In order to do so, we will use the package
react-redux.
npm install --save react-redux
Our directory structure should look something like this:
We will create the folders named
"actions", "components", "containers", "reducer". We will store the stateless components inside the components folders and the smart components inside the containers.
Actions folder contains all the actions and the reducers folder contains all the reducers. Store of the application is created in the
store.js file.
Our
package.json with dependencies might look as follows after installing the redux and react-redux package:
"dependencies": { "babel": "^6.23.0", "babel-core": "^6.23.1", "babel-loader": "^6.4.0", "babel-preset-es2015": "^6.22.0", "babel-preset-react": "^6.23.0", "react": "^15.4.2", "react-dom": "^15.4.2", "react-redux": "^5.0.4", "redux": "^3.6.0", "webpack": "^2.2.1", "webpack-dev-server": "^2.4.1" }
Creating components
As shown above, we will first create two components. One component will display the number and we will name it as
displayNumber as before. So we will create a new JavaScript file called
displayNumber.js inside the
components folder. And we will create another component called
changeNumber and similarly, we will create a JavaScript file inside the components folder.
Both these components are going to be stateless components as they are not going to deal with changing the state of the application directly. To know more about the stateless components you can refer to the video from Mindspace on YouTube.
As we will pass
props as an argument to the function, we write
props.numberDisplayed instead of
this.props.numberDisplayed. Rest of the code for both the components will be same as shown above.
displayNumber.js
import React from "react"; export const displayNumber = (props) => { return ( <div> <div className="row"> <div className="col-xs-12"> <p>The number is: {props.numberDisplayed}</p> </div> </div> </div> ); }
changeNumber.js
import React from "react"; export const changeNumber =(props) => { return ( <div> <div className="row"> <div className="col-xs-12"> <button className="btn btn-default" onClick={() => props.changenumber('25')}>Change Number</button> </div> </div> </div> ); }
Next, we are going to create a
app.js file inside the containers folder that will render both the displayNumber and changeNumber components.
We will begin by importing the above-created components as well as React and render method from the
react and
react-dom respectively.
App.js
import React from "react"; import {render} from "react-dom"; import {DisplayNumber} from '../components/DisplayNumber'; import {ChangeNumber} from '../components/ChangeNumber'; class App extends React.Component { render() { return ( <div className="container"> <ChangeNumber /> <DisplayNumber /> </div> ); } } } export default App;
The image below is a visual demonstration showing that the button is rendered from
<ChangeNumber> component and the number displayed is rendered from the
<DisplayNumber> component.
Creating an Action
We will create an action named
changeNum inside the file
changeActions.js. The type of this action is “
CHANGE_NUMBER” and the
payload is passed as a number when the change number button is clicked in the
ChangeNumber component.
export function changeNum(number) { return { type: "CHANGE_NUMBER", payload: number }; }
You can refer to my previous article on Getting started with Redux or the official documentation of Redux to know more about the Action. This article focusses on creating an Action.
Creating a Reducer
We will create a reducer
changeReducer in the
changeReducer.js file. In this reducer, we define the initial state as
"12" and then as per the action performed we change the state. Note that we are not mutating the initial state. We are instead creating a new object called
state and we are changing the value of the newly created object.
changeReducer.js
const changeReducer = (state = { number : '12' }, action) => { switch (action.type) { case "SET_NAME": state = { ...state, number: action.payload }; break; } return state; } export default changeReducer;
Creating a store
We begin by importing the method
createStore from
redux. We create the store by passing the reducer to the
createStore method. Note I am using combineReducers here. This is to show how we can add multiple reducers in createStore method.
CreateStore method takes a single reducer, but there can be multiple reducers in an application.
store.js
import { createStore, combineReducers } from 'redux'; import change from './reducers/changeReducer'; export default createStore( combineReducers({ change }), {} );
Now, we are done with creating a store, actions, and reducers.
Updating the App.js
Now is the time to make some changes in the
App.js file.
import {connect} from 'react-redux'; import {changeNum} from '../actions/changeActions'; class App extends React.Component { render() { return ( <div className="container"> <ChangeNumber changeNumber={()=> this.props.changeNum('25')} /> <DisplayNumber numberDisplayed={this.props.change.number} /> </div> ); } } const mapStateToProps = (state) => { return { change: state.change } } const mapDispatchToProps = (dispatch) => { return { changeNum: (number) => { dispatch(changeNum(number)); } } } export default connect(mapStateToProps, mapDispatchToProps)(App);
In the above code, we can see two new methods
mapStateToProps and
mapDispatchToProps.
Here, we need to tell redux which properties of the store’s state and actions we want to use in the components. This is done through these two methods:
mapStateToProps and
mapDispatchToProps.
mapStateToProps
mapStateToProps takes the store’s
state as an argument which is provided by redux. It is used to link the component to certain part of store’s state. It returns a javascript object. Here key is the property name that can be used in the component. In this example we are going to use the property name “
change” in our
<DisplayNumber > component.
state.change implies that we are using the state from the
changeReducer. This property can be then used in the component as props.
If we
console.log(this.props.change) we can see the
Object {number: "12}. As we have set the initial state or default state as 12 in the changeReducer.
Now since we want to display this number 12 in the
displayNumber component, we pass this as props. Here we are passing it as
numberDisplayed={this.props.change.number}
mapDispatchToProps
mapDispatchToProps Sets up the actions which we want to use in the component. It takes an argument and returns a javascript object, and value of this object is the action.
changeNum is the props name that is passed as a method where
number is the argument which calls a
dispatch method and it expects to get the same action definition which we have created in the actions with
type and
payload.
The
store always updates itself if there is a change in the state. As we have set the initial state of the number in the
changeReducer as 12, the number is initially displayed as 12. But once the change number button is clicked, it triggers the action and since we pass the number as 25, the reducer changes the state from 12 to 25. The stores updates this state of the application and as the
<App/> component is always listening to the store, the change is reflected in the
DisplayNumber component.
Connect
Connect connects React with Redux. It expects the above created two functions,
mapStateToProps and
mapDispatchToProps. And it then returns another function. This function then takes
<App/> component here and we export this
connect method as default.
Provider
If we were using just React, we would render the
<App/> component directly. But in Redux will we have to use something called as
<Provider>. We begin by importing
Provider from
react-redux.
The
<Provider> makes the Redux store available to the
connect() calls in the component hierarchy below. And we wrap the
<App/> inside the
<Provider>.
We cannot use
connect() without wrapping a parent component,
<App/> in our example, in
<Provider>
Our
index.js file will now look like:
import {render} from 'react-dom'; import React from 'react'; import App from './containers/App'; import {Provider} from 'react-redux'; import store from './store'; render( <Provider store={store}> <App/> </Provider>, window.document.getElementById('root'))
That’s all we are done coding our first simple application using React and Redux. If we run our application, we can see the output as shown in the image below.javascript, react, reactjs, redux, web application | http://parseobjects.com/react-redux-tutorial-build-simple-app/ | CC-MAIN-2019-30 | refinedweb | 1,742 | 58.89 |
In contrast to WPF, there's no support for 3D in Silverlight 2.0. Still, it has the power of the .NET runtime built-in, so there's nothing to stop you from rolling your own little 3D engine in Silverlight. Not a high end one, but something to play around with and to add some neat effects to your RIA.
All you need is some understanding of math and the basic primitive of all things 3D - the triangle primitive. Specifically, I'm talking about a textured triangle with an image mapped on it.
Consequently, this article is all about implementing a triangle primitive as a custom control in Silverlight 2.0.
Here's a snippet that shows how the ImageTriangle control can be used directly in XAML:
ImageTriangle
<UserControl x:Class="Controls.Page"
xmlns=""
xmlns:x=""
xmlns:
<Canvas ...>
<xk:ImageTriangle
</Canvas>
</UserControl>
As always, when you reference custom classes from XAML, you must add a namespace reference to the control assembly first:
xmlns:xk="clr-namespace:XamlKru.Controls.Primitives;assembly=XamlKru.Controls"
You can then reference the triangle in XAML with <xk:ImageTriangle ... /> and set the corner points (Point1, Point2, and Point3) via attributes as well as the TextureSource. TextureSource is an ImageSource that takes the image that will be rendered as texture onto the triangle. You can also set the texture coordinates by setting the TexturePositions, e.g., TexturePositions="0,0 1,0 0,1".
<xk:ImageTriangle ... />
Point1
Point2
Point3
TextureSource
ImageSource
TexturePositions
TexturePositions="0,0 1,0 0,1"
Doing that in XAML is perfect for playing around with the control and getting to know how the various properties work and play together. However, when you want to use it for 3D or something similar, you're going to do it in code, be it C#, VB, or maybe IronPython.
In addition to the properties mentioned above, you can set all three corner points in a single method call in imperative code, which has some performance advantages:
tri1.SetPoints(new Point(0,0), new Point(100,0), new Point(0,100));
The TextureSource can be set directly to a BitmapImage, which allows you to use images downloaded from the web, or even from the local file system, as textures:
BitmapImage
var texture = new BitmapImage();
texture.SetSource(inputStream);
tri1.TextureSource = texture;
There is also an IsClockwise property that you might want to use for backface culling in 3D.
IsClockwise
tri.IsClockwise
How would you draw a textured triangle in Silverlight? There are two ways I could think of:
Image
Path
ImageBrush
I chose the second way, because it seemed to be a little bit faster.
Point1, Point2 and Point3 are implemented as Dependency Properties. Dependency Properties allow to perform calculations in the PropertyChanged callback and add the capability to take part in data-binding and animations.
PropertyChanged
In order to transform the triangle to fit the three corner points, I've added a MatrixTransform to the Path-element's RenderTransform. It is fairly straightforward to calculate the matrix elements from the given points. An article that explains how that works can be found here.
MatrixTransform
RenderTransform
When you look at the control template and the UpdateCorners method, you will notice that there's a second transform applied as RenderTransform. When I got the code to work for the first time and created two triangles next to each other, I noticed a small, but irritating seam between them. This is due to how the antialiasing works in Silverlight. That's what the second transform is for. It scales the triangle up just a little bit (0.5 - 1 pixels on each side), just enough to overlap the seam.
UpdateCorners
Great, now we've got a triangle that we can position arbitrarily. Are we done? Actually, no! Up to that point, the image on the triangle is fixed. What if you want to create a plane made of two triangles (or better 4 or 16, as I will explain below) - you don't want to load a separate image for each half!
We need a way to stretch and move the image position on the triangle itself. We need texture coordinates.
Again, this is done with a MatrixTransform. This time it is applied to the ImageBrush, filling the Path with a texture image. In this case, it is a little bit more tricky, though.
Effectively, what texture coordinates say is, where on the texture image the triangle lies. For the matrix applied to the ImageBrush, it has to be the other way round, we want to position the image as seen from the triangle's coordinate system.
We can do this by first calculating the matrix that maps the triangle onto the image and then inverting that matrix to get a transformation that maps the image onto the triangle. Exactly this is what's happening in the UpdateTexturePositions method:
UpdateTexturePositions
var m = new Matrix(m11, m12, m21, m22, ox, oy).Invert();
_brushTransform.Matrix = m;
When you create a plane from two triangles and transform it to 3D, you might notice that it doesn't look quite right. This is because the depth information is not used. The only way to get around that is by adding more triangles. If you use eight triangles instead of two, the texture will look more realistic in 3D. You have to find the right tradeoff between performance and visual correctness.
This article doesn't explain how to do 3D, it just provides you with a basic building block! However, the project files contain some 3D code to get you started.
My experiments have shown that rendering something around 100 triangles is possible at an acceptable frame-rate on a recent machine. Nothing to get too excited about, you won't be able to build a Halo or Second Life in Silverlight anytime soon. At least, not on a PC. But, it will be enough to do Coverflow-style animations, spinning 3D globes, and 360° panoramas, or maybe even games.
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
< Grid x:
< xk:ImageTriangle x:
</Grid>
i2 = new ImageTriangle();
i2.SetPoints(new Point(0, 0), new Point(180, 20), new Point(0, 200));
PointCollection p = new PointCollection();
p.Add(new Point(0, 0));
p.Add(new Point(1, 0));
p.Add(new Point(0, 1));
i2.TexturePositions = p;
i2.TextureSource = new BitmapImage(new Uri("images/blue hills.jpg", UriKind.Relative));
LayoutRoot.Children.Add(i2);
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/25786/A-Textured-Triangle-Control-for-Silverlight-the?msg=2592492 | CC-MAIN-2016-36 | refinedweb | 1,108 | 55.64 |
Difference between revisions of "IO inside"
Latest revision as of 09:14, 12 January 2021
Haskell I/O can be a source of confusion and surprises for new Haskellers - if that's you, a good place to start is the Introduction to IO which can help you learn the basics (e.g. the syntax of I/O expressions) before continuing on.
While simple I/O code in Haskell looks very similar to its equivalents in imperative languages, attempts to write somewhat more complex code often result in a total mess. This is because Haskell I/O is really very different in how it actually works.
The following text is an attempt to explain the details of Haskell I/O implementations. This explanation should help you eventually learn all the smart I/O tips. Moreover, I've added a detailed explanation of various traps you might encounter along the way. After reading this text, you will be well on your way towards mastering I/O in Haskell.
Contents
- 1 Haskell is a pure language
- 2 I/O in Haskell, simplified
- 3 Running with the RealWorld
- 4 (>>=) and do notation
- 5 Mutable data (references, arrays, hash tables...)
- 6 I/O actions as values
- 7 Exception handling (under development)
- 8 Interfacing with C/C++ and foreign libraries (under development)
- 9 The dark side of the I/O monad
- 10 Welcome to the machine: the actual GHC implementation
- 11 Further reading
- 12 To-do list
Haskell is a pure language
Haskell is a pure language and even the I/O system can't break this purity. Being pure means that the result of any function call is fully determined by its arguments. Procedural entities's purity allows the compiler to call only functions whose results are really required to calculate the final value of a top-level function (e.g.
main) - this is called lazy evaluation. It's a great thing for pure mathematical computations, but how about I/O actions? A function like
putStrLn "Press any key to begin formatting"
can't return any meaningful result value, so how can we ensure that the compiler will not omit or reorder its execution? And in general: How we can work with stateful algorithms and side effects in an entirely lazy language? This question has had many different solutions proposed while Haskell was developed (see History of Haskell), with one solution eventually making its way into the current standard.
I/O in Haskell, simplified
Let's imagine that we want to implement the well-known
getchar I/O operation in Haskell. What type should it have? Let's try:
getchar :: Char get2chars = [getchar, getchar]
What will we get with
getchar having just the
Char type? You can see one problem in the definition of
get2chars immediately:
- because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "unnecessary" calls to
getcharand use one returned value twice:
get2chars = let x = getchar in [x, x] -- this should be a legitimate optimisation!
How can this problem be solved from the programmer's perspective? that the calls have different parameters. But there's another problem:
-.
We need to give the compiler some clue to determine which function it should call first. The Haskell language doesn't provide any way to specify the sequence needed to evaluate
getchar 1 and
getchar 2 -
Furthermore,
get2chars has the same purity problems as the
getchar function. If you need to call it two times, you need a way to describe the order of these calls. Consider this:
get4chars = [get2chars 1, get2chars 2] -- order of calls to 'get2chars' isn't defined
We already know how to deal with this should the fake return value of
get2chars be?
While that does work, it's error-prone:
get2chars :: Int -> (String, Int) get2chars i0 = ([a, b], i2) where (a, i1) = getchar i2 -- this might take a while... (b, i2) = getchar i1
Using individual
let-bindings is an improvement:
get2chars :: Int -> (String, Int) get2chars i0 = let (a, i1) = getchar i2 in -- error: i2 is undefined! let (b, i2) = getchar i1 in ([a, b], i2)
but only a minor one:
get2chars :: Int -> (String, Int) get2chars i0 = let (a, i1) = getchar i0 in let (b, i2) = getchar i2 in -- here we go again... ([a, b], i2)
So how in Haskell shall we prevent such mistakes from happening? With a monad!
What is a monad?
But what is a monad? For Haskell, it's a three-way partnership between:
- a type:
M a
- an operator
unit(M) :: a -> M a
- an operator
bind(M) :: M a -> (a -> M b) -> M b
where
unit(M) and
bind(M) satisify the monad laws.
This would translate literally into Haskell as:
class Monad m where unit :: a -> m a bind :: m a -> (a -> m b) -> m b
For now, we'll just define
unit and
bind directly - no type classes.
So how does something so
vague abstract help us with I/O? Because this abstraction allows us to hide the manipulation of all those fake values - the ones we've been using to maintain the correct sequence of evaluation. We just need a suitable type:
type IO' a = Int -> (a, Int)
and appropriate defintions for
unit and
bind:
unit :: a -> IO' a unit x = \i0 -> (x, i0) bind :: IO' a -> (a -> IO' b) -> IO' b bind m k = \i0 -> let (x, i1) = m i0 in let (y, i2) = k x i1 in (y, i2)
Now for some extra changes to
getchar and
get2chars:
getchar :: IO' Char {- = Int -> (Char, Int) -} get2chars :: IO' String {- = Int -> (String, Int) -} get2chars = \i0 -> let (a, i1) = getchar i0 in let (b, i2) = getchar i1 in let r = [a, b] in (r, i2)
before we use
unit and
bind:
getchar :: IO' Char get2chars :: IO' String get2chars = getchar `bind` \a -> getchar `bind` \b -> unit [a, b]
We no longer have to mess
up with those fake values directly! We just need to be sure that all the operations on I/O actions like
unit and
bind use them correctly. We can then make
IO',
unit,
bind and (in this example)
getchar into an abstract data type and just use those abstract I/O operations instead -
only the Haskell implementation (e.g. compilers like ghc or jhc) needs to know how I/O actions actually work.
So there you have it - a miniature monadic I/O system in Haskell!
Running with the
RealWorld
Warning: The following story about I/O is incorrect in that it cannot actually explain some important aspects of I/O (including interaction and concurrency). However, some people find it useful to begin developing an understanding.
The
main Haskell function has the type:
main :: RealWorld -> ((), RealWorld)
where
RealWorld is a fake type used instead of our Int. It's something
like the baton passed in a relay race. When
main calls some I/O action,
it passes the
RealWorld it received as a parameter. All I/O actions the "world" it received to the first
getChar. This
getChar returns some new value of type
RealWorld
that gets used in the next call. Finally,
main returns the "world" it got
from the second
getChar.
- Is it possible here to omit any call of
getCharif the
Charit read is not used? No: we need to return the "world" that is the result of the second
getCharand this in turn requires the "world" returned from the first
getChar.
- Is it possible to reorder the
getCharcalls? No: the second
getCharcan I/O action that is called from
main, directly or indirectly.
This means that each action inserted in the chain will be performed
just at the moment (relative to the other I/O
I/O notices that this type is like
(), so
all these parameters and result values can be omitted from the final generated code - they're not needed any more!
(>>=) and
do notation
All beginners (including me) start by thinking that
do is some
super-awesome statement that executes I/O actions. That's wrong -
do is just
syntactic sugar that simplifies the writing of definitions that use I/O (and also other monads, but that's beyond the scope of this tutorial).
do notation eventually gets translated to
a series of I/O actions passing "world" values around like we've manually written above.
This simplifies the gluing of several I/O actions together.
You don't need to use
do for just one action; for example,
main = do putStr "Hello!"
is desugared to:
main = putStr "Hello!"
Let's examine how to desugar a
do-expression with multiple actions in the
following example:
main = do putStr "What is your name?" putStr "How old are you?" putStr "Nice day!"
The
do-expression here just joins several I/O actions that should be
performed sequentially. It's translated to sequential applications
of one of the so-called "binding operators", namely
(>>):
main = (putStr "What is your name?") >> ( (putStr "How old are you?") >> (putStr "Nice day!") )
This binding operator just combines two I/O)
where
(>>=) corresponds to the
bind operation in our miniature I/O system. >>= reaction) world0 = let (a, world1) = action world0 (b, world2) = reaction a world1 in (b, world2)
- What does the type of
reaction- namely
a -> IO b- mean? By substituting the
IOdefinition, we get
a -> RealWorld -> (b, RealWorld). This means that
reactionactually has two parameters - the type
aactually used inside it, and the value of type
RealWorldused for sequencing of I/O actions. That's always the case - any I/O definition has one more parameter compared to what you see in its type signature. This parameter is hidden inside the definition of the type synonym
IO:
type IO a = RealWorld -> (a, RealWorld)
- You can use these
(>>)and
(>>=)operations to simplify your program. For example, in the code above we don't need to introduce the variable, because the result of
readLncan be send directly to
main = readLn >>= print I/O action. Note also that if
action1 has type
IO a then
x will just have type
a; you can think of the effect of
<- as "unpacking" the I/O definition into the low-level
code that explicitly passes "world" values. I think it should be enough to help you finally realize how the
do translation and binding operators work.
Oh, no! I forgot the third monadic operator -
return, which corresponds to
unit in our miniature I/O system. I/O definition. action of some I/O sequence. For example try to
translate the following definition into the corresponding low-level code:
main = do a <- readLn when (a>=0) $ do return () print "a is negative"
and you will realize that the
a. If you need to escape from the middle of an I/O definition, you can use an
if expression:-expression.?
- The two calls to
readVariablelook the same, so the compiler can just reuse the value returned by the first call.
- The result of the
writeVariablecall isn't used so the compiler can (and will!) omit this call completely.
- These three calls may be rearranged in any order because they appear to be independent of each other.
This is obviously not what was intended. What's the solution? You already know this - use I/O actions! Doing that guarantees:
- the result of the "same" action (such as
readVariable varA) will not be reused
- each action will have to be executed
- the execution order will be retained as written
So, the code above really should be written as:
import Data.IORef I/O value-specific read and write operations in the I/O with I/O to the definition!
Encapsulated mutable data: ST
If you're going to be doing things like sending text to a screen or reading data from a scanner,
IO is the type to start with - you can then
customise or add existing or new I/O operations as you see fit. But what if that shiny-new (or classic) algorithm you're working on really only needs
mutable state - then having to drag that
IO type from
main all the way down to wherever you're implementing the algorithm can
get quite annoying.
Fortunately there is a better way! One that remains totally pure and yet allows the use of references, arrays, and so on - and it's done using, you guessed it, Haskell's versatile type system (and one extension). It is the
ST type, and it too is monadic!
So what's the big difference between the
ST and
IO types? In one word -
runST:
runST :: (forall s . ST s a) -> a
Yes - it has a very unusual type. But that type allows you to run your stateful computation as if it was a pure definition!
The
s type variable in
ST is the type of the local state. Moreover, all the fun mutable stuff available for
ST is
quantified over
s:
newSTRef :: a -> ST s (STRef s a) newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)
So why does
runST have such a funky type? Let's see what would happen if we wrote
makeSTRef :: a -> STRef s a makeSTRef a = runST (newSTRef a)
This fails, because
newSTRef a doesn't work for all state types
s - it only works for the
s from the return type
STRef s a.
This is all sort of wacky, but the result is that you can only run an
ST computation where the output type is functionally pure, and makes no references
to the internal mutable state of the computation. In exchange for that, there's no access to I/O operations like writing to the console - only references, arrays, and
such that come in handy for pure computations.
Important note - the state type doesn't actually mean anything. We never have a value of type
s, for instance. It's just a way of getting the type system
to do the work of ensuring purity is preserved.
On the inside,
runST runs a computation with a baton similar to
RealWorld for the
IO type.
Once the computation has completed
runST separates the resulting value from the final baton. This value is then returned by
runST.
The internal implementations are so similar there's there's a function:
stToIO :: ST RealWorld a -> IO a
The difference is that
ST uses the type system to forbid unsafe behavior like extracting mutable objects from their safe
ST wrapping, but allowing purely functional outputs to be performed with all the handy access to mutable references and arrays.
For example, here's a particularly convoluted way to compute the integer that comes after zero:
oneST :: ST s Integer -- note that this works correctly for any s oneST = do var <- newSTRef 0 modifySTRef var (+1) readSTRef var one :: Int one = runST oneST
I/O actions as values
By this point you should understand why it's impossible to use I/O
actions inside non-I/O (pure) functions. Such functions just don't
get a "baton"; they don't know any "world" value to pass to an I/O action.
The
RealWorld type is an abstract datatype, so pure functions
also can't construct
RealWorld values by themselves, and it's
a strict type, so
undefined also can't be used. So, the
prohibition of using I/O actions inside pure functions is maintained by the
type system (as it usually is in Haskell).
But while pure code can't execute I/O actions, it can work with them
as with any other functional values - they can be stored in data
structures, passed as parameters, returned as results, collected in
lists, and partially applied. But an I/O action will remain a
functional value because we can't apply it to the last argument - of
type
RealWorld.
In order to execute the I/O action we need to apply it to some
RealWorld value. That can be done only inside other I/O actions,
in their "actions chains". And real execution of this action will take
place only when this action those local bindings - actions are executed in the exact order in which they're written, because they pass the "world" value from action to action as we described above. Thus, this version of the function is much easier to understand because we don't have to mentally figure out the data dependency of the "world" value.
Moreover, I/O
I/O function called from
main). Until that's done, they will remain like any function, in partially
evaluated form. And we can work with I/O actions as with any other
functions - bind them to names (as we did above), save them in data
structures, pass them as function parameters and return them as results - and
they won't be performed until you give them that inaugural
RealWorld
parameter!
Example: a list of I/O actions
Let's try defining a list of I/O I/O action that you write in a
do-expression I/O actions just like we can with any other functional values! For example,
let's define a function that executes all the I/O actions in the list:
sequence_ :: [IO a] -> IO () sequence_ [] = return () sequence_ (x:xs) = do x sequence_ xs
No mirrors or smoke - we just extract I/O actions from the list and insert
them into a chain of I/O I/O.
Example: returning an I/O action as a result
How about returning an I/O action as the result of a function? Well, we've done
this for each I/O definition - they all return I/O actions
that need a
RealWorld value to be performed. While we usually just
execute them as part of a higher-level I/O definition, it's also
possible to just collect them without actual execution:
main = do let a = sequence ioActions b = when True getChar c = getChar >> getChar putStr "These let-bindings are not executed!"
These assigned I/O actions can be used as parameters to other
definitions, or written to global variables, or processed in some other
way, or just executed later, as we did in the example with
get2chars.
But how about returning a parameterized I/O action from an I/O definition? Here's a definition that returns the i'th byte from a file represented as a Handle:
readi h i = do hSeek h AbsoluteSeek i hGetChar h
So far so good. But how about a definition that returns the i'th byte of a file with a given name without reopening it each time?
readfilei :: String -> IO (Integer -> IO Char) readfilei name = do h <- openFile name ReadMode return (readi h)
As you can see, it's an I/O definition that opens a file and returns...an
I/O action that will read the specified byte. But we can go
further and include the
readi body in
readfilei:
readfilei name = do h <- openFile name ReadMode let readi h i = do hSeek h AbsoluteSeek i AbsoluteSeek i hGetChar h return readi
What have we done here? We've build a parameterized I/O action involving local
names inside
readfilei and returned it as the result. Now it can be
used in the following way:
main = do myfile <- readfilei "test" a <- myfile 0 b <- myfile 1 print (a,b)
This way of using I/O actions is very typical for Haskell programs - you just construct one or more I/O actions that you need, with or without parameters, possibly involving the parameters that your "constructor" received, and return them to the caller. Then these I/O actions can be used in the rest of the program without any knowledge about your internal implementation strategy. One thing this can be used for is to partially emulate the OOP (or more precisely, the ADT) programming paradigm.
Example: a memory allocator generator
As an example, one of my programs has a module which is a memory suballocator. It receives the address and size of a large memory block and returns two specialised I/O operations - definition. Because the creation of these references is a part of the
memoryAllocator I/O-action the operations define these operations using I/O actions. Instead of a "class" let's define a structure containing implementations of all the operations operation I/O operations I/O }
Exception handling (under development)
Although Haskell provides a set of exception raising/handling features comparable to those in popular OOP languages (C++, Java, C#), this part of the language receives much less attention. This is for two reasons:
- you just don't need to worry as much about them - most of the time it just works "behind the scenes".
- Haskell, lacking OOP-style inheritance, doesn't allow the programmer to easily subclass exception types, therefore limiting the flexibility of exception handling.
The Haskell RTS raises more exceptions than traditional languages - pattern match failures, calls with invalid arguments (such as
head []) and computations whose results depend on special values
undefined and
error "...." all raise their own exceptions:
- example 1:
main = print (f 2) f 0 = "zero" f 1 = "one"
- example 2:
main = print (head [])
- example 3:
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))
This allows the writing of programs in a much more error-prone way.
Interfacing with C/C++ and foreign libraries (under development)
While Haskell is great at algorithm development, speed isn't its best side. We can combine the best of both worlds, though, by writing speed-critical parts of program in C and the rest in Haskell. We just need a way to call C functions from Haskell and vice versa, and to marshal data between both worlds.
We also need to interact with the C world for using Windows/Linux APIs, linking to various libraries and DLLs. Even interfacing with other languages often requires going through C world as a "common denominator". Chapter 8 of the Haskell 2010 report provides a complete description of interfacing with C.
We will learn FFI via a series of examples. These examples include C/C++ code, so they need C/C++ compilers to be installed, the same will be true if you need to include code written in C/C++ in your program (C/C++ compilers are not required when you just need to link with existing libraries providing APIs with C calling convention). On Unix (and Mac OS?) systems, the system-wide default C/C++ compiler is typically used by GHC installation. On Windows, no default compilers exist, so GHC is typically shipped with a C compiler, and you may find on the download page a GHC distribution bundled with C and C++ compilers. Alternatively, you may find and install a GCC/MinGW version compatible with your GHC installation.
If you need to make your C/C++ code as fast as possible, you may compile your code by Intel compilers instead of GCC. However, these compilers are not free, moreover on Windows, code compiled by Intel compilers may not interact correctly with GHC-compiled code, unless one of them is put into DLLs (due to object file incompatibility).
- C->Haskell
- A lightweight tool for implementing access to C libraries from Haskell.
- HSFFIG
- Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library header (".h") and generates Haskell Foreign Functions Interface import declarations for items (functions, structures, etc.) the header defines.
-
Dynamicexceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.
Calling functions
We begin by learning how to call C functions from Haskell and Haskell functions from C. The first example consists of three files:
main.hs:
{-# LANGUAGE ForeignFunctionInterface #-} main = do print "Hello from main" c_function haskell_function = print "Hello from haskell_function" foreign import ccall safe "prototypes.h" c_function :: IO () foreign export ccall haskell_function :: IO ()
vile.c:
#include <stdio.h> #include "prototypes.h" void c_function (void) { printf("Hello from c_function\n"); haskell_function(); }
prototypes.h:
extern void c_function (void); extern void haskell_function (void);
It may be compiled and linked in one step by ghc:
ghc --make main.hs vile.c
Or, you may compile C module(s) separately and link in ".o" files (this may be preferable if you use
make and don't want to recompile unchanged sources; ghc's
--make option provides smart recompilation only for ".hs" files):
ghc -c vile.c ghc --make main.hs vile.o
You may use gcc/g++ directly to compile your C/C++ files but I recommend to do linking via ghc because it adds a lot of libraries required for execution of Haskell code. For the same reason, even if your
main routine is written in C/C++, I recommend calling it from the Haskell function
main - otherwise you'll have to explicitly init/shutdown the GHC RTS (run-time system).
We use the
foreign import declaration to import foreign routines into our Haskell world, and
foreign export to export Haskell routines into the external world. Note that
import creates a new Haskell symbol (from the external one), while
export uses a Haskell symbol previously defined. Technically speaking, both types of declarations create a wrapper that converts the names and calling conventions from C to Haskell or vice versa.
All about the
foreign declaration
The
ccall specifier in foreign declarations means the use of the C (not C++ !) calling convention. This means that if you want to write the external function in C++ (instead of C) you should add
export "C" specification to its declaration - otherwise you'll get linking errors. Let's rewrite our first example to use C++ instead of C:
prototypes.h:
#ifdef __cplusplus extern "C" { #endif extern void c_function (void); extern void haskell_function (void); #ifdef __cplusplus } #endif
Compile it via:
ghc --make main.hs vile.cpp
where "vile.cpp" is just a renamed copy of "vile.c" from the first example. Note that the new "prototypes.h" is written to allow compiling it both as C and C++ code. When it's included from "vile.cpp", it's compiled as C++ code. When GHC compiles "main.hs" via the C compiler (enabled by the
-fvia-C option), it also includes "prototypes.h" but compiles it in C mode. It's why you need to specify ".h" files in
foreign declarations - depending on which Haskell compiler you use, these files may be included to check consistency of C and Haskell declarations.
The quoted part of the foreign declaration may also be used to import or export a function under another name - for example,
foreign import ccall safe "prototypes.h CFunction" c_function :: IO () foreign export ccall "HaskellFunction" haskell_function :: IO ()
specifies that the C function called
CFunction will become known as the Haskell function
c_function, while the Haskell function
haskell_function will be known in the C world as
HaskellFunction. It's required when the C name doesn't conform to Haskell naming requirements.
Although the Haskell FFI standard tells about many other calling conventions in addition to
ccall (e.g.
cplusplus,
jvm,
net) current Haskell implementations support only
ccall and
stdcall. The latter, also called the "Pascal" calling convention, is used to interface with WinAPI:
foreign import stdcall unsafe "windows.h SetFileApisToOEM" setFileApisToOEM :: IO ()
And finally, about the
safe/
unsafe specifier: a C function imported with the
unsafe keyword is called directly and the Haskell runtime is stopped while the C function is executed (when there are several OS threads executing the Haskell program, only the current OS thread is delayed). This call doesn't allow recursively entering into the Haskell world by calling any Haskell function - the Haskell RTS is just not prepared for such an event. However,
unsafe calls are as quick as calls in the C world. It's ideal for "momentary" calls that quickly return back to the caller.
When
safe is specified, the C function is called in a safe environment - the Haskell execution context is saved, so it's possible to call back to Haskell and, if the C call takes a long time, another OS thread may be started to execute Haskell code (of course, in threads other than the one that called the C code). This has its own price, though - around 1000 CPU ticks per call.
You can read more about interaction between FFI calls and Haskell concurrency in [7].
Marshalling simple types
Calling by itself is relatively easy; the real problem of interfacing languages with different data models is passing data between them. In this case, there is no guarantee that Haskell's
Int is represented in memory the same way as C's
int, nor Haskell's
Double the same as C's
double and so on. While on some platforms they are the same and you can write throw-away programs relying on these, the goal of portability requires you to declare imported and exported functions using special types described in the FFI standard, which are guaranteed to correspond to C types. These are:
import Foreign.C.Types ( -- equivalent to the following C type: CChar, CUChar, -- char/unsigned char CShort, CUShort, -- short/unsigned short CInt, CUInt, CLong, CULong, -- int/unsigned/long/unsigned long CFloat, CDouble...) -- float/double
Now we can import and export typeful C/Haskell functions:
foreign import ccall unsafe "math.h" c_sin :: CDouble -> CDouble
Note that pure C functions (those whose results depend only on their arguments) are imported without
IO in their return type. The
const specifier in C is not reflected in Haskell types, so appropriate compiler checks are not performed.
All these numeric types are instances of the same classes as their Haskell cousins (
Ord,
Num,
Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to native Haskell types. It's very typical to write simple wrappers around imported and exported functions just to provide interfaces having native Haskell types:
-- |Type-conversion wrapper around c_sin sin :: Double -> Double sin = fromRational . c_sin . toRational
Memory management
Marshalling strings
import Foreign.C.String ( -- representation of strings in C CString, -- = Ptr CChar CStringLen) -- = (Ptr CChar, Int)
foreign import ccall unsafe "string.h" c_strlen :: CString -> IO CSize -- CSize defined in Foreign.C.Types and is equal to size_t
-- |Type-conversion wrapper around c_strlen strlen :: String -> Int strlen = ....
Marshalling composite types
A C array may be manipulated in Haskell as StorableArray.
There is no built-in support for marshalling C structures and using C constants in Haskell. These are implemented in the c2hs preprocessor, though.
Binary marshalling (serializing) of data structures of any complexity is implemented in the library module "Binary".
Dynamic calls
DLLs
because i don't have experience of using DLLs, can someone write into this section? Ultimately, we need to consider the following tasks:
- using DLLs of 3rd-party libraries (such as ziplib)
- putting your own C code into a DLL to use in Haskell
- putting Haskell code into a DLL which may be called from C code
The dark side of the I/O monad
Unless you are a systems developer, postgraduate CS student, or have alternate (and eminent!) verificable qualifications you should have no need whatsoever for this section - here is just one tiny example of what can go wrong if you don't know what you are doing. Look for other solutions!
unsafePerformIO
Do you remember that initial attempt to define
getchar?
getchar :: Char get2chars = [getchar, getchar]
Let's also recall the problems arising from this faux-definition:
- Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "unnecessary" calls to
getcharand use one returned value twice;
-.
Despite these problems, programmers coming from an imperative language background often look for a way to do this - disguise one or more I/O actions as a pure definition. Having seen procedural entities similar in appearance to:
void putchar(char c);
the thought of just writing:
putchar :: Char -> () putchar c = ...
would definitely be more appealing - for example, defining
readContents as though it were a pure function:
readContents :: Filename -> String
will certainly simplify the code that uses it. However, those exact same problems are also lurking here:
- Attempts to read the contents of files with the same name can be factored (i.e. reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Haskell considers all non-
IOfunctions to be pure and feels free to merge multiple calls with the same parameters.
-.
So, implementing supposedly-pure functions that interact with the Real World is considered to be Bad Behavior. Nice programmers never do it ;-)
Nevertheless, there are (semi-official) ways to use I/O actions inside
of pure functions. As you should remember this is prohibited by
requiring the
RealWorld "baton" in order to call an I/O action. Pure functions don't have the baton, but there is a (ahem) "special" definition that produces this baton from nowhere, uses it to call an I/O action and then throws the resulting "world" away! It's a little low-level mirror-smoke. This particular (and dangerous) definition is:
unsafePerformIO :: IO a -> a
Let's look at how it could be defined:
unsafePerformIO :: (RealWorld -> (a, RealWorld)) -> a unsafePerformIO action = let (a, world1) = action createNewWorld in a
where
createNewWorld is an private definition producing a new value of
the
RealWorld type.
Using
unsafePerformIO, you could easily write pure functions that do
I/O inside. But don't do this without a real need, and remember to
follow this rule:
- the compiler doesn't know that you are cheating; it still considers each non-
IOfunction I/O actions that are included in the
main chain. By using
unsafePerformIO we call I/O I/O action inside an I/O definition is guaranteed to execute as long as it is (directly or indirectly) inside the
mainchain - even when its result isn't used (because the implicit "world" value it returns will be used). You directly specify the order of the action's execution inside the I/O definition. Data dependencies are simulated via the implicit "world" values that are passed from each I/O action to the next.
- An I/O action inside
unsafePerformIOwill be performed only if the result of this operation is really used. The evaluation order is not guaranteed and you should not rely on it (except when you're sure about whatever data dependencies may exist).
I should also say that inside the
unsafePerformIO call you can organize
a small internal chain of I/O actions with the help of the same binding
operators and/or
do syntactic sugar we've seen above. So here's how we'd rewrite our previous (pure!) definition of
one using
unsafePerformIO:
one :: Integer]).
inlinePerformIO
inlinePerformIO has the same definition as
unsafePerformIO but with the addition of I/O I/O actions</code> c fp o u l else write</code> (flushOld c n fp o u) (newBuffer c n) 0 0 0 where {-# NOINLINE write</code> #-} write</code>)
unsafeInterleaveIO
But there is an even stranger operation:
unsafeInterleaveIO :: IO a -> IO a
Don't let that type signature fool you -
unsafeInterleaveIO also uses
a dubiously-acquired baton which it uses to set up an underground
relay-race for its unsuspecting parameter. If it happens, this seedy race
then occurs alongside the offical
main relay-race - if they collide,
things will get ugly!
So how does
unsafeInterleaveIO get that bootlegged baton? Typically by
making a forgery of the offical one to keep for itself - it can do
this because the I/O action
unsafeInterleaveIO returns will be
handed the offical baton in the
main relay-race. But one
miscreant realised there was a simpler way:
unsafeInterleaveIO :: IO a -> IO a unsafeInterleaveIO a = return (unsafePerformIO a)
Why bother with counterfeit copies of batons if you can just make them up?
At least you have some appreciation as to why
unsafeInterleaveIO is, well
unsafe! Just don't ask - to talk further is bound to cause grief and
indignation. I won't say anything more about this ruffian I...use
all the time (darn it!)
One can use
unsafePerformIO (not
unsafeInterleaveIO) to perform I/O
operations not in some predefined order but by demand. For example, the following code:
do let c = unsafePerformIO getChar do_proc c
will perform the
getChar I/O call only when the value of
c is really required
by the calling code, i.e. it this call will be performed lazily like any regular Haskell computation.
Now imagine the following code:
do let s = [unsafePerformIO getChar, unsafePerformIO getChar, unsafePerformIO getChar] do_proc s
The three characters inside this list will be computed on demand too, and this means that their values will depend on the order they are consumed. It is not what we usually want.
unsafeInterleaveIO solves this problem - it performs I/O only on
demand but allows you to define the exact internal execution order for parts
of your data structure. It is why I wrote that
unsafeInterleaveIO makes
an illegal copy of the baton:
unsafeInterleaveIOaccepts an I/O action as a parameter and returns another I/O action as the result:
do str <- unsafeInterleaveIO myGetContents
unsafeInterleaveIOdoesn't perform any action immediately, it only creates a closure of type
awhich upon being needed will perform the action specified as the parameter.
- this action by itself may compute the whole value immediately...or use
unsafeInterleaveIOagain to defer calculation of some sub-components:
myGetContents = do c <- getChar s <- unsafeInterleaveIO myGetContents return (c:s)
This code will be executed only at the moment when the value of
str is
really demanded. In this moment,
getChar will be performed (with its
result assigned to
c) and a new lazy-I/O closure will be created - for
s.
This new closure also contains a link to a
myGetContents call.
Then the list cell is returned. It contains
Char that was just read and a link to
another
myGetContents call as a way to compute rest of the list. Only at the
moment when the next value in the list is required will this operation be performed again.
As a final result, we can postpone the read of the second
Char in the list before
the first one, but have lazy reading of characters as a whole - bingo!
PS: of course, actual code should include EOF checking; also note that you can read multiple characters/records at each call:
myGetContents = do c <- replicateM 512 getChar s <- unsafeInterleaveIO myGetContents return (c++s)
and we can rewrite
myGetContents to avoid needing to
use
unsafeInterleaveIO where it's called:
myGetContents = unsafeInterleaveIO $ do c <- replicateM 512 getChar s <- myGetContents return (c++s)
Welcome to the machine: the actual GHC implementation
A little disclaimer: I should say that I'm not describing here exactly what a monad is (I don't even completely understand it myself) and my explanation shows only one possible way to implement the I/O monad in Haskell. For example, the hbc compiler and the Hugs interpreter implements the I/O monad via continuations [9]. I also haven't said anything about exception handling, which is a natural part of the "monad" concept. You can read the All About Monads guide to learn more about these topics.
But there is some good news:
- the I/O monad understanding you've just acquired will work with any implementation and with many other monads. You just can't work with
RealWorldvalues directly.
- the I/O monad implementation described here is similar to what GHC uses:" I/O actions via fake "state of the world" values, you can now more easily understand and write low-level implementations of GHC I/O operations.
Of course, other compilers e.g. yhc/nhc (jhc, too?) define
IO in other ways.
The Yhc/nhc98 implementation
data World = World newtype IO a = IO (World -> Either IOError a)
This implementation makes the
World disappear somewhat[10],.
Further reading
[1] This tutorial is largely based on Simon Peyton Jones's paper Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell. I hope that my tutorial improves his original explanation of the Haskell I/O system and brings it closer to the point of view of new Haskell programmers. But if you need to learn about concurrency, exceptions and FFI in Haskell/GHC, the original paper is the best source of information.
[2] You can find more information about concurrency, FFI and STM at the GHC/Concurrency#Starting points page.
[3] The Arrays page contains exhaustive explanations about using mutable arrays.
[4] Look also at the Using monads page, which contains tutorials and papers really describing these mysterious monads.
[5] An explanation of the basic monad functions, with examples, can be found in the reference guide A tour of the Haskell Monad functions, by Henk-Jan van Tuyl.
[6] Official FFI specifications can be found on the page The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report
[7] Using FFI in multithreaded programs described in paper Extending the Haskell Foreign Function Interface with Concurrency
[8] This particular behaviour is not a requirement of Haskell 2010, so the operation of
seq may differ between various Haskell implementations - if you're not sure, staying within the I/O monad is the safest option.
[9] How to Declare an Imperative by Phil Wadler provides an explanation of how this can be done.
[10] The
RealWorld type can even be replaced e.g. Functional I/O Using System Tokens by Lennart Augustsson.
Do you have more questions? Ask in the haskell-cafe mailing list.
To-do list
If you are interested in adding more information to this manual, please add your questions/topics here.
Topics:
fixIOand
mdo
Qmonad
Questions:
- split
(>>=)/
(>>)/return section and
dosection, more examples of using binding operators
IORefdetailed explanation (==
const*), usage examples, syntax sugar, unboxed refs
- explanation of how the actual data "in" mutable references are inside
RealWorld, rather than inside the references themselves (
IORef,
IOArray& co.)
- control structures developing - much more examples
unsafePerformIOusage examples: global variable,
ByteString, other examples
- how
unsafeInterLeaveIOcan be seen as a kind of concurrency, and therefore isn't so unsafe (unlike
unsafeInterleaveSTwhich really is unsafe)
- discussion about different senses of
safe/
unsafe(like breaking equational reasoning vs. invoking undefined behaviour (so can corrupt the run-time system))
- actual GHC implementation - how to write low-level routines based on example of
newIORefimplementation
This manual is collective work, so feel free to add more information to it yourself. The final goal is to collectively develop a comprehensive manual for using the I/O monad. | http://wiki.haskell.org/index.php?title=IO_inside&diff=cur&oldid=4634 | CC-MAIN-2021-17 | refinedweb | 7,054 | 58.52 |
This CTF was a ton of fun but very difficult. I played with my team, (Crusaders of Rust), and we ended up getting 10th place.
We almost full cleared web, getting every challenge except
njs (not counting PainterHell because that challenge was insane). I'll write about everything I had a direct hand in solving.
go-fs was the first web challenge that I solved, and it was a little difficult because I can't read Go. The challenge links to a website which has 6 files. One of the files is called "flag", but trying to open it results in the message: "No flag for you!"
The source code is provided, and the most important functionality of the script was blocking requests to
/flag. You can see how it was implemented:
func(w r * { w.Header().Set("Served-by", VERSION) w = &wrapperW{w} fileServ.ServeHTTP(w, r) }) func(w r * { w.Header().Set("Served-by", VERSION) w.Write([]byte(`No flag for you!`)) })
So, any request that starts with
/flag is blocked by the 2nd http handler function. I first tried some obvious tricks like
curl --path-as-is to try and get the flag, but the http library in Go has some sanitization and checks.
If the URL contains relative pathing, Go automatically attempts to redirect you to the correct location by way of a 301 Permanent Redirect. So, making a request to the url above would just redirect you to
/flag, which would be blocked.
I was stuck here for a bit until I dug deep in the documentation for Go and saw this.
The path and host are used unchanged for CONNECT requests.
Huh. I didn't know what a CONNECT request was before but apparently Go doesn't sanitize these requests. So, making a connection with this request type, we get the flag.
[email protected]:~$ curl -X CONNECT --path-as-is justCTF{"This bug seems to be not exploitable, at least not with a sane filesystem implementation.": yet, here you are!} ~~~~~~~~~~~~~~ Generated by Go FileServ v0.0.0b ~~~~~~~~~~~~~ (because writing file servers is eeaaassyyyy & fun!!!1111oneone)
justCTF{"This bug seems to be not exploitable, at least not with a sane filesystem implementation.": yet, here you are!}
Funnily enough, by googling this flag, we find this GitHub issue by the author, talking about a different bug. Well apparently, this was the unintended solution.
Computeration was a challenge with an unintended solution, so the admins ended up making Computeration Fixed with the bug patched. The challenge description links both the website and a place to report URLs to the admin, which makes me think it is some sort of XSS or data exfiltration challenge. Something interesting is that the website only runs in client-side, which means the data must be stored on the client.
Here's the HTML source of the website.
<html> <head> <title>Computeration</title> <meta charset="UTF-8"> </head> <body> <h1>Computeration</h1> <h2>My Notes</h2> <div id="notesDiv"></div> <br> <button onclick="clearNotes()">Clear notes</button> <h2>Search</h2> Search in my notes <input id="searchNoteInp"></input> <button onclick="searchNote()">OK</button> <div id="notesFound"></div> <script> let notes = JSON.parse(localStorage.getItem('notes')) || []; function clearNotes(){ notes = []; localStorage.setItem('notes', '[]'); notesDiv.innerHTML = ''; notesFound.innerHTML=''; } function insertNote(title, content){ notesDiv.innerHTML += `<details><summary>${title}</summary><p>${content}</p>` } for(let note of notes){ insertNote(note.title, note.content); } function searchNote(){ location.hash = searchNoteInp.value; } onhashchange = () => { const reg = new RegExp(decodeURIComponent(location.hash.slice(1))); const found = []; notes.forEach(e=>{ if(e.content.search(reg) !== -1){ found.push(e.title); } }); notesFound.innerHTML = found; } function addNote(){ const title = newNoteTitle.value; const content = newNoteContent.value; insertNote(title,content); notes.push({title, content}); localStorage.setItem('notes', JSON.stringify(notes)); newNoteTitle.value = ''; newNoteContent.value = ''; } </script> <h2>New Note</h2> Title: <input id="newNoteTitle"/> <br> Content: <textarea id="newNoteContent"></textarea> <br> <button onclick="addNote()">Add</button> </body> </html>
Looking at this source code, I immediately see the bug. I'll explain the bug in the next section. But, I remember that this is the broken challenge, so I look for an easier way to solve it. The HTML between the broken and fixed is the exact same, so I check out the report functionality.
They give a link where you can submit URLs for the admin to visit. I quickly shoot a request for the admin to view a requestbin, and I see something very interesting.
Huh. It seems like they left the referer HTTP header, showing us a secret URL. Navigating to the link, we are redirected to the website I submitted. Opening up the source code, we see:
And we see the flag. Pretty easy (unintended solution), but the flag also gives a major hint to the real solution.
justCTF{cross_origin_timing_lol}
Now, onto the actual challenge. We ended up getting 2nd blood on this challenge! Like I said earlier, I immediately knew the solution when I looked at it. I remembered reading a writeup that had the exact same vulnerability. I go and Google for it again, and lo and behold, the writeup was made by terjanq from justCatTheFish 🤔...
This is why it's always good to read your writeups 🙃
Anyway, the real vulnerability in this challenge is right here:
onhashchange = () => { const reg = new RegExp(decodeURIComponent(location.hash.slice(1))); const found = []; notes.forEach(e=>{ if(e.content.search(reg) !== -1){ found.push(e.title); } }); notesFound.innerHTML = found; }
Specifically, we can see on the 2nd line, a regex is created from the
location.hash string (
location.hash refers to the
#something in the URL). Well, the vulnerability here is that it lets us create our own regexes, so we can create a malicious one.
This is known as ReDoS, or Regular Expression Denial of Service. We can craft a malicious regex such that if it matches the content of the note, the website freezes.
Looking for a nice PoC, I find this article. They use the regex
^(?=(some regexp here))((.*)*)*salt$, where
salt is a long stream of random characters. The longer it is, the slower the execution will be, up to the point where we can crash the page.
I load up Regexr to test a regex. I create this regex:
^(?=justCTF{.*})((.*)*)$
And see this:
Perfect. So now, we have a way to crash the page if we match the flag. But how do we measure if the page crashes? Well, we can just check if our page crashes as well. This comes from an attack named XS-Leaks, specifically "blocking the event loop". Read more about it here.
So, since the website first searches when it detects a hash change, we have to load the page in an iframe first with a hash. Then, we can change the URL hash of the iframe and insert the regex. Once we do that, there's a number of ways to check whether the website crashed the event loop.
My initial setup was changing the URL of the iframe to another site, and then sending to a webhook whether it was able to redirect. After testing, it turned out I didn't even need this part since the site wouldn't send that second request anyway.
Well, here was my solution code:
<!DOCTYPE > <html> <body> <iframe src=" id="iframe"></iframe> <img src=" style="display:none" /> <!-- image that takes like forever to load --> <script> let iframe = document.getElementById("iframe"); let known = "no_referer_typo_ehhhhh"; let check = `[^h]`; let gen = (c) => { return `^(?=justCTF{${c}.*})((.*)*)$` } let check = () => { iframe.src = " + gen(known + check); setTimeout(() => { iframe.src = " let loaded = false; iframe.onload = () => loaded = true; setTimeout(() => { console.log(loaded); fetch(` + known + check + '&matches=' + !loaded); }, 2500); }, 400); } fetch(` + known + check); let first = true; iframe.onload = () => { if(!first) return; first = false; check(); } </script> </body> </html>
I ran into a problem where the page would load, but nothing would run. I assumed this was because the website was exiting right as soon as it finished loading, so I embedded a 200MB image in the page, making it run all of my code fine. The website first makes a request back telling me it loaded the page. Then, it loads the page with just an empty hash. Once it has loaded, it runs
iframe.onload, which runs the
check function.
It generates a malicious regex with what is already known of the flag plus the characters I want to test. Since we're running regex, we can use binary search, searching for half of the characters ([a-m]) and splitting at each request until we find the right character.
At this point, I could have automatically coded it to do the binary search, but I was too lazy, so just did everything by hand. It just took 25 minutes.
Eventually, we get the flag.
justCTF{no_referer_typo_ehhhhhh}
Baby CSP was technically the hardest web challenge (besides PainterHell) in the competition, even though I'd say it was much easier than the intended solution for
njs (finding a 0day by looking through the source).
Visiting the website, the source is provided:
<?php require_once("secrets.php"); $nonce = random_bytes(8); if(isset($_GET['flag'])){ if(isAdmin()){ header('X-Content-Type-Options: nosniff'); header('X-Frame-Options: DENY'); header('Content-type: text/html; charset=UTF-8'); echo $flag; die(); } else{ echo "You are not an admin!"; die(); } } for($i=0; $i<10; $i++){ if(isset($_GET['alg'])){ $_nonce = hash($_GET['alg'], $nonce); if($_nonce){ $nonce = $_nonce; continue; } } $nonce = md5($nonce); } if(isset($_GET['user']) && strlen($_GET['user']) <= 23) { header("content-security-policy: default-src 'none'; style-src 'nonce-$nonce'; script-src 'nonce-$nonce'"); echo <<<EOT <script nonce='$nonce'> setInterval( ()=>user.style.color=Math.random()<0.3?'red':'black' ,100); </script> <center><h1> Hello <span id='user'>{$_GET['user']}</span>!!</h1> <p>Click <a href="?flag">here</a> to get a flag!</p> EOT; }else{ show_source(__FILE__); } // Found a bug? We want to hear from you! /bugbounty.php // Check /Dockerfile
The website takes three URL parameters,
flag,
alg, and
user. If the
flag parameter is found, it'll print the flag if we are the admin. We can't see how the check works, so we can assume that we need to get XSS on the page and make a request with the
flag parameter.
The 2nd parameter,
alg, is plugged into the
hash function, hashing the
$nonce variable 10 times, replacing the
md5 function. The 3rd parameter,
user is an obvious XSS vector since is just outputted directly on the page. However, it only allows strings up to 23 characters...
I'll quickly explain nonces and CSP. CSP, or Content Security Policy, is an extra layer of security website operators can place on their site to help prevent XSS. It basically regulates what kind of scripts, images, stylesheets, websites, etc. that are allowed to be embed on the site.
The bottom of the website shows two URLs,
/Dockerfile, and
/bugbounty.php. Checking the Dockerfile shows us that PHP development mode is enabled, which enables things like warnings. Interesting...
/bugbounty.php is obviously just a page to redirect the admin to our XSS.
If we can find a way to break the nonce function, we can bypass the CSP and embed our own script. We can provide a nonexistent algorithm for the hash function, but then it just spits out a warning and returns false, defaulting to using md5.
Anyway, looking up the list of hash algorithms PHP supports, we find this page. I ended up checking all of the algorithms, and found that the
adler32 algorithm created a nonce which collides around every 300 attempts.
But, there's a problem - the 23 character limit. A script tag with a nonce would look something like this:
<script nonce=12345678></script>, which is already 32 characters. Obviously, something is up.
This is probably where a lot of teams got lost, and I also got lost here for like an hour. Eventually, I got kinda mad, and started slamming my keyboard with characters and hitting CTRL+V a lot in the alg parameter. Magically, CSP disappeared!
We get two warnings:
Warning: hash(): Unknown hashing algorithm: 1231112311 in /var/www/html/index.php on line 21 Warning: Cannot modify header information - headers already sent by (output started at /var/www/html/index.php:21) in /var/www/html/index.php on line 31
The first warning makes sense - I passed in a nonexistent hash algorithm. But the 2nd warning was interesting. The source of the bug is that you cannot send the response before you send headers. The response is the text that shows up on the page, and the headers are where CSP lives. PHP has a 4096 character buffer for the response where it stores text, but if that buffer is overrun, it'll be sent automatically (I think...).
Creating a large warning message (that repeats 10 times) overruns this buffer with too many characters, making it send automatically. Once any bit of the response has been sent, the header information can't be modified, which is the 2nd warning that we see. So now, there's no longer a CSP, and we can just do straight XSS.
We do have to find a 23 byte XSS vector, but one that I know is
<svg/onload=eval(name)>, and this is indeed 23 characters.
name comes from the
window.name property, which is a custom name variable which browsers can set on iframes and windows.
So, I whip up a quick website to send over this algorithm, XSS vector, and set a custom JS payload on the page. I again get the problem where the bot leaves immediately after the page begins loading, so I place the large image again.
We get a request to the admin, and it sends our request back with the flag! Well, actually it sends back "You are not the admin"... Hm...
I ask the admin what's going on, and he ends up telling me that the admin's cookies are set to
Lax mode. Here's an excerpt from the Mozilla Docs:).
Lax replaced None as the default value in order to ensure that users have reasonably robust defense against some classes of cross-site request forgery (CSRF) attacks.
So, making a fetch request (even with credentials included) don't send the lax cookies. I end up fixing this by changing my payload to open a new tab from the XSS, and reading the tab contents from there. Here's my final payload:
<!DOCTYPE > <html> <body> <iframe name="let x = window.open('/?flag'); x.onload = () => {fetch(' + x.document.documentElement.innerHTML)}" src=" <iframe src=" <img src=" style="display:none" /> </body> </html>
Checking my requestbin, we have the flag:
justCTF{
D0cker was a challenge that was almost like Docker trivia - you spoke with an "oracle", and you had to answer questions about the Docker environment to pass and get the flag.
Connecting to the server, it asks for a hash from
hashcash to limit requests. Submitting one, we connect to the server, and it drops us into a shell.
[email protected]:~$ nc docker-ams3.nc.jctf.pro 1337 Access to this challenge is rate limited via hashcash! Please use the following command to solve the Proof of Work: hashcash -mb26 vqhmexrh Your PoW: 1:26:210201:vqhmexrh::0Ia1kPVatH+cpzQG:0000000008/hM 1:26:210201:vqhmexrh::0Ia1kPVatH+cpzQG:0000000008/hM [*] Spawning a task manager for you... [*] Spawning a Docker container with a shell for ya, with a timeout of 10m :) [*] Your task is to communicate with /oracle.sock and find out the answers for its questions! [*] You can use this command for that: [*] socat - UNIX-CONNECT:/oracle.sock [*] PS: If the socket dies for some reason (you cannot connect to it) just exit and get into another instance groups: cannot find name for group ID 1000 I have no [email protected]:/$
It passes a command to communicate with the oracle,
socat - UNIX-CONNECT:/oracle.sock. Running this command, we get the first question:
I have no [email protected]:/$ socat - UNIX-CONNECT:/oracle.sock socat - UNIX-CONNECT:/oracle.sock Welcome to the ______ _____ _ | _ \ _ | | | | | | | |/' | ___| | _____ _ __ | | | | /| |/ __| |/ / _ \ '__| | |/ /\ |_/ / (__| < __/ | |___/ \___/ \___|_|\_\___|_| oracle! I will give you the flag if you can tell me certain information about the host (: ps: brute forcing is not the way to go. Let's go! [Level 1] What is the full *cpu model* model used?
It first asks for the full cpu model. Easy enough, typing
lscpu, we can see the CPU model name, "Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz". Sending this, we get the next question.
[Level 1] What is the full *cpu model* model used? Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz That was easy :) [Level 2] What is your *container id*?
Hm, our container id is where I had to start researching. Docker container ids have a length of 64, and the hostname that we see (c55303db3034) is only the first 12 characters. So, googling how to find the Docker container id in a container, I find that you can run
cat /proc/self/cgroup to find the container id.
I have no [email protected]:/$ cat /proc/self/cgroup cat /proc/self/cgroup 12:pids:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 11:hugetlb:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 10:memory:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 9:rdma:/ 8:blkio:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 7:perf_event:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 6:net_cls,net_prio:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 5:cpu,cpuacct:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 4:freezer:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 3:cpuset:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 2:devices:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 1:name=systemd:/docker/c55303db30344916536c61da90c44f458ff48fba551c90d3a4e889f480100250 0::/system.slice/containerd.service I have no [email protected]:/$
?
Huh. This one is pretty tricky. It asks us for the contents of the
/secret file on our machine. But, this file changes every time we get to this question. We don't have multiple connections, so how can we read the file without exiting the oracle?
Well, we can use Python to do this. I write a quick command:
python3 -c "import time; time.sleep(15); print(open('/secret', 'r').read())" &. This command runs Python, sleeps for 15 seconds, then prints out
/secret, all in the background (
/secret file not exist.
&). This step messed up a couple of times, as there was a bug with the challenge that made the
?
Getting that right, we get the next question:
[Level 4] Okay but... where did I actually write it? What is the path on the host that I wrote the /secret file to which then appeared in your container? (ps: there are multiple paths which you should be able to figure out but I only match one of them)
This one is again even more difficult, asking us where on the host system the wrote the
/secret file for it to show up on our own system. After playing around a bit with the
procfs, I end up finding
/proc/self/mountinfo. The contents of that file are: 1002 1001 0:65 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw 1003 1001 0:66 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 1004 1003 0:67 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 1005 1001 0:68 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro 1006 1005 0:69 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755 1007 1006 0:31 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,xattr,name=systemd 1008 1006 0:34 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,memory 1009 1006 0:35 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,cpu,cpuacct 1010 1006 0:36 / /sys/fs/cgroup/rdma ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,rdma 1011 1006 0:37 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,freezer 1012 1006 0:38 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,perf_event 1013 1006 0:39 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,pids 1014 1006 0:40 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,devices 1015 1006 0:41 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,net_cls,net_prio 1016 1006 0:42 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,blkio 1017 1006 0:43 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,hugetlb 1018 1006 0:44 /docker/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33 /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,cpuset 1019 1003 0:64 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw 1020 1001 252:1 /tmp/pwn-docker/shentssegesslatinglopposeelmerth.sock /oracle.sock rw,relatime - ext4 /dev/vda1 rw 1021 1001 252:1 /var/lib/docker/containers/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw 1022 1001 252:1 /var/lib/docker/containers/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw 1023 1001 252:1 /var/lib/docker/containers/762366f3d2c8dce9dbbf44c96ba725db7f393acef5299764dc8d84a14526bd33/hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw 1024 1003 0:63 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k 953 1003 0:67 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 954 1002 0:65 /bus /proc/bus ro,relatime - proc proc rw 955 1002 0:65 /fs /proc/fs ro,relatime - proc proc rw 956 1002 0:65 /irq /proc/irq ro,relatime - proc proc rw 957 1002 0:65 /sys /proc/sys ro,relatime - proc proc rw 958 1002 0:65 /sysrq-trigger /proc/sysrq-trigger ro,relatime - proc proc rw 959 1002 0:70 / /proc/acpi ro,relatime - tmpfs tmpfs ro 960 1002 0:66 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 961 1002 0:66 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 962 1002 0:66 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 963 1002 0:66 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 964 1002 0:71 / /proc/scsi ro,relatime - tmpfs tmpfs ro 965 1005 0:72 / /sys/firmware ro,relatime - tmpfs tmpfs ro
Quite a lot of text. The first line is the important one though:
This line basically says that the root folder (/) of the container is mounted using
overlayfs. You can read more about overlayfs here. This page also tells us that the writable bit of the filesystem is on the "upper" layer, so the
upperdir folder in the first line is the one we need to submit.
At this point, I wanted to make a script to make things easier. Since there was Python, we could use sockets to communicate, but I wanted to try out something my teammate suggested: mpwn, a single file standalone Python library that emulates pwntools.
When starting the container, all you have to do is copy and paste
mp.py to the
/tmp dir, then run this script to answer the first four questions:
from mp import * import time import os open("start.sh", "w").write("#!/bin/bash\nsocat - UNIX-CONNECT:/oracle.sock") os.system("chmod +x start.sh") cpu = [l for l in open("/proc/cpuinfo", "r").read().split("\n") if l.startswith("model name")][0].split(": ")[1] containerid = [l for l in open("/proc/self/cgroup", "r").read().split("\n") if "/docker/" in l][0].split("/docker/")[1] secretloc = open("/proc/self/mountinfo").read().split("\n")[0].split("upperdir=")[1].split(",")[0] + "/secret" p = process("./start.sh") for _ in range(11): print(p.recvline()) print(cpu) p.sendline(cpu) print(p.recvline()) print(p.recvline()) print(containerid) p.sendline(containerid) print(p.recvline()) print(p.recvline()) time.sleep(3) secret = open("/secret", "r").read() print(secret) p.sendline(secret) print(p.recvline()) p.sendline(secretloc) p.interactive()
Running this script, we get to level 5.
$ [Level 5] Good! Now, can you give me an id of any *other* running container?
Well, this one is asking us to get the id of another running container... While there probably is a way to find this by looking through the system, there's an easier solution: just load up another instance and give it the container id 🙃
Finally we get to the last question, and the real challenge:
$ [Level 5] Good! Now, can you give me an id of any *other* running container? 44b6790cf522512823fb19641d2920f30561cc17ea9c076927954db4062fc256 44b6790cf522512823fb19641d2920f30561cc17ea9c076927954db4062fc256 [Level 6] Now, let's go with the real and final challenge. I, the Docker Oracle, am also running in a container. What is my container id?
Huh. This one is real difficult. Checking back at
/proc/self/mountinfo, we see this line:
1020 1001 252:1 /tmp/pwn-docker/shentssegesslatinglopposeelmerth.sock /oracle.sock rw,relatime - ext4 /dev/vda1 rw
No instance of a container id. Curiously, the filename of the socket, when converted from ASCII to hex, is indeed 64 characters. But it's not the right id. On paper, this seems impossible. It seems like we need to find the oracle's container and run
cat /proc/self/cgroup to find their id, but that is obviously not the case. I'm guessing a lot of people got stuck here.
The first thing I tried was run socat with every debug option available:
socat -dddd -lu -v -D - unix-connect:/oracle.sock. While I did find the sock filename again, no 64 character id to be found. I tried using Python's socket library,
import socket; sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM); sock.connect("/oracle.sock"), but that only showed the socket filename again. I tried looking through the
/sys/block and
/sys/fs parts of the filesystem, but found nothing.
Eventually, I came to realize that the container was somewhere on the same harddrive, mapped to our container. I started to investigate ways to list harddrives and blocks. I tried
df -h,
lsblk, reading blocks directly with
dd, but no cigar. Eventually, I tried the
du command.
du measures how much diskspace a file or folder uses, and I was going to use this to see if I could find anything interesting about the
/oracle.sock file. But, running this command without arguments instead looped through all the files in the container and crashed my instance. Oops.
But, I ended up looking through the files anyway. There were tens of thousands of lines, so I only quickly skimmed through it, but then I found a couple of 64 character strings...
0 ./kernel/slab/:A-0001088/cgroup/signal_cache(797:networkd-dispatcher.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(1745:snapd.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(849:ssh.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(1071:docker.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(1045:cloud-config.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(680:containerd.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(2034:[email protected]) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(589:systemd-udevd.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(2048:session-2.scope) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(576:cloud-init.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(11527:44b6790cf522512823fb19641d2920f30561cc17ea9c076927954db4062fc256) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(693:cron.service) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(1108:init.scope) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(8827:2855b404af1e705d8431f1983577a511321e2ea0cc49d64c5dd8d4262aee63b5) 0 ./kernel/slab/:A-0001088/cgroup/signal_cache(862:supervisor.service) 0 ./kernel/slab/:A-0001088/cgroup
One of these was my container id, but I didn't know what the other one was. Well, I booted up a new instance, went to this folder, and extracted the different 64 character value. I crossed my fingers, ran my script again, and hoped for the best:
[Level 6] Now, let's go with the real and final challenge. I, the Docker Oracle, am also running in a container. What is my container id? 2855b404af1e705d8431f1983577a511321e2ea0cc49d64c5dd8d4262aee63b5 2855b404af1e705d8431f1983577a511321e2ea0cc49d64c5dd8d4262aee63b5 [Levels cleared] Well done! Here is your flag! justCTF{maaybe-Docker-will-finally-fix-this-after-this-task?} Good job o/ <- received EOF <- received EOF
Nice.
justCTF{maaybe-Docker-will-finally-fix-this-after-this-task?}
In my opinion, a very high quality CTF. There were some problems with some challenges, and I did waste like 6 hours on PainterHell, decompiling the plugin, rewriting and hotpatching different sections of the unofficial decompiler, and finding the backdoor in the TF2 plugin only to find out later that my version was unsolvable. Damn.
I also spent a lot of hours on njs, but apparently that one was an unknown njs issue... I didn't read the source code closely enough, I guess.
Well, the CTF was very fun and I learned a lot. We were 5th up until the last hour when everyone stopped flag hoarding and we dropped to 10th. Damn. Anyway, thanks justCatTheFish for a great CTF. | https://brycec.me/posts/justctf2020_writeups | CC-MAIN-2022-21 | refinedweb | 4,873 | 56.76 |
- Type:
Bug
- Status: Closed
- Priority:
P1: Critical
- Resolution: Done
- Affects Version/s: 5.7.2, 5.8.0
- Fix Version/s: 5.9.0 Beta 2
- Component/s: Build tools: qmake
- Labels:
- Commits:1b599660219d8b54df5d075bf7aa69bfd3bf282b
Using Q_INTERFACES in an application built against Qt 5.8.0 on Windows 8.1 (Visual Studio 2015) now results in the following error during MOC:
error : Undefined interface
An example of code that fails for me:
#include <QObject> #include <QQmlParserStatus> class Foo : public QObject, QQmlParserStatus { Q_OBJECT Q_INTERFACES(QQmlParserStatus) };
This is a regression in 5.8.0, but does not appear for me in Qt 5.6.2 or Qt 5.7.1.
This behaviour has not been observed on macOS, iOS, Android, or Ubuntu builds.
Note that this occurs both in a shadow build and an in-source build.
Related:
QTBUG-36107 | https://bugreports.qt.io/browse/QTBUG-59460?gerritIssueStatus=Open | CC-MAIN-2019-51 | refinedweb | 137 | 58.48 |
Checkboxmodel inside grid with editor
Hello,
looks like I have found a bug in extjs 4.1.1a: in case of using grid with Checkboxmodel and some editor for cell there happens deselection of all nonactive rows and selecting only active(editing) row.'), selModel: Ext.create("Ext.selection.CheckboxModel", { checkOnly : true }),() });
- Join Date
- Mar 2007
- Location
- Gainesville, FL
- 39,144
- Vote Rating
- 1240
Not truly a bug IMO. When you select a row outside the checkbox (in another column) it will only select that row, multi selection is only if you click on the check box so when you try to edit it will have the same behavior as you are click on outside the checkbox. You can give the mode to the selection model to multi or simple but when you try to start edit it will toggle the selection on and.
checkbox models and cellediting clash!
If I have a multi-selection checkbox grid; (checkbox models are normally multi-select). I set checkOnly : true
Are you saying that when I activate cellediting on another column, it must interfere with the checks I have selected?
4.1.1 did not have the issue, 4.1.3 still has the problem. I think this is a big problem. Test the code again. Please explain.
Not sure if any solution exists for this...I have a similar issue using extjs 4.1.1
I have an editable grid with a checkboxselection model the checkOnly attribute is set to true.
When I click in an editable cell and then tab over to the next editable cell the row is getting selected.
As a result the logic that I have in the select listener is getting executed for the row something that I want to execute only when the check box is selected not when I tab over to the next editable cell/column.
Any suggestions?
Thanks,
Gus
I too came across this problem and did the following:
Code:
var clickedColumnIndex = -1; Ext.create('Ext.grid.Panel', { (...) listeners: { "cellclick": function (sender, td, cellIndex, record, tr, rowIndex, e, eOpts) { clickedColumnIndex = cellIndex; }, "beforedeselect": function (rowmodel, record, index, eOpts) { return (clickedColumnIndex === 0); //prevents deselecting all rows when editing other cells } }, selModel: Ext.create('Ext.selection.CheckboxModel', { checkOnly: true, mode: 'MULTI' }), plugins: [ Ext.create('Ext.grid.plugin.CellEditing', { clicksToEdit: 1 }) ] });
Thanks for the reply, I did end up doing something similar, I used the statement in the beforeselect and beforedeselect listeners:
if (Ext.getCmp('PORInquiryGrid').editingPlugin.activeColumn != null &&
Ext.getCmp('PORInquiryGrid').editingPlugin.activeColumn.dataIndex != null)
{
return false;
}
On the other hand when I tried using the arrow keys to navigate through the editable grid the same issue happened again i.e. the row got selected and I could not figure a way to workaround that.
It is kind of frustrating for some of the configuration options not to work and there is no clear workaround.
@grekpe Thanks for sharing your solution, it's working great (using ExtJS 4.2.1).
However, I think it would make sense to add some property to CheckboxSelection Model that would force user to (un)check the Checkbox in order to change the selection.
The way it is now is really frustrating IMO :-) | https://www.sencha.com/forum/showthread.php?247510-Checkboxmodel-inside-grid-with-editor&s=91bc262e9af6f06d0e63c6d7adf47fbc&p=978420 | CC-MAIN-2016-30 | refinedweb | 531 | 53.21 |
Bulk you have to really know your SQL (well to do it in SQL).
Let's say you have to read data from an RSS feed, parse it and then load it into SQL. Let's assume further that this feed updates every 2 hours. It would be a trivial task to write a C# app that reads and parses the feed. One crude way to upload this data would be to do a single row insert for each data element. This would be terribly inefficient. The other option would be to use .Net framework's SqlBulkCopy class.
The basic template would be something like
private void WriteToDatabase() { // get your connection string string connString = ""; // connect to SQL using (SqlConnection connection = new SqlConnection(connString)) { // make sure to enable triggers // more on triggers in next post SqlBulkCopy bulkCopy = new SqlBulkCopy ( connection, SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.UseInternalTransaction, null ); // set the destination table name bulkCopy.DestinationTableName = this.tableName; connection.Open(); // write the data in the "dataTable" bulkCopy.WriteToServer(dataTable); connection.Close(); } // reset this.dataTable.Clear(); this.recordCount = 0; }
The above code snippet shows you the API usage. But before you actually do that, you need to follow a couple of steps to setup your data table.
First, let's look at a simple record structure (as reflected in C# class):
using System; using System.Data; using System.Configuration; /// <summary> /// Summary description for MyRecord /// </summary> public class MyRecord { public int TestInt; public string TestString; public MyRecord() { } public MyRecord(int myInt, string myString) { this.TestInt = myInt; this.TestString = myString; } }
Now, let's start dissecting the class that we will use to upload the data:
using System; using System.Data; using System.Collections.Generic; using System.Data.SqlClient; using System.Configuration; using System.IO; namespace SqlExamples.FileLoader { /// <summary> /// Summary description for BulkUploadToSql /// </summary> public class BulkUploadToSql { private List<MyRecord> internalStore; protected string tableName; protected DataTable dataTable = new DataTable(); protected int recordCount; protected int commitBatchSize;
Note that we have an internal List data structure as well as the DataTable. This is redundant and you can avoid using the internalStore if your application does not need to massage the data before it's sent to SQL.
I then define 2 private constructors. The reason is that we want to use the factory pattern to return our object to the caller.
private BulkUploadToSql( string tableName, int commitBatchSize) { internalStore = new List<MyRecord>(); this.tableName = tableName; this.dataTable = new DataTable(tableName); this.recordCount = 0; this.commitBatchSize = commitBatchSize; // add columns to this data table InitializeStructures(); } private BulkUploadToSql() : this("MyTableName", 1000) {}
Note that we set the commit batch size. This is a very important factor that needs to be fine tuned for your database. What this defines is the number of records that we would send in one shot to the database.
The next step is to Initialize the data table with columns that reflect the actual table structure.
private void InitializeStructures() { this.dataTable.Columns.Add("TI", typeof(Int32)); this.dataTable.Columns.Add("TS", typeof(string)); }
I then provided a factory method to load data into my internal structure from a data source. In the example code below, I use a Stream, but this can be any data source from where you wish to populate your data.
public static BulkUploadToSql Load(Stream dataSource) { // create a new object to return BulkUploadToSql o = new BulkUploadToSql(); // replace the code below // with your custom logic for (int cnt = 0; cnt < 10000; cnt++) { MyRecord rec = new MyRecord ( cnt, string.Format("string{0}", cnt) ); o.internalStore.Add(rec); } return o; }
This would make sure that our class is properly initialized and loaded with data. Once the caller has a valid object, they can now "Flush" the data as shown below:
public void Flush() { // transfer data to the datatable foreach (MyRecord rec in this.internalStore) { this.PopulateDataTable(rec); if (this.recordCount >= this.commitBatchSize) this.WriteToDatabase(); } // write remaining records to the DB if (this.recordCount > 0) this.WriteToDatabase(); } private void PopulateDataTable(MyRecord record) { DataRow row; // populate the values // using your custom logic row = this.dataTable.NewRow(); row[0] = record.TestInt; row[1] = record.TestString; // add it to the base for final addition to the DB this.dataTable.Rows.Add(row); this.recordCount++; }
In the example above, the call to Flush() actually massages the data (and at the same time loads it into the actual data table). As I mentioned before, you can actually skip this step if your application does not require massaging.
As a example of an app that uses this class:
using System; using System.Collections.Generic; using System.Text; using SqlExamples.FileLoader; using System.IO; namespace DemoApp { class Program { static void Main(string[] args) { using (Stream s = new StreamReader(@"C:\TestData.txt")) { BulkUploadToSql myData = BulkUploadToSql.Load(s); myData.Flush(); } } } }
As always, this is JUST demo code to explain a concept. This is NOT production quality code and please make sure to follow the coding guidelines in your team.
Happy coding.... | https://docs.microsoft.com/en-us/archive/blogs/nikhilsi/bulk-insert-into-sql-from-c-app | CC-MAIN-2020-24 | refinedweb | 812 | 50.94 |
Heya there,
i recently started to learn the C++ language and i am trying out things.
The deal here is i want to replace 4 spaces with a tab.The code seems fine to me,but its not proper as when i run it the 4 spaces are replaced by 6 spaces,5-7 and so on.I am at a dead end and any help would be appreciated.Here is my code:
Thanks for your time.Thanks for your time.Code:#include <iostream> using namespace std; char x; int spaces=0; int main() { char x; int spaces=0; while(cin.get(x)) { if (x !=' ') { if (spaces>0) { for (int i=0; i<spaces; i++) { cout<<' '; } spaces=0; } cout<<x; } else { if (++spaces==4) { cout<<'\t'; spaces=0; } } } } | http://cboard.cprogramming.com/cplusplus-programming/143007-help-replacing-spaces-amateur.html | CC-MAIN-2015-06 | refinedweb | 128 | 93.34 |
When - YOU! - what WSL needed to run, what worked, and what didn't, etc.
And ***wow!*** did you, our community respond! 🙂
This release was built by and for YOU!
On behalf of the WSL & Console engineering teams, a very sincere and grateful THANK YOU to all of you who've tried and used Bash/WSL over the last 12+ months, and especially to all of you who filed issues on our GitHub issues repo, contacted me on Twitter, submitted/voted for asks on our UserVoice, asked questions on StackOverflow, AskUbuntu, Reddit, our Command-Line Blog, the WSL Team Blog and elsewhere.
The massively improved Bash/WSL & Windows Console that we're shipping in Windows 10 Creators Update is due largely to all of you!
What's New in WSL?:
More compatibility
A key goal for Win10 CU was to dramatically improve WSL's depth and breadth of compatibility with the Linux System Call Interface (SCI). By expanding and improving our syscall implementations, we increase the tools, platforms, runtimes, etc. that our users need to run.
The result? In Win10 CU, most mainstream developer tools now work as expected, including:
-!
Note: Some of you may also have been following along with some intrepid explorations into running X/GUI apps and desktops on WSL. While we don't explicitly support X/GUI apps/desktops on WSL, we don't do anything to block/prevent them from running. So if you manage to get your favorite editor, desktop, browser, etc. running, GREAT 🙂 but know that we are still focusing all our efforts on delivering a really solid command-line experience, running all the command-line developer tools you need.
Ubuntu 16.04 support
While Win10 Anniversary Update delivered Ubuntu 14.04 support, in Win 10 Creators Update, WSL now supports Ubuntu 16.04. Any new Bash instances installed on Win10 CU will now install Ubuntu 16.04.
If you'd like to find out what version of Ubuntu you're running, enter the following at your Bash on Ubuntu on Windows Console:
$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04.2 LTS Release: 16.04 Codename: xenial
Important Note: Existing Ubuntu 14.04 instances are NOT automatically upgraded to 16.04: You must manually upgrade your instance to Ubuntu 16.04 in one of two ways:
- Remove & Replace (recommended)
- Upgrade In-Place
Remove & replace
If you're currently running an Ubuntu 14.04 instance, we recommend removing and replacing your existing instance with a fresh new Ubuntu 16.04 instance.
WARNING: The instructions below will delete your existing distro and any of the files you've stored in the Linux filesystem. Therefore, be sure to copy/move any Linux files you want to keep, for example, to a Windows folder (e.g. /mnt/c/temp/wslbackup/...) BEFORE removing and replacing your instance!
To remove and re-install your Ubuntu instance, run the following commands from a Cmd/PowerShell Console.
C:\> lxrun /uninstall /full /y ... C:\> lxrun /install
The lxrun /install command above will then download and install a fresh new copy of Ubuntu 16.04 onto your machine.
Upgrade In-Place
If your Ubuntu instance is particularly complex to configure, you can opt to upgrade it in-place, though this may not result in an optimal instance.
If you opt to upgrade your instance in-place, use Ubuntu's instructions for upgrading an existing instance:
$ sudo do-release-upgrade
Another issue that users quickly bumped into in Win10 AU was the fact that non-administrators could not ping a network endpoint. This has now been fixed in Win10 CU:
File change notification support (INOTIFY)
Another much-requested improvement is the ability for a tool to register for notifications when a file is changed. This is an essential capability used frequently by web, Node.js, Ruby and Python developers, and many others.
For example,.
Well, now you can! WSL supports inotify which allows apps to register for filesystem change notifications, which can then trigger actions, like rebuilding a project, or restarting a local web server. This works for both DrvFS and internal LXFs locations.
A key goal of building WSL was to reduce the "gaps" experienced when running Windows tools alongside Linux command-line tools and environments. When we shipped WSL in Windows 10 AU, we brought Linux and Windows alongside one another, but there was still a large "gap" - while both systems could share some of the same files, each environment was pretty isolated from one another.
Users often told us that they wanted to be able to invoke Windows applications from within Bash, and to invoke Linux applications from within Windows. So, we added this feature!
In Windows 10 Creators Update, you can now launch Windows apps & tools from within Bash ...
... and you can launch Linux binaries/commands/scripts from within Windows:
Read our blog post on this feature for more details:
UNIX and Netlink Socket improvements
Some aspects of UNIX sockets and Netlink sockets were not supported in Anniversary Update.
In Creators Update, UNIX datagram sockets and Netlink sockets, options & properties were added to WSL, enabling various forms of IPC that allow many modern tools to run on WSL.
For more details, review the WSL Release Notes..
For more details, review the WSL Release Notes.
Miscellaneous WSL Improvements
The WSL improvements listed above are just a selection of the most visible and noteworthy changes, but there are many hundreds of other WSL improvements included in Creators Update. Below are a few more, and don't forget to dig through the release notes for more detail.
- Linux processes exposed to Windows Process enumeration infrastructure so they show up on TaskManager, etc.
- Added features to help enable anti-malware & firewall tools to understand Linux processes
- Shared memory support required by PostgreSQL & other tools
Windows Console & Command-Line Improvements:
The Windows Console is one of the most fundamental pieces of the entire Operating System and has been part of Windows for several decades. Around 2 years ago, a new Windows Console team was formed to own give the Console its biggest overhaul in more than 30 years!.
So, what's new in the Console in Win10 CU?
Many VT Sequence Improvements
Because the Windows Console was not originally built to support *NIX, it was unable handle the different behaviors and output formatting codes (ANSI Escape Codes & VT sequences) generated by *NIX command-line tools & applications.
But, no longer:
In Windows 10 Anniversary Update, the Console was improved with the ability to handle most common VT Sequences, enabling it to correctly render the majority of simple text formatting. However, it lacked support for several advanced scenarios.
Another frequent ask from the community was for the Console to support > 16 colors. Support for 256+ colors is increasingly important when working with today's increasingly rich and sophisticated command-line tools, shells, etc.
In Win10 Creators Update, the Console has been updated to support full, glorious 24-bit color!
For more details, read the accompanying blog post announcing 24-bit color support in the Console.
Mouse support
With the number of rich, text-based UI's ever increasing, users wanted mouse support for tools like Midnight Commander, Htop, etc., so we added mouse support in Win10 CU.
Symlinks in Windows without Admin Rights
Symlinks are an important tool used comprehensively in Linux, but less so in Windows since admin rights were required to create symlinks, and the Console in which the symlinks were created had to be run elevated as admin - something users rarely do.
In Windows 10 Creators Update, the admin-rights restriction has been lifted for users who have enabled developer mode, allowing symlinks to be created from an un-elevated Console.
Read the release announcement for more symlink details & examples.
What's Next?
So, are we done yet? Noooo! Far from it!
Both WSL and Windows Console have backlogs stuffed full of improvements, features and new capabilities that we're eager to get working on.
In addition, while Console and WSL have been significantly improved in Creators Update, it is important to note that WSL remains a beta feature in Win10 Creators Update while we shave-off some rough edges and improve some core features and capabilities."!
And, as always, PLEASE KEEP YOUR FEEDBACK COMING: Let us know if you find issues when using WSL on the WSL GitHub issues repo, feel free to ask questions on Twitter, suggest new features on UserVoice, carry on all the great discussions on Reddit, StackOverflow, SuperUser, etc, and keep the comments coming on this Command-Line Blog, and the WSL Team Blog.
Onwards! 😀
Keep it up! Pretty sure everything I’ve wanted has been covered in this post… Love it.
Thank you for the Awesome blogpost 😉
Thanks James. Glad you like it! More on the way 😉
This is amazing and the way you’re responding to feedback and implementing it is also amazing. I love WSL so much, I hope you guys are having as much fun building it as I’m sure we all are using it!
Building this thing is a lot of hard work, but it’s a HOOT! 🙂
Thanks for all the work you guys are doing Rich, may not seem like it in the bug tracker on Git but there are numerous people that appreciate it. I will be checking NMAP and the like for functionality once I have updated.
LOL 🙂 Thanks Warchylde. Do keep the feedback coming – we love it all – good and bad: We couldn’t build this thing without all the feedback!
Awesome to see this. Thanks for all the improvements, it really sways my decision when I’m choosing windows/osx as a dev environment.
Sorry – no container support: You’ll want Docker for Windows!
currious about linux containers now …. only one way to find out ….
Sorry – no containers – you’ll want Docker for Windows.
Any particular reason why docker would not run on WSL? The missing thing INOTIFY is there now along with networking. Are the linux namespaces available as well?
The general answer to “Any particular reason why would not run on WSL” is “yes” 😉
INOTIFY support is only a small part of the work required to support containers. We’ve more work yet to do here.
You guys are awesome. Before I had to switch to another machine or fire up a virtual machine, we meant I had to share resources… but now, I’m working between Linux and Windows seamlessly. Thank you!
Woohoo! 😀 Great to hear – thanks for giving us a try … and for sharing your experience! KEEP IT COMING!
Stop saying you’re “running node.js in bash”. You don’t run anything in bash except bash scripts. Bash is not runtime environment.
But it’s so much more terse than “asking Bash to send a request to the kernel to spin up a new process into which node and its dependencies are loaded & bound, and then started by executing node’s main()”.
At some point, one has to replace absolute accuracy with metaphors that work.
I wish I could upvote this comment – well said 🙂
Holy Cow!!
This is immense!
Wow, so no more Cygwin/GitBash hacks for me… and definitely, no need to run a *nix in a VM just to feel at home! The Nu Windows is No Windows!
LOL 🙂 Thank you! Glad you’re as buzzed as we are! 😉
WSL is great – nice work guys, I’ve been using it for a few months now and find it invaluable for all sorts of stuff. As a developer all the best/good tools are on linux and I need connectivity via SSH to servers, scripting etc. WSL fits the bill nicely because I don’t necessarily want to install a full Linux on laptops etc or run a VM. Purists may disagree but I reckon its a great step forward. It’ll be nice to see a few of the rough edges with the filesystem etc sorted out – I’m installing the updates as I write;-)
Many thanks Mike! 😀 Glad we’re making your life easier and more fun 😀
Thanks for this, lets hope win console soon will be as good as macos, linux consoles.
That’s the goal … eventually 😉
At this point, the main issue I have with WSL is (missing) support for network filesystems and removable drives. Second would be wsl/win32 interoperability regarding symlinks and permissions.
Funny you should ask 😉
Yes we’ll be improving symlink interop in the future too.
We fully honor Linux DAC permissions in the Linux filesystem, but there’s no clean mapping between Linux DAC and Windows’ ACL’s so in the Windows Filesystem, everything is owned by root.
Regarding the other post: I saw this, it’s great!
“We fully honor Linux DAC permissions in the Linux filesystem, but there’s no clean mapping between Linux DAC and Windows’ ACL’s so in the Windows Filesystem, everything is owned by root.”
Well, yes and no. Have a look at NFSv4 ACLs (and how it interacts with unix-style permissions) and the RichACLs patch on Linux. This would be the way to go, IMHO.
Currently, having everything as 0777 on drvfs is annoying. Some software (ssh, gpg) checks the permissions and can be naggy about it.
Do you see the windows console continuing to develop alongside PowerShell? For a long while i’ve assumed it was just a matter of time before PowerShell replaced it. But it seems that the console is being actively developed. What is the relationship between the two?
Windows Console is a console/terminal app that communicates with command-line applications and shells like Cmd and PowerShell on Windows, and Bash, zsh, fish, etc. on Linux.
Windows Console is under active, heavy development as I type as we’re giving the Console its biggest overhaul in > 30 years!
PowerShell development also continues (by a different, partner team) in the open – see their GitHub here:
Thank you.
Thank you Microsoft!!! <3
I'm tired to reboot to linux/windows every time <3
Next time I want a tilling manager (like i3wm) for windows and that will be the best platform ever 😉
And why not NTFS support driver or whatever for linux to help the two system share folder easily? (on external hard drive for exemple)
I know I'm asking a lot ^^
Thank you again Microsoft!!! <3
LOL 🙂 Many thanks Archer. We don’t have any plans to support X/GUI apps at this time.
We do have some plans to improve filesystem interop, but it’s actually a lot more complex than it may appear! Bear with us! 🙂
Tried both in-place and replace methods, still seeing 14.04 Trusty. Any suggestions?
Yes – upgrade to Creators Update. If you’re getting 14.04, you’re running Anniversary Update.
Gotta say good job M$
Many thanks 🙂
“ifconfig” makes it sounds as if you’re using the legacy ioctl() based network controls as opposed to RTNETLINK? Is that the case, or does iproute and friends work?
I’m intrigued by this. I’m wondering if I could grab an iso somewhere and give it a try?
Okay, posted too early. According to the release notes you actually *do* support netlink based network controls.
Still no nuget with Mono?
Sorry – not our area: Might want to go ask Mono or NuGet teams.
Love you guys!
LOL 🙂 We love you (all) too 🙂
Any updates on an offline installer for WSL?
Nothing to report at this time.
This looks like a great update! Lots of things on my personal wishlist are on here.
One of the issues I had with the original release is that the filesystem you existed in was somewhat “virtual” in that if you went to your bash home directory and cloned a git repo, you couldn’t really access those files from windows for use w/in an IDE, etc. If you did manage to access and alter them from the windows side, it pretty much broke the whole experience because bash seemed to forget those files existed, or something along those lines (iirc).
Is that something that’s been addressed in this release?
Because of some major differences between the Windows and Linux filesystems, some “Gaps” remain, including the fact that Windows can’t see into the Linux filesystem.
We ARE, however, very keen to reduce this gap and have some work planned to improve this issue in the future.
>One of the key drivers for the Console overhaul was the need to enable the Console to render the output of Linux command-line
>tools & applications running on WSL.
The reason I’m done with the platform.
You waited on all of this until you had to do it- and you had to do it for Linux, of all things.
Prior to WSL, there were relatively few requests for VT support in the Windows Console, and there were many 3rd party alternatives to satisfy those that DID need Linux capable console.
is there any way to run an x server on this?
Many do, yes. Though note that we don’t spend any time adding/massaging features to support X/GUI apps/desktops, etc. That said, we don’t do anything to block them either – so if they work for you, GREAT! 😀
Wow! That’s awesome 🙂
LOL 🙂 Thank you! 😀
Wow, this is all so great!
Glad you like it – many thanks 🙂
Sounds GREAT !
. . . But FATALLY flawed unless you can affirmatively answer the following conundrum: when will Microsoft sell me a _single_ (i.e., retail) copy of a version of Windows 10 that does NOT include built-in SPYWARE, or non-security updates (i.e., Win10 Enterpise LTSB–a product you already make)?
I understand your concerns, but turning down the hyperbole a little, the telemetry we gather is NOT spyware – it does not record your personal information nor the contents of your files, etc. It’s all about figuring out what features users are using, where errors occur, and all to the goal of delivering an ever improving platform and OS.
As to whether you’ll be able to buy an LTSB type SKU – that’d be GREAT feedback to share via the Feedback Hub. OF course, no guarantees, but if you don’t ask, you don’t get!
That’s all good and stuff, but is anyone working on the Windows 10 bugs that plague both desktop and ESPECIALLY mobile?
Of course. Be sure to submit feedback via the Feedback Hub on Desktop and/or Mobile: Your feedback gets routed directly to the feature owners who’re able to diagnose, prioritize and schedule improvements.
This is an awesome update! makes WSL much more useable!
Glad you like it 🙂 Keep the feedback coming!
That’s great! Excitedly I uninstalled, [rebooted just in case] re-ran bash – it downloaded from windows store, extracted, installed and and its 14.04 trusty again. Any ideas?
99% sure it’s because you’re still running Anniversary Update. If you’re on Creators Update, you’ll get 16.04
Amazing work. A++
Only thing left that really sucks about Windows now is the very unloved client development frameworks. If only MSFT would get this passionate about evolving the UI stack…
There are some GREAT improvements in UWP app platform, and in the Direct Composition features giving apps greater control over more sophisticated UI layout & rendering options. And DirectX12 is pretty cool!!
And, yes, work continues on all fronts – lots more cool stuff on the way w.r.t client app development 😉
Is Windows 10 about to become a better Linux than Linux?
I sincerely hope you guys keep up with the improvements and make a truly compatible Linux within Windows 10.
Better Linux than Linux? I don’t know if we’ll ever reach that lofty goal – there are likely to be things that we’ll simply never be able to do that full Linux can. However, we’re enthusiastically trying to support all the scenarios, tools, syscalls, and features required for our core goal: To run all the dev tools you need to do your best work, and have the most fun … on Windows 🙂
Cool! Can I now run an sshd daemon using private key authentication?
Yes
Amazing work team!
Many thanks Tim 🙂
THX for this nice tutorial
Hi,
I’m getting an error while I’m trying to start apache2, which is the following:
—————-
➜ ~ sudo service apache2 start
* Starting Apache httpd web server apache2
*
* The apache2 configtest failed.
Output of config test was:
apache2: ../sysdeps/posix/getaddrinfo.c:2603: getaddrinfo: Assertion `IN6_IS_ADDR_V4MAPPED (sin6->sin6_addr.s6_addr32)’ failed.
Aborted (core dumped)
Action ‘configtest’ failed.
The Apache error log may have more information.
—————-
The apache error log doesn’t seem have anything relevant.
Is there any way to fix it? I’ve reinstalled fully the whole linux subsystem.
Which OS build are you running, and which version of Ubuntu?
Hi, Rich. I loved your appearance on Windows Weekly, BTW. So much so that you inspired me to try this out.
Unfortunately, as soon as I started playing with it I ran into this exact same “IN6_IS_ADDR_V4MAPPED” problem. I get the error when I try to start Apache2 or ssh. I’m on the Creator’s Update and I’ve upgraded to Ubuntu 16.04.
Hi. Could you please file an issue on our GitHub & do please complete the default template bug report:
great post. soco on me!
ROFL 🙂 Thanks Joe – will hold you to that! 😀
Still useless to me until usb devices will work in Bash.
Funny you should say that – last week’s Windows 10 Insider build included brand new support in WSL for USB storage devices and USB serial comms! Jump on the fast ring to give it a try!
lsusb returns “unable to initialize libusb: -99”.
I’m on Windows 10 (10.0.14393 Build 14393)
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
14393 is Anniversary Update which is now very old and missing LOTS of important functionality. I strongly encourage you to upgrade to Windows 10 Creators Update and upgrade or (better) nuke & reinstall your Linux instance so that you get Ubuntu 16.04.
Alas, USB support wasn’t in Creators Update either, but is emerging in the early post-Creators Update Insider builds.
Pretty amazing stuff! Great work MS.
This is all very exciting, but one thing I was really disappointed to find out about is symlinks are still not cross-compatible between Linux and Windows one bit.
Windows still can’t read Linux-created symlinks – it sees the symlink (and even shows the file size of the file it points to on Linux), but it can’t read it.
Conversely, Windows-created symlinks don’t even show up in Bash. Completely invisible.
Can we hope for this to be resolved?
Yes, we do plan on making the Windows and Linux symlinks line up better in future releases. This mismatch is a scheduling issue combined with some surprising complexities we need to spend some time resolving.
Sir!
Great Job.
Good Luck
Nice! Please add unicode support. Fore test add to ~/.bashrc
PS1=”\[\033[01;37m\]\$? \$(if [[ \$? == 0 ]]; then echo \”\[\033[01;32m\]\342\234\223\”; else echo \”\[\033[01;31m\]\342\234\227\”; fi) $(if [[ ${EUID} == 0 ]]; then echo ‘\[\033[01;31m\]\h’; else echo ‘\[\033[01;32m\]\u@\h’; fi)\[\033[01;34m\] \w \$\[\033[00m\] ”
Source ~/.bashrc and see unix invite string.
Use apt without -get, example:
sudo apt install apache2
upstart in 14.04 cannot autostart apache2 and create folder for PID
Use command “ip a” instead “ifconfig”
Use True Color theme in Midnight Commander (mc):
More Unicode & UTF-8 support on the way
You can already run apt without the -get if you wish – this is a distro thing, not a WSL thing
Apache2 now works on WSL – start via “$ sudo service apache2 start”
If you prefer ip, use it! 🙂
If you want to use MC’s themes, go for it!
Thanks for you reply, you do great work! 😉 Good luck
It’s great that these impovements are coming to the Windows command line, but I feel like there are still way, waaaay too many features missing from it. What makes the Mac and Linux terminals so great is that they’re really the complete package. They have tab support, auto-hiding scroll bars and many other little things that make them so comfortable for working in for long periods of time. I mean, it’s sort of crazy that the Windows cmd app doesn’t even let you dynamically resize text by pressing +/-!
All of the features you list are on our backlog, but as I mentioned in the post, we’re giving the console its biggest overhaul in > 30 years. One does not just bolt on a stack of features to something as core and critical as the Console, and hope for the best. We are nearing completion of some of the most important parts of the Console overhaul, so will be able to return to working on more visible improvements in future releases.
Bear with us while we create a FAR more capable console for the future.
In the meantime, while I think it safe to say that the Windows Console has improved leaps and bounds from where it was (i.e. unable to handle any VT sequences at all), there are several third party Consoles you might want to explore including ConEmu, Cmder, Console2, ConsoleZ, Hyper, etc.)
Great news! 😀 I usually turn to Git-Bash, since I prefer Bash anyway, but since Git-Bash runs in a cmd-window… 😉
Anyways, thanks for your reply. So glad to hear that you’re all thinking about these things as well. 🙂 I feel like Ubuntu on Windows could become really great with these new features.
Providing an api (sort of win32 ptys) to implement third-party alternatives to the win32 console would be a way to fullfill such requests. Otherwise, you will be stuck with providing a one-size-fit-all answer to an endless stream of (sometimes conflicting) requests.
Console2, ConEmu & al are all using kind of hacky workarounds, with a hidden console, which is not nice.
Miniksa said this is in the works, but zadjii said that it’s prolly not going to be in RS3
Nice, some really great improvements.
One question, in the Windows Linux Interop section of the post you have a screenshot of creating and writing a file from WSL then opening it with Notepad.exe, is this now supported/recommended? As this blog post suggests it’s not:
Thanks
It is entirely supported to access files stored in the Windows filesystem (e.g. c:\… or /mnt/c/…) via Windows or Linux tools.
It is recommended NOT to access files stored under the root of your Linux filesystem using Windows tools.
Really great news!
Can windows console programs also use VT100 sequences? If yes, do they need to opt-in in any way?
Absolutely – as per the fine manual 😉 ()
Output Sequences
The following terminal sequences are intercepted by the console host when written into the output stream, if the ENABLE_VIRTUAL_TERMINAL_PROCESSING flag is set on the screen buffer handle using the SetConsoleMode flag. Note that the DISABLE_NEWLINE_AUTO_RETURN flag may also be useful in emulating the cursor positioning and scrolling behavior of other terminal emulators in relation to characters written to the final column in any row.
I would like to be able to install Bash for Ubuntu on Windows once for all users, instead of separately for each user. I normally use an account with admin rights for administration and an ordinary user account for most other work (just as I do in Ubuntu and MacOS).
I am looking forward to using the expanded features of BoUoW in the Creator’s updatge.
We only support installing a Linux instance for each user – if you want a single/shared multi-user Linux instance, you might want to consider running Linux in Hyper-V.
The practice you describe above was recommended for XP users, but the introduction of UAC in Vista+ eliminated the need to have a separate admin account since normal, non-elevated processes do not have admin rights by default and require user action/authentication to elevate.
Awesome!
Are there any special/additional hazards related to an in place upgrade from Ubuntu 14.04 to 16.04 related to the WSL system; or is it just the same set that would be faced in upgrading a stand alone Ubuntu install?
There are always hazards with in-place upgrades of any OS, but none specific only to a distro running on WSL.
This said, it’s possible that something you’ve got installed doesn’t get upgraded correctly, and so you may end up chasing ghosts if you do find issues. Thus, it may be faster and safer over-all if you do a clean install.
Really a lot of improvements!!
sshd, nginx, Apache are supported now. Does that mean they are installed as system services and started up automatically? Or do we still need some workaround for automatic startup of sshd?
You have to install them and start them manually (e.g. $ sudo service ssh start) : We don’t yet support running background Linux daemons/services without an open Bash console.
I was really hoping these issues (2 links below) would be solved now within creators update 15063 (just installed).
Adding autostartssh.vbs to Scheduler causing subsequent problems when trying to open another bash window after logging in are not the great fix that can make CygWin redundant for the sake of headless ssh/scp.
If there are newer builds with solutions for me to check in this respect please let me know.
It’s quicker to install and configure CygWin than apply current work-arounds. Shame.
Though note that Bash/WSL does FAR more then Cygwin. Not dissing Cygwin here – it’s a great tool and I use it daily too, but if/when you need to run genuine Linux tools, complex build toolchains, etc., WSL is your best bet.
Remember, WSL is not a server-quality product yet. We’ve more engineering to do before we get there.
And because WSL instances are per-user, they cannot startup before you’ve logged-in. And when you close your last Bash session, we tear-down all running Linux processes.
We are (of course) working to improve WSL’s features over time but in the meantime, you could just run OpenSSH on Windows instead:
Thanks for the suggestion, but I need it to run a few other Linux commands too, without me having to be interactively logged in, hence, CygWin.
@Tom: You could run Win32 OpenSSH, and start bash.exe from there.
I’m using ConEmu now because I could not make windows console appear full screen, does this feature available now ?
ALT + ENTER is your friend 😉
A lot of great work went into these tools and I’m convinced they will prove to be a real boon for productivity on the Windows platform. As a developer I’m grateful this project even exists and we’re all fortunate its turning out to be successful. But I think it’s important to remember why a quality Linux subsystem on Windows is possible but the reverse is fraught with difficulty.
Awesome stuff! It’s been really exciting following the progress, can’t wait to see whats next 🙂
Ooh, we’ve got some fun stuff on the boil 😉
Im seriously considering switching back to Windows again due to this project alone.
THANK YOU THANK YOU THANK YOU! don’t stop:)
Great improvements, though I’m still struggling to get lsusb working; resulting in “unable to initialize libusb: -99”
Would be good for my own sake to install adb android tools under this bash. Thank you
Could you file a bug on this in? Much more likely to be able to help you there than in these comments 🙂 Thanks!
Long time Microsoft-hater here, but I’ve got to say I really love what you’re doing with this. I’d love to see it evolve into a deployment platform and not just a development platform. What I’d *really* love is to see it ported to Windows Server, and give us the ability to run UNMODIFIED Linux Docker containers on Windows Server. How cool would that be?
Hi there. Thanks for taking the time to share your feedback – we really appreciate it!
So would we 🙂 We continue to work on many improvements throughout WSL with the goal of supporting ever more features and scenarios. The Server and Container scenarios are particularly popular asks, and are on our backlog, but we’ve got some work to do to get there. In particular, we have to figure out how to improve the richness of our networking support, disk IO perf, how to run system-wide daemons in the background at machine startup, etc.
Bear with us we’re running fast and working hard 😉
Hello Rich! I recently uninstalled the 14.04 image and installed the 16.04 one and noticed something strange and I was wondering if this was done on purpose or not. First of all, valgrind works now, which is GREAT, I can finally do my college assignments on wsl just as fine as on Ubuntu (I got it to work in 14.04 too but by building it from source though), and secondly, before upgrading to CU and 16.04, clear used to work in the expected way, it scrolled down so that the prompt was on the first line of the terminal and I could scroll up to see my previous commands, but now, when I try to scroll up after a clear, my previous commands/output are gone, it’s like the terminal gets reset or something.
Glad WSL is rocking your boat 🙂 I believe the old behavior was erroneous (albeit occasionally useful) 🙂
SYSTEM command behavior has changed (bug?)
Consider following instruction: start /B php -f somefile.php > somefile.txt
In previous version of Windows somefile.txt contains php script output, but in Creators Update it contains system command output!
Can’t repro on my Creators Update machine running Cmd:
c:\temp>start /B git --version > somefile.txt
c:\temp>type somefile.txt
git version 2.11.0.windows.3
c:\temp>
ms wtf? farmanager (in admin mode), bash as default shell… all seams to be cool… until you open phpstorm or vscode with TERMINAL SET TO USE BASH.. and you NOW (since that creators update) get error that you can’t run simultaneously multiple bash instances with different rights. why you did that crap??? on AU it was working just fine.. and in CU you broke it. what for?
We “broke it” because it’s a potential security issue and we’d rather keep you safe.
The challenge here is that all instances of a distro share the same session and there are potential risks with running a Linux binary (e.g. /bin/bash) with admin privileges inside a session with non-elevated binaries.
The question I’d ask is why are you running any Linux process with elevated Windows permissions? That is a potentially very dangerous thing to do.
i do run “far manager” in admin mode, so i don’t have issues copying into folders like program files (as far as i remember until win 8 you was just able to put any file in any folder, but then you had to be admin for that -> that’s why i’m running file manager in admin mode)
and since far (it’s built ontop of cmd) – same as nc (norton commander under dos), vc (volkov commander), mc. and when i do bash -> it logins me with my “user” account. bash.exe is maybe launched as admin but it still uses the normal user inside.. so for some tasks i still have to use sudo to be able to run any stuff in wsl as root
and i also use wsl in my IDEs but now i had to either run IDEs as admin or use far (file manager similar to total commander) in non admin mode.. and now i have hundreds of confirmations to do this and that on my drives. which is very annoying
Artem: You don’t understand what elevation does: if you elevate a process, it is given FULL ADMIN RIGHTS. Any process that you run elevated CAN DO WHATEVER IT LIKES to your hard drive, registry, etc., and can modify the contents of Program Files\*, Windows\* etc.
I CANNOT RECOMMEND STRONGLY ENOUGH that you AVOID RUNNING APPLICATIONS ELEVATED AS ADMIN wherever possible. If you do need to elevate a process so that it/you can modify something in a protected area of the filesystem or registry, then be sure to close that app as soon as you’re done.
Ever wonder why XP was so easy to infect with malware, etc? It’s because it lacked UAC (similar to sudo in Lunux) and users ran as local admin by default, thus, any app they ran had admin rights, and so malware was able to inject itself onto those machines.
UAC masks-off the admin-rights bits from the SID’s of users who are in Local Admin group. Thus, any standard process launched by Local Admin users gets cannot screw up core system files, OS components, machine-wide registry settings, etc. If, however, a process is launched elevated, its given full admin rights and you can but hope that it isn’t malicious!
Also, Far doesn’t run atop Cmd. Cmd is a command-line shell and can launch GUI or command-line apps. When you launch Far from Cmd, a new Win32 process is started into which Far is loaded and executed. If you launch Far elevated with admin rights, it can do whatever it likes to your HDD.
Similarly, Bash.exe is a command-line app through which you launch Linux’s /bin/bash. Thus, if you launch Bash elevated, it launches /bin/bash elevated, and anything the user then launches runs elevated and all of these things are free to do anything they like to your hard drive.
I hope you have a well tested backups & recovery system!
I want to be able to type atom . in WSL and open Atom with the current directory.
for rails development is this work now
alias atom=’/mnt/c/Windows/System32/cmd.exe /c “atom .”‘
in my .bashrc
You don’t need to invoke cmd first: If your Atom is on your Windows path, you could just run `atom.exe`. If you’d rather not type .exe every time, just alias `atom` to `atom.exe`. If atom is not on your path, alias `atom` to `<full path to atom.exe`
This is going to get a lot of people to come back to Windows from Mac or Linux. Really looking forward to this. Thanks Microsoft 🙂
Thanks Daniel! 🙂 Our aspirations are simple: Just allow devs to get their work done on Windows. Period. If that means Windows becomes a more productive platform for devs than other platforms, great 😉
I’m trying to remove and replace to upgrade to Ubuntu 16.04. However, I’m hitting error 0x80070003 when uninstalling, and error 0x80070091 when installing. Now I’m in a broken state where I can’t uninstall (it just throws “The LX subsystem has an install or uninstall operation pending”), nor can I reinstall (I just get that error code every time). Any ideas?
You need to reboot after enabling WSL and BEFORE running Bash otherwise WSL won’t have been able to initialize and won’t work correctly.
To get you back in working state, I would:
Disable WSL
Reboot
Delete the `%localappdata%\lxss` folder using Explorer / PowerShell / Cmd
Re-enable WSL and reboot
NOW run Bash from the command line to download and install Ubuntu
Hope this helps
Im having some issues regarding inotify, using W10 Creators
1. When I change a PHP file, it still doesnt reflect the changes. I need to stop and start the Apache server. Anything i need to do?
2. Im copy the files into …var\www\test Apache is serving the web files from here. Do i need to mount a drive and use it as the source for the website?
any recommodations will be apreciated
Be sure you’re not copying files into …/var/www/test using Windows tools (it’s fine to do so from within Bash).
I’d recommend storing your website files under `:\` and then configure Apache to find the website files under `/mnt//`
Hello.
This may be a beginner question but what are you using to allow many applications to run under the same terminal in that screenshot where you seem to have htop, midnight commander, etc running?
Thanks and great work!
That, Simon, is tmux … it’s a terminal/console multiplexor – essentially provides a one or more virtual consoles, each in their own region of the host terminal/console. It’s a little awkward to get started, but its an indispensable tool once you’ve gotten the knack 🙂
This is the main tmux site:
And if you want more, Brian Hogan (@bphogan on Twitter) has written a great book about Tmux:
The file system remains poor.
Hey Michael – HUGE fan of your work – thanks for running WSL through your tests 🙂
Yes – we’re very aware of our disk perf and are working with several teams throughout the kernel to make improvements here over the next few releases. As you can imagine, changes deep in the IO stack need to be VERY carefully engineered and tested 😉
Awesome product, WSL is changing my life!. Just wish that I could access and edit files in my homedir with Windows editors. Seems when I create files they don’t show up in Linux land. 🙁
Yep – we’d love to be able to do this too! 🙂 We’re working on improving the filesystem interop in future releases.
Awesome job guys! The progress your making is apparent. Windows is becoming more and more a great environment to develop in.
If you haven’t already seen it though, docker-compose.exe won’t run correctly in bash. It turns out there’s an issue with the version of python available where it doesn’t support the code page of the Windows UTF-8, i.e. 65001. Docker-compose fails with errors like this:
LookupError: unknown encoding: cp65001
Thanks 🙂
Being tracked in this issue:
Please follow-along there for triage & eventual resolution.
I realize that the italicized note points out that X/GUI apps and desktops aren’t explicitly supported, but are not prevented from attempting to run them. I am interested in trying to do so, but have been stymied by failure to initiate a ‘startx’ command. Is there any possible way from within the bash environment to launch an Xserver without interfering with the Windows interactions with the underlying video hardware? I installed xinit and have tried to invoke startx, but it fails complaining about an inability to transfer to the console server. Has anyone found a way to invoke a X server under WSL?
You could create a bash alias for startx=”/mnt/c/Program Files/…/xserver.exe” (filling out the path and correcting the name of your x server app.
Since the Creators Update, the cmd crashes the layout of the Logoff/Shutdownscripts!!! The scripts run, but i can’t see anything IN the cmd window! (gpo: Show Logoffscripts = yes) The settings-dialog of the cmd windows also crashes, no text is displayed. My – interaktive backup.cmd doesn’t work!
Thanks for the report. Could you please follow the instructions & file a bug on our GitHub repo here:
Now i have the solution for my problem:
It’s not an cmd Problem!
It’s the Service:
Energy Server Service queencreek (ESRV_SVC_QUEENCREEK).
(Intel(r) Energy Checker SDK. ESRV Service queencreek)
Stopped the Service. And all looks fine!
Volker
How do I access remote shares from wsl?
I have X: mapped to a share on another device, from which I need to copy a bunch of files for a Linux program I want to compile on WSL, but “ls /mnt/” shows only C and d.
I’d use scp, but there is some issue where the two ssh tools won’t talk to each other due to encryption conflicts, that I don’t want to take the time to troubleshoot right now.
Suggestions welcome!
thanks in advance.
Fred
We only mount fixed drives into your Linux instance by default. If you’d like to mount a network share, install the latest Insiders build and follow the instructions here:
To SSH, you must be running sshd configured with the appropriate server keys and/or authn settings.
Thanks, Rich!
For mounting network drives, is 16193 recent enough, or does it require 16199? (I’m currently on the slow ring while I rebuild after a–for me, at least–non-recoverable failure of 16199, and am not ready to go back to the fast ring yet).
thanks again!
It was already really good. Now I am feeling secure with the idea that this has come to stay. That makes me real happy. I was never an avid Windows fan, I am really warming up 🙂
Glad we’re winning you over 🙂
Thanks for bringing Ubuntu on Windows, this is really helpful. But I need your help installing gcc compiler on my Ubuntu (running on windows). For some reason I am unable to compile simple C program in bash. The Error that I am getting when installing gcc on bash
Err:1 xenial-security/main amd64 libc-dev-bin amd64 2.23-0ubuntu7
404 Not Found [IP: XXX]
Ign:2 xenial-security/main amd64 linux-libc-dev amd64 4.4.0-79.100
Ign:3 xenial-security/main amd64 libc6-dev amd64 2.23-0ubuntu7
Err:1 xenial-security/main amd64 libc-dev-bin amd64 2.23-0ubuntu7
404 Not Found [IP: XXX]
Err:2 xenial-security/main amd64 linux-libc-dev amd64 4.4.0-79.100
404 Not Found [IP: XXX]
Err:3 xenial-security/main amd64 libc6-dev amd64 2.23-0ubuntu7
404 Not Found [IP:XXX]
E: Failed to fetch 404 Not Found [IP: XXX]
E: Failed to fetch 404 Not Found [IP: XXX]
E: Failed to fetch 404 Not Found [IP: XXX]
E: Unable to fetch some archives, maybe run apt-get update or try with –fix-missing?
Can you please help in this?
Thanks
You should ensure you have internet access and then run the following – looks like your APT package sources are unreachable.
sudo apt update
sudo apt upgrade
Then install gcc – in fact, I’d recommend installing `build-essential`.
You might want to follow along with Scott Hanselman’s guide to getting started with WSL:
I have install VSCODE in my Linux subsystem(WSL) but I unable to launch it ,I think,since it has no privileges to access graphic driver calls directly it cannot up and run required Graphic Elements.
Any good solution to execute simple GUI application such as gEdit,VSCode without haiving Xserver like application on windows side?
If you want to edit your code in VSCode & build/run from within WSL:
1. Install VSCode for Windows
2. Create/clone your project into a Windows folder, e.g. c:\dev\project1
3. Navigate to that folder in Linux: $ cd /mnt/c/dev/project1
4. Open that project in VSCode: $ code .
“it is important to note that WSL remains a beta feature in Win10 Creators Update while we shave-off some rough edges and improve some core features and capabilities.”
Will future LTSB versions be able to install WSL after it moves out of Beta? I was so thrilled when I got it to install and work on my base 1607 LTSB image before applying windows updates, but then it broke and I cried.
Yes, WSL should come to future LTSB releases.
Thank you. I managed to build my kernel module however insmod returns: Function not implemented.
Are kernel modules part of your roadmap ?
Glad you were able to build your kernel module.
However, since WSL is an module in the NT kernel, which differs in many significant ways compared to the Linux kernel, it will rarely make sense to run Linux kernel modules in the NT kernel.
Will I be able to Sideload Ubuntu Desktop Edition or Is it essential to download ubuntu from the Windows app store and Sideloading Ubuntu Desktop would not work?
We do not support X/GUI desktops/apps/tools at this time. That said, we do nothing to prevent their use, but we don’t test X/GUI apps and are not prioritizing fixing issues that impact X/GUI apps.
bash does not run .bashrc
code. in a dir opens vs code in the windows/system32 dir
notepad.exe Hello.txt can’t find file though it is in dir.
Yes, bash does run .bashrc.
`code .` (note the space between code and .) opens VSCode in your current folder … ** AS LONG AS YOU’RE IN A FOLDER REACHABLE FROM WINDOWS. ** In other words, since Windows cannot (currently) see anything under your Linux root, consider moving your code into a folder under /mnt// where is one of your machine’s fixed drive letters (e.g. `c`) and is the path to the folder containing your source. For example:
$ cd /mnt/c/dev/project1
$ code .
Good grief, man! This is AMAZING! Look at me! I’m doing Linuxy things over here!!
Gosh, I have always wanted to play around with *nix but didn’t even know where to start. Now… Whoomp, There It Is!
Wow! I just spun up Apache, MySQL, PHP, Drupal… after STRUGGLING with virtual boxes and assorted third-party LAMP stacks.
Linux! It’s right THERE! And all the instructions on The Google for everything with dollar sign prompts actually makes sense and WORK!
Man, I have Node.js… Drush… Composer… Fahgetaboutit!
Thank you, guys!!
LOL 🙂 Great to hear you’re enjoying being able to run all your favorite tools on Windows and Linux on WSL. Let us know how you get on via twitter: @tara_msft & @richturn_ms 🙂 | https://blogs.msdn.microsoft.com/commandline/2017/04/11/windows-10-creators-update-whats-new-in-bashwsl-windows-console/ | CC-MAIN-2018-17 | refinedweb | 8,380 | 72.16 |
Optimize with a SATA RAID Storage Solution
Range of capacities as low as $1250 per TB. Ideal if you currently rely on servers/disks/JBODs
An action is a piece of code that is executed when a particular URL is requested. After actions are executed, a result visually displays the outcome of whatever code was executed in the action. A result is generally an HTML page, but it can also be a PDF file, an Excel spreadsheet, or even a Java applet window. In this book, we'll primarily focus on HTML results, because those are most specific to the Web. As Newton's Third Law states, "every action must have a reaction." (An action doesn't technically have to have a result, but it generally does.) Although not "equal and opposite," a result is always the reaction to an action being executed in WebWork.
Suppose you want to create a simple "Hello, World" example in which a message is displayed whenever a user goes to a URL such
as. Because you've mapped WebWork's servlet to
*.action, you need an action named
helloWorld. To create the "Hello, World" example, you need to do three things:
HelloWorld
Let's begin by writing the code that creates the welcome message.
Start by creating the action class,
HelloWorld.java, as shown in the listing below.
HelloWorld.java
package ch2.example1;
import com.opensymphony.xwork.Action;
public class HelloWorld implements Action { private String message;
public String execute() { message = "Hello, World!\n"; message += "The time is:\n"; message += System.currentTimeMillis(); return SUCCESS; }
public String getMessage() { return message; } }
The first and most important thing to note is that the
HelloWorld class implements the
Action interface. All WebWork actions must implement the
Action interface, which provides the
execute() method that WebWork calls when executing the action.
Inside the
execute() method, you construct a "Hello, World" message along with the current time. You expose the message field via a
getMessage() JavaBean-style getter. This allows the message to be retrieved and displayed to the user by the JSP (JavaServer Pages) tags.
Finally, the
execute() method returns
SUCCESS (a constant for the string
"success"), indicating that the action successfully completed. This constant and others, such as
INPUT and
ERROR, are defined in the
Action interface. All WebWork actions must return a result code—a string indicating the outcome of the action execution.
Note that the result code doesn't necessarily mean a result will be executed, although generally one is. You'll soon see how these result codes are used to map to results to be displayed to the user. Now that the action is created, the next logical step is to create an HTML display for this message.
WebWork allows many different ways of displaying the output of an action to the user, but the simplest and most common approach is to show HTML to a Web browser. Other techniques include displaying a PDF report or a comma-separated value (CSV) table. You can easily create a JSP page that generates the HTML view:
Archived Discussions (Read only) | http://www.javaworld.com/javaworld/jw-10-2005/jw-1010-webwork.html | crawl-003 | refinedweb | 512 | 55.95 |
I've just started programming in C++, err at least learning how. I've programmed in VB6 and VB.Net for a long time, I just decided I should learn a new language.
Anyway, I went over to Microsofts' site and got Visual C++ 2008 Express Edition as a compiler. I've created an empty project which right now just contains Main.cpp and Libraries.h, but it won't compile. It keeps telling me the exe of the project cant be found, even though it builds successfully. Am I missing something as a c++ newb or what?
Here's the code, in case it matters. It does nothing at the moment, but I'm still confused as to why it wont compile and run.
Main.cpp
Libraries.hLibraries.hCode:#include "Libraries.h" void Main() { return 0; }
Code:/* LIBRARIES HEADER FILE Used to refer parts of the project to some of the more commonly used libraries. */ #include <iostream> using namespace std; | http://cboard.cprogramming.com/cplusplus-programming/125605-newb-question.html | CC-MAIN-2015-48 | refinedweb | 161 | 85.39 |
I'm a c programmer, and I just started a personal project of mine to get me in to c++. When I decided to do this, I knew that c doesn't really do strings too well, and there's not much I can do about that, so I decided to use that excuse to learn c++.
I am writing a basic keylogger console program. I am running vista with visual studio c++ 2008. I do not need my code to be able to run on any other OS other than windows.
What I have written so far is based on c++ tutorials found on google, programming sites, etc. I KNOW that the following code probably has numerous conceptual/design problems with it, so I am asking any well-experienced c++ programmer to give me some advice and also EXPLAIN why in plain english so I can understand the principle behind your method.
OK, now for the parts I am having problems with.
1. For the infinite loop I am creating to get the key pressed: What would be the best way to get the actual value of the key being pressed, and then write that to the logfile? The way that I am trying to design it now is getting the key pressed 1 character at a time, then writing that to the logfile. After that key is writting to the logfile, clear that value from the char variable and read in the next key press. OR, would it be better to write a chunk of char data to the logfile at a time?
2. Is there a function that can get ALL keys pressed, including F1, F2, etc. I've already experimented with _getch() and it just returns a whitespace with an F key is pressed. I obviously can't use a regular char key, due to the nature of my program.
Here is what I have so far. Any help is HIGHLY appreciated
Code:#include <iostream> #include <string> #include <time.h> #include <ctime> #include <fstream> #include <conio.h> #include <windows.h> int main() { // declare variables char buffer[1]; // display program version and contact information std::cout << "Keylogger 0.1\nWritten by:\tMark Born\nContact:\tmcborn@gmail.com"; // checking for existing log file std::fstream logfile("keylog.log"); if (std::basic_fstream::is_open) { std::fstream logfile("keylog.log", std::fstream::app); std::cout << "Existing log file found. Initializing..\n"; } else { std::fstream logfile("keylog.log", std::fstream::app); std::cout << "No existing log file. Creating log file...\n"; // showing and hiding console window, loop for recording keystrokes std::cout << "Keylogger has initialized.\nPress F10 to hide console window.\nPress F11 to show console window.\nPress F12 to shutdown.\n"; for (;;) { while (_kbhit()) { std::cin.get(buffer); if (buffer == "F10") { HWND hWnd = GetConsoleWindow(); ShowWindow(hWnd, SW_HIDE); } if (buffer == "F11") { HWND hWnd = GetConsoleWindow(); ShowWindow(hWnd, SW_SHOW); } if (buffer == "F12") { logfile.close(); return (0); } } } | https://cboard.cprogramming.com/cplusplus-programming/100782-fstream-file-i-o-getting-keystrokes.html | CC-MAIN-2017-09 | refinedweb | 482 | 67.04 |
[Solved] Accessing UI components from another file
Hi,
I'm new to the forum and new to qt as well. My problem may seem simple to you but i can't find solution.
I have a .ui form (mainform), I added a separate class (functions.cpp and .h) in which I will put all the functions (calculation) needed in my work. The problem is that results of these calculations have to be displayed in the mainform (lineEdit for example).
So how can I write in functions.cpp to access components of the mainform.ui
What I have tried is:
@#include "mainform.h"
void myFunc()
{
MainForm::ui->text1->setText(...)
@
but it does not work.
I tried also to include the ui_mainform.h and use the class Ui::Mainform but no success.
So would someone please give me some hints.
Thank you in advance.
- koahnig Moderators
welcome to devnet
If you like to have access to the values of your form, you may pass a reference or a pointer to the form to your functions for access.
Are you experienced with OOP and C++?
Hi koahnig,
Thanx for the reply and the welcome. I have some experience but not so advanced.
I have already tested what you said but it says Ui::MainForm* MainForm::ui is private.
@#include "functions.h"
#include "mainform.h"
#include "ui_mainform.h"
MainForm *form=new MainForm();
functions::functions(QObject *parent) :
QObject(parent)
{
}
void setCont()
{
form->ui->lineEdit->setText("some text");
}@
Where did I make mistake?
- koahnig Moderators
You should write some access functions for MainForm.
E.g.
@
class MainForm : public QWidget
{
Q_OBJECT
public:
MainForm(QWidget *parent = 0);
~MainForm();
QString getName () const; void setName ( const QString & name );
private:
Ui::MainForm ui;
};
QString MainForm :: getName () const
{
return ui->lineEdit->text();
}
void MainForm :: setName ( const QString & name )
{
ui->lineEdit->setText ( name );
}
@
NOTE: this is brain to keyboard just providing an example.
However, i would also recommend you to have a look at some C++ tutorials. One needs a bit of experience with C++ to work with Qt.
Thanx for your reply.
You really make me scared with your example :)
It's only an example but my application can contain like 20 text edits, tables,.... And the actions to do on these components can be just everything: set text, change caption, insert rows,... so I think it's no way to wirte functions for each of these stuff.
I am sure the right way is to pass a pointer to the form but I can not yet find my way.
Any other suggestions are highly appreciated.
In your header you should declare all of your widgets:
@
private:
QTextEdit *text1;
QTextEdit *text2;
QLabel *label1;
// and so on....
@
Hi,
I changed Ui::MainForm ui to be public in the MainForm and it compiles without problem but nothing happens when I press the button. I send here my codes.
mainwindow.h:
@
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
~MainWindow();
public:
Ui::MainWindow *ui;
private slots:
void on_pushButton_clicked();
};
#endif // MAINWINDOW_H
@
mainwindow.cpp:
@#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "functions.h"
functions f;
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
}
MainWindow::~MainWindow()
{
delete ui;
}
void MainWindow::on_pushButton_clicked()
{
f.setCont();
}
@
functions.h:
@#ifndef FUNCTIONS_H
#define FUNCTIONS_H
class functions
{
public:
functions();
void setCont();
};
#endif // FUNCTIONS_H@
functions.cpp:
@#include "functions.h"
#include "mainwindow.h"
#include "ui_mainwindow.h"
functions::functions()
{
}
void functions::setCont()
{
MainWindow m;
m.ui->lineEdit->setText("my text");
}
@
and main.cpp:
@#include <QtGui/QApplication>
#include "mainwindow.h"
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
MainWindow w;
w.show();
return a.exec();
}
@
Need your help please.
Thanks a lot in advance.
Declaring things public is not the way to solve these issues. You're breaking encapsulation, and that will come back to haunt you. It means that you are producing dependencies between components of your application that should be independent, ultimately leading to spaghetti code.
Why does another class need direct access to the UI components of the mainwindow?.
How come no one is recommending "signals and slots":? :)
Use signals and slots instead of breaking your encapsulation, just add slots for the functionality you want accessible through your main window, and connect those to the signals of the classes that need to use that functionality. This way your forms don't have to know and be exposed naked to each other, just the slot interface of your mainwindow.
[quote author="ddriver" date="1330360312"]How come no one is recommending "signals and slots":? :) .[/quote]
Because it is important to first understand the actual problem, before suggesting a suggestion. As the problem is not clear yet, it is a bit early to suggest that signals & slots can be a fix for it.
[quote author="andry_gasy" date="1330360010".[/quote]
Well, the point is, exposing the inner workings of a class to other classes does not make your applications simpler. It will make them more complicated in the end. Think of C++ classes as relatively independent parts of a complex organization. They provide a service to other classes. It is very, very important to be clear about what service that is exactly, but not to bother the users of the class with details on how it performs that service. That is: you have to design an API for each of your classes that gives a clear image of what the class can do, but that not exposes how it does that.
What you are suggesting to do, is to expose the guts of your mainwindow class to everyone. That goes directly against the idea above. Instead, try to think of what service your mainwindow class should provide to the rest of your application, and create and use that. That will make sure that if you later on decide to reorganize your mainwindow, the rest of your code will still work as long as you keep the API on the surface of your mainwindow class constant.
BTW: this is basic OOP stuff, nothing Qt specific.
Thank you all for your reply.
I went through the link ddriver was suggesting and was thinking about all Andre's lecture :)
Well, for a C++builder user like me, it seems quite complicated. I totally agree with Andre but for me gui classes must be some kind of "public" classes for other classes in the project. All other classes must have access to ui components because they have to "show" their results to the user. For small project (few visual components), signals and slots are still ok, but for many forms with many widgets in them, it gets messy.
As i said, I will have many forms with many widgets. I put all the calculation stuff in separate files. According to results of these calculations, tasks as following will be done on the gui: changing contents of text/line edits, labels, adding rows to tables, updating tables/combo boxes contents,... So again I think to allow these calculation files to have direct access to ui components would be the easiest way.
My question is then still the same: is there a way to make thing like
@form->ui->lineEdit->setText("some text");
@
from a file outside the form?
Thanks a lot for trying to help. I really appreciate.
Last attempt by me: I would really, really not allow direct access to the widgets by classes that don't own them. Your calculator classes should not need to know how their results are used. Instead, they could just announce their results via a signal, and you can connect those signals to the relevant slots to set the values on the widgets. Perhaps you can even make that connection to the widgets directly, if the mainWindow has access to the calculator instances.
GUI classes do not form the exception for having to provide a good, clear API.
Really sorry for the very late reply as I was completely absorbed by the job.
Thanks again for all your help. I totally agree with Andre for "his" approach. I've had time to learn about signal and slot. I will do it the "right" way when time allows me but for the moment, I just used the "dirty" way: setting the Ui::Mainwindow as public and it's working well for my small project.
All the best!
Sorry for replying the very old thread, just adding the information missing for Qt newbies like me was some time ago, who came down here in hunting for "proper" fast access to form elements.
The cannonical form for "fast, just clean" access to Qt form elements "out of box" can be:
(It took looong time to me to elaborate this "simple" line of code. IMHO it should be printed ready somewhere up in the Qt intro examples.)
w.findChild<QTextBrowser*>("FormElement")->setText("TEST");
where
- w is the MainWindow object
- QTextBrowser is type of the element you want to access
- "FormElement" is the name of the element you want to access
(You can omitt it from parameter list if there is only one element of the given type. findChild will return it by default.)
- setText is the method you want to call. | https://forum.qt.io/topic/14397/solved-accessing-ui-components-from-another-file | CC-MAIN-2018-30 | refinedweb | 1,530 | 65.83 |
by typing appwiz.cpl on a command prompt and then selecting "Turn features on or off " from the left side
From the features select World Wide Web Services
and make sure that you select the following
- REQUIRED: CGI. This is the same as the FastCGI support
- REQUIRED: IIS Management Console
- OPTIONAL: Custom HTTP Features: Static Content, etc
[We will use this to check if IIS is installed correctly]
- OPTIONAL : Other health and diagnostics info as
Shows in the picture below
Click OK and let the Windows install IIS and FastCGI for you.
Step 2: VERIFY THAT IIS IS WORKING
Once installed you can verify that the IIS is installed by going to the
>>IMAGE
why
CAUTION:.
mapped to the physical directory C:\Inetpub\RoRIIS7
Please note that the HostName we are using RoRIIS7
AND that I am using port 8080
The website should start.
To verify that the website is working, copy the IISSTART.HTM and the
welcome.png files into the RoRIIS7 directory.
Then navigate to
You should see a page similar to step 2.
NOTE: When you generate the Rails app, make sure that
you use -D option. -D option generated the dispatchers
for CGI and FastCGI
Step 11 Generate a RAILS APP
Modify the app\controllers\test_controller.rb like below
This will enable it to display some test when we navigate to this URL
class TestController < ApplicationController
def index
render :text=>"The index action"
end
def about
render :text=>"The about action"
end
end
Step 12 Hook up IIS to Ruby
So far, we have IIS Setup and a Ruby App Setup
We
the FCGI. The FCGI handling is provided by the Rails
Dispatcher.
Once you add the mapping go to
C:\windows\system32\inetsrv\config and open
applicationhost.config
In this config file you should see a section for fast cgi
<fastCgi>
<application fullPath="C:\ruby\bin\ruby.exe" arguments="C:\inetpub\RoRIIS7\MyApp\public\dispatch.fcgi" />
</fastCgi>
and another section for
the
possibly fine tune this later if you needed to.
Do an IISRESET and go visit
OOPS!
this is the command line config tool
Download SQLITEDLL-3.6.15 ZIP and place the dll in the system32 directory.
This is the engine for SQLITE.
Now we need to install the GEM has the
tip on getting this done
gem install --version 1.2.3 sqlite3-ruby
Step 14 COMPLETE!!!
Now you navigate to
and you should see
There you go ladies and gentlemen, I give you Ruby On Rails with IIS7
Hi,
I was wondering if there’s any reason why this would not work on Server 2008? I followed all the steps to a T, but can’t get past the 500.0 error message. I’ve double and triple checked all the permissions, but no change. Any help, or troubleshooting steps you may be able to provide would help.
thanks,
-Steve H.
If you’re wanting an setup on Windows for development, check out BitNami stacks – they’ve got a Ruby one – – and a JRuby one –
I’ve not had any issues with it yet. But I’m not sure if it’s suitable for production environments or not.
Daf
>>> Thanks Daf. I do have the dev environment. I am looking for
deployment via IIS
I had to add the IIS_USRS group to the RoRIIS7 dir and give full control to get the 500 error to go away.
I also had to uncomment out this line in the environment.rb:
<pre>config.gem "sqlite3-ruby", :lib => "sqlite3"</pre>
Hi,
I’m using server 2008, and have also followed the instructions to the letter.
I am also stuck on the http 500 error.
I have set the netowrk permissions, and also (as a test) allowed the "everyone" group full access to c:ruby and c:inetpubroriis7. (this was just a test to see if it was permissions related – no joy).
Do you have any hints as to how I can debug this?
Ta’
Frank
I am also stuck on the http 500 error using Server 2008 x64, has anybody solved this ?
I am installing redMine (RoR based ticketing system) with this guide and it works great.
however, there is still a small problem with this configuration. In redmine , if there is a new row of data created (regardless is new issue , new follow-up , new tracker or etc) .
After record create, redMine will redirect to show the record. It will show the 500 error at first. If I refresh it , it will load the page correctly. It seems like IIS could not find the new record when it is created , but later it will work. I suspect is the caching problem of dispatch.fcgi or URL Rewrite or Module Handler setting .
Anybody encounter this problem , any solution ?
IIS7 + RoR 2.2.2 (RedMine) + URL Rewrite 1.1
Hi
I’m using server 2008 and your simple example is running fine. The problem comes when expanding the site to handle images and css. None of the files will be forwarded. Rails log errors like "ActionController::RoutingError (No route matches "/stylesheets/main.css" with {:method=>:get}):
public/dispatch.fcgi:24"
You build your post on Ruslan’s post about URL rewriter.
but it’s not used in your example?
Is Ruslan’s proposed solution for displaying static files the way to go to get it working?
To get it working with regular rails view helpers like <%= stylesheet_link_tag ‘main’ %>, <%= javascript_include_tag :defaults %>
and normal tags like <img src="images/pict.gif">
that all works well running the application on webrick.
I added the URL revrite feature to my IIS7 site, imported the rules from Ruslan’s solution, and changed the C:windowssystem32inetsrvconfig and open
applicationhost.config handler section path"*" to path="dispatch.fcgi"
Finally I added virtual directories to my site having alias images,stylesheets and javascripts.
I’ve followed all the instructions and am on Windows 2008. Has anyone solved this error:
HTTP Error 500.0 – Internal Server Error
c:rubybinruby.exe – The FastCGI process exited unexpectedly
Be sure you give the following permissions:
<Ruby> NETWORK SERVICE(Read and Execute)
<Ruby App> NETWORK SERVICE(Read and Execute)
<Ruby App/log> NETWORK SERVICE(Full Control)
<Ruby App/temp> NETWORK SERVICE(Full Control)
One last note: Account is "NETWORK SERVICE", not "NETWORK"
I have the same problem with c:rubybinruby.exe – The FastCGI process exited unexpectedly
I set permission both to ruby and my ruby site folders. =(((
Me Too. Premissions set and still getting error. I even tried using the latest FastCGI Update as well and it didn’t work either.
Yep same problem here Win Server 2008 x64, permissions all triple checked, and still get the blasted 500 error. Anyone out there solved this on x64?
Wow…should have paid closer attention to the rest of the comments….
September 06, 2009 11:35 AM by Steve H. posted the correct answer to the 500 error. Adding permissions for IIS_IUSR group to appfolder and ruby folder cleared this error right up…good job Steve!
Ok, so I can get the app running on Win64 – 2008, but if I navigate to app/login and enter Uname and PWD, I get another Internal 500 error and the app log says this:
Log Name: Application
Source: Application Error
Date: 4/12/2010 11:20:52 AM
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: WMSTUTGIS02.WILLIAMS.com
Description:
Faulting application name: ruby.exe, version: 1.8.6.0, time stamp: 0x48a0d73f
Faulting module name: msvcrt-ruby18.dll, version: 1.8.6.0, time stamp: 0x48a0d73e
Exception code: 0x40000015
Fault offset: 0x000267f4
Faulting process id: 0xf80
Faulting application start time: 0x01cada5c14ce5fac
Faulting application path: c:Rubybinruby.exe
Faulting module path: c:Rubybinmsvcrt-ruby18.dll
Report Id: 5cb4e26f-464f-11df-a76b-001a64635a3a
Event Xml:
<Event xmlns="">
<System>
<Provider Name="Application Error" />
<EventID Qualifiers="0">1000</EventID>
<Level>2</Level>
<Task>100</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2010-04-12T16:20:52.000000000Z" />
<EventRecordID>959</EventRecordID>
<Channel>Application</Channel>
<Computer>WMSTUTGIS02.WILLIAMS.com</Computer>
<Security />
</System>
<EventData>
<Data>ruby.exe</Data>
<Data>1.8.6.0</Data>
<Data>48a0d73f</Data>
<Data>msvcrt-ruby18.dll</Data>
<Data>1.8.6.0</Data>
<Data>48a0d73e</Data>
<Data>40000015</Data>
<Data>000267f4</Data>
<Data>f80</Data>
<Data>01cada5c14ce5fac</Data>
<Data>c:Rubybinruby.exe</Data>
<Data>c:Rubybinmsvcrt-ruby18.dll</Data>
<Data>5cb4e26f-464f-11df-a76b-001a64635a3a</Data>
</EventData>
</Event>
Any ideas?
I'm stack on step 14. Installing SQLite doesn't help, the window with the "We're sorry.." still remain. But fixing permission for IIS_USRS group help!
blog.anlek.com/…/installing-updating-sqlite3-on-windows
link dows not work and i am stuck on step 14 … so either I havent installed sqlite properly or something else but I am still getting the "we're sorry…" message
Hi,
I am able to get a basic test-app running on Windows 7 with IIS 7 and Ruby 2.3.x.
However, I keep getting a 500.0 error as soon as I try to set up my actual application. This app has many plugins and gems and I am beginning to think that this may be the problem.
I have checked permissions a 100 times and I currently allow full control to both NETWORK SERVICE as well as IIS_IUSRS.
I am at the end of my rope.
Great thanks to you for this tutorial
Server Error in Application "RUBY"
Internet Information Services 7.5
Error Summary
HTTP Error 500.0 – Internal Server Error
<handler> scriptProcessor could not be found in <fastCGI> application configuration
Detailed Error Information
Module FastCgiModule
Notification ExecuteRequestHandler
Handler RubyFastCGI
Error Code 0x80070585
Ok, here is the thing. If I remove a parameter to Ruby interpreter, so instead of this:
handlers>
<add name="RubyFastCGI" path="*" verb="*" modules="FastCgiModule" scriptProcessor="d:ruby192binruby.exe|C:inetpubRubymyApppublic
dispatch.fcgi" resourceType="Unspecified"
requireAccess="Script" />
</handlers>
I would have:
handlers>
<add name="RubyFastCGI" path="*" verb="*" modules="FastCgiModule" scriptProcessor="d:ruby192binruby.exe" resourceType="Unspecified"
requireAccess="Script" />
</handlers>
then I am getting this:
Server Error in Application "RUBY"
Internet Information Services 7.5
Error Summary
HTTP Error 500.0 – Internal Server Error
d:Ruby192binruby.exe – The FastCGI process exited unexpectedly
Detailed Error Information
Module FastCgiModule
Notification ExecuteRequestHandler
Handler RubyFastCGI
Error Code 0x00000001
I have checked the permissions 1000000000000000000000000000 times.
But the real question is whether I need to use the parameter. Do I?
Hi,
I recommend to take a look at which has Ruby on Rails support on IIS7 and IIS Express.
Please try Helicon Zoo to easily configure Ruby on Rails for IIS 7 –
just so others are aware. in Step 12: Hook up IIS to Ruby you can find the "Add Module Mapping" link from the Actions section of the "Handler Mappings" icon on the home screen.
Hello All,
I had installed Ruby on Rails with still its not working
Best Regards,
Jalpesh
Hello.
I have this error.
HTTP Error 500.0 – Internal Server Error C:Ruby187binruby.exe – The FastCGI process exited unexpectedly
Thanks.
There is a way to run RoR on IIS 8 via HttpPlatformHandler – described by Scott Hanselman in his post "Announcing: Running Ruby on Rails on IIS8 (or anything else, really) with the new HttpPlatformHandler"…/AnnouncingRunningRubyOnRailsOnIIS8OrAnythingElseReallyWithTheNewHttpPlatformHandler.aspx
If I didn't install rails with the -d and have an existing app, is there a way to install the dispatchers after the fact?
thanks
chris | https://blogs.msdn.microsoft.com/dgorti/2009/06/17/ruby-on-rails-with-iis-7-reloaded/ | CC-MAIN-2019-43 | refinedweb | 1,893 | 57.37 |
How to pretty print JSON file
Pretty printing a JSON file in python is easy. Python provides a module called JSON to deal with JSON files. This module provides a lot of useful method including a method called dumps to pretty print JSON content.
In this post, I will show you how to pretty print JSON data in python with examples.
Example to pretty print :
Let’s consider the below example :
import json data = '[{"name" : "Alex", "age" : 19},{"name" : "Bob", "age" : 18},{"name" : "Charlie", "age" : 21}]' json_obj = json.loads(data) pretty_obj = json.dumps(json_obj) print(pretty_obj)
Here, data is the given JSON. json.loads converts the JSON data to a JSON object. We are using json.dumps to convert that JSON object. If you execute this program, it will give one output as like below :
Not a pretty print ! Because we need to specify the indent level in the dumps method :
pretty_obj = json.dumps(json_obj, indent=4)
Not it will give the required result :
Read JSON file and pretty print data :
Create one new file example.json and put the below JSON data :
[{"name" : "Alex", "age" : 19},{"name" : "Bob", "age" : 18},{"name" : "Charlie", "age" : 21}]'
In the same folder, create one python file to read the contents from this file :
import json with open('example.json', 'r') as example_file: json_obj = json.load(example_file) pretty_obj = json.dumps(json_obj, indent=4) print(pretty_obj)
Note that we are using load(), not loads() to read the content from a file. It will pretty print the file data.
Write pretty print JSON data to a file :
We can also use the above method to pretty print data to a separate file.
import json data = '[{"name" : "Alex", "age" : 19},{"name" : "Bob", "age" : 18},{"name" : "Charlie", "age" : 21}]' example_file = open('example.json', 'w'); json_obj = json.loads(data) pretty_obj = json.dumps(json_obj, indent=4) example_file.write(pretty_obj) example_file.close()
If you open the example.json file, it will looks as like below : | https://www.codevscolor.com/python-pretty-print-json | CC-MAIN-2020-40 | refinedweb | 324 | 68.26 |
NAME
keyboard - how to type characters
DESCRIPTION
Keyboards are idiosyncratic. It should be obvious how to type ordinary ASCII characters, backspace, tab, escape, and newline. In general, the key labeled Return or Enter generates a newline (0x0A); if there is a key labeled Line Feed, it generates a carriage return (0x0D); CRLFs are not used. All control characters are typed in the usual way; in particular, control-J is a line feed and control-M a carriage return. The delete character (0x7F) is usually generated by a key labeled Del, Delete, or Int. The view character (0x80), sam(1), and 9term(1), causes windows to scroll forward; the back-view character (0x81), causes windows to scroll backward. The view character is generated by the ← and ↓ keys; the back-view character is generated by the → and ⇑ keys. Internally, characters are represented as runes (see utf(5g)). Any 16-bit rune can be typed as a multi-character sequence. The compose key must be pressed while the first character of the sequence is typed; on most terminals, the compose key is labeled Alt. While pressing the compose key, type a capital and exactly four hexadecimal characters (digits and to to enter the rune with the value represented by the typed number. There are two-character shorthands for some characters. The compose key must be pressed while typing the first character of the pair. The following sequences generate the desired rune: !! ¢ c$ l$ g$ y$ || § SS "" © cO « sa << ¬ no [0xad] -- ® rO __ ° de ± +- s2 s3 ´ ’’ [0xb5] mi ⊥ pg .. ,, s1 s0 >> ¼ 14 ½ 12 ¾ 34 ?? × mu /O Ù ‘U Ú ’U Û ^U "U Ý ’Y |P ß ss ÷ -: /o ù ‘u ú ’u û ^u "u ý ’y |p "y α *a β *b γ *g δ *d ε *e ζ *z η *y θ *h ι *i κ *k λ *l μ *m ν *n ξ *c ο *o π *p ρ *r ς ts σ *s τ *t υ *u φ *f χ *x ψ *q ω *w Α *A Β *B Γ *G Δ *D Ε *E Ζ *Z Η *Y Θ *H Ι *I Κ *K Λ *L Μ *M Ν *N Ξ *C Ο *O Π *P Ρ *R Σ *S Τ *T Υ *U Φ *F Χ *X Ψ *Q Ω *W ← <- ⇑ ua → -> ↓ da ↔ ab ∀ fa ∃ te ∂ pd ∅ es [0x2206] De ∇ gr !m [0x220d] st ∗ ** · bu √ sr ∝ pt ∞ if an ─ l& l| ∩ ca ∪ cu ∫ is ∴ tf [0x2243] ~= ≈ cg [0x2248] ~~ ≠ != ≡ == ≤ <= ≥ >= ⊂ sb ⊃ sp !b ⊆ ib ⊇ ip O+ [0x2296] O- Ox [0x22a2] tu [0x22a8] Tu [0x22c4] lz el Note the difference between ß (ss) and [0xb5] (micron) and the Greek β and μ. As well, white and black chess pieces may be escaped using the sequence color (w or b) followed by piece (k for king, q for queen, r for rook, n for knight, b for bishop, or p for pawn).
SEE ALSO
ascii(7), sam(1), 9term(1), graphics(3g), utf(5g) KEYBOARD(5g) | http://manpages.ubuntu.com/manpages/hardy/en/man5/keyboard.5g.html | CC-MAIN-2014-41 | refinedweb | 498 | 73.51 |
Linux namespace relationships library
Project description
Linux Kernel Namespace Relations
NOTE: Python 3.6+ supported only
This Python 3 package allows discovering the following Linux Kernel
namespace relationships and properties, without having to delve into
ioctl() hell:
- the owning user namespace of another Linux kernel namespace.
- the parent namespace of either a user or a PID namespace.
- type of a Linux kernel namespace: user, PID, network, ...
- owner user ID of a user namespace.
See also ioctl() operations for Linux namespaces for more background information of the namespace operations exposed by this Python library.
Installation
$ pip3 install linuxns-rel
API Documentation
Please head over to our linuxns_rel API documentation on GitHub Pages.
CLI Examples
List User Namespaces
$ lsuserns
may yield something like this, a pretty hierarchy of Linux kernel user namespaces:
user:[4026531837] owner root (0) ├── user:[4026532696] owner foobar (1000) ├── user:[4026532638] owner foobar (1000) ├── user:[4026532582] owner foobar (1000) │ └── user:[4026532639] owner foobar (1000) │ └── user:[4026532640] owner foobar (1000) │ └── user:[4026532641] owner foobar (1000) ├── user:[4026532466] owner foobar (1000) │ └── user:[4026532464] owner foobar (1000) ├── user:[4026532523] owner foobar (1000) └── user:[4026532583] owner foobar (1000)
If you have either Chromium or/and Firefox running, then these will add some user namespaces in order to sandbox their inner workings. And to add in some more hierarchical user namespaces, in another terminal session simply issue the following command:
$ unshare -Ur unshare -Ur unshare -Ur unshare -Ur
Debian users may need to
sudo because their distro's default
configuration prohibits ordinary users to create new user namespaces.
List PID Namespaces
$ lspidns
shows the PID namespace hierarchy, such as:
pid:[4026531836] owner user:[4026531837] root (0) └── pid:[4026532467] owner user:[4026532466] foobar (1000) ├── pid:[4026532465] owner user:[4026532464] foobar (1000) ├── pid:[4026532526] owner user:[4026532464] foobar (1000) └── pid:[4026532581] owner user:[4026532464] foobar (1000)
Don't worry that the PID namespace hierarchy doesn't match the user
namespace hierarchy. That's perfectly fine, depending on which programs
run. In our example, we didn't create new PID namespaces when using
unshare, so we see only additional PID namespaces created by
Chromium (Firefox doesn't create them though).
Potentially FAQs
Q: Why do
get_userns()and
get_parentns()return file objects (
TextIO) instead of filesystem paths?
A: Because that's what the Linux namespace-related
ioctl()functions are giving us: open file descriptors referencing namespaces in the special
nsfsnamespace filesystem. There are no paths associated with them.
Q: What argument types do
get_nstype(),
get_userns(),
get_parentns(), and
get_owner_uid()expect?
A: Choose your weapon:
- a filesystem path (name), such as
/proc/self/ns/user,
- an open file object (
TextIO), such as returned by
open(),
- an open file descriptor, such as returned by
fileno()methods.
Q: Why does
get_parentns()throw an PermissionError?
A: There are multiple causes:
- you didn't specify a PID or user namespace,
- the parent namespace either doesn't exist,
- or the parent namespace is inaccessible to you,
- oh, you really have no access to the namespace reference.
Q: Why does
get_userns()throw an PermissionError?
A: You don't have access to the owning user namespace.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/linuxns-rel/1.0.0/ | CC-MAIN-2022-33 | refinedweb | 532 | 50.06 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.