Fedora 8 has been released. It sports a new look and feel, a codec installation program, the first signs of the GNOME online desktop, various security improvements, support for Compiz and Compiz-Fusion, Java support via Iced-Tea, and much more. Get it from the download page. Update: A couple of articles about the release: 1, 2, 3. Update II: One more: 4.
Fedora 8 Released
Thom Holwerda
143 Comments
2007-11-08 8:55 pmdylansmrjones
pulseaudio isn’t unique for Fedora. It’s everywhere you know. It would be more correct to say: “Finally, even Fedora has it now”.
It’s great to see people show enthusiasm about their distro of choice, but don’t hype what the rest of us has had for a long time
-
2007-11-08 9:16 pmdylansmrjones
1) I’m not taking a jab at anyone.
2) “Enabled by default” is irrelevant and has nothing to do with this. The only thing that matters is whether or not it is available. Besides that, not all distributions have a default configuration
3) I read the news carefully, and it is news. But dissing other distributions despite them having the same packages installed (by default or as dependencies for other packages) for longer than Fedora simply isn’t acceptable.
4) I have nothing against Fedora users, but I do have something against people dissing other people. So, don’t do that.
I’ve been a Fedora user for long long time too (and Redhat Linux 6 thru 9 before FC2). It’s also a long long time since I jumped to another (and more bleeding edge as well as stable) distribution.
All I wrote was that people shouldn’t hype what the rest of us already have. Nothing wrong about that wish.
2007-11-08 9:27 pmRahul
“Enabled by default” is irrelevant and has nothing to do with this. The only thing that matters is whether or not it is available. Besides that, not all distributions have a default configuration
Defaults do matter for the large majority of users; they make a big difference in the user experience and are in fact what makes distributions different to a large extent. Also, the PulseAudio maintainer is a Fedora developer and has done months of work getting it to a state where it works robustly enough for many use cases. As an example, a plugin for Flash was written during the Fedora 8 development time, in response to requests from users testing and participating in the discussions, which makes Flash and PulseAudio work well together. I doubt that any other distribution has it yet. So yes, defaults matter, and what is in the repository and how it is configured matters too.
2007-11-09 6:43 aml3v1
enabled by default.
So from now on we’ll judge a distro based on what it has enabled by default ? Yeah, I’m going too far, still, I’m not willing to do that. Availability counts more than enable-ity. Giving the credit for pulseaudio development is one thing, saying a distro is better because it has it by default, is another. For the average users, it won’t matter if it’s there or not, since they don’t know what it is anyway. For us here, the news is that it’s in a better state of development, and that it’s available, the rest well, chitchat.
2007-11-08 11:02 pmapoclypse
Yep. The first packages I saw of Pulse were in Ubuntu. Up until recently what was missing was the flash support and xine support. Luckily this has been resolved and it is pretty easy to find and configure the correct packages. I'm hoping that Pulse becomes the de facto sound server since it has many promising features coming down the line.
2007-11-09 5:34 pmsiki_miki
The biggest problem seems to be getting some proprietary apps to work on Pulse. In some cases it's possible to use OSS/alsa wrappers, but a few important apps (Realplayer, some Skype versions..) still refuse to work (pasuspender is a last resort for those).
So it’s good that RH is pushing this as default. It will force them to either support PA or make their applications work with the emulation.
I have been using it happily since Ubuntu 7.04 (0.9.6). ESD emulation refuses to work, but otherwise I enjoy the added functionality (still with minor annoyances like latency while changing volume, etc.).
Funny, it is already 8, yet 6 & 7 seem to have passed me by without notice. I easily switched to Opensuse after experiencing too many issues that should never have existed in the first place, most of which Suse did not have.
But, correct me if I am wrong on this, it seems the Fedora community has done a lot of work during this time to really put out a fine distro. Reading over the release notes it seems that Fedora is once again back on track, which is nice because I have always had a preference (probably due to having been a longtime Red Hat user). I have to say that this new release seems much more impressive than any of the last 3-4 versions.
2007-11-08 6:20 pmanyweb
what people have probably missed in the last few releases is the huge amount of work on the ‘infrastructure’ behind fedora in order to get it to where it is today,
it’s clear now that that work is now showing the fruits of fruition and everything is coming together
* custom spins
* fedora 8 on a usb key
* pulseaudio
* codecbuddy
* yum improvements (yes it’s fast)
* packagemanagement improvements (change repos and more)
* gui for firewall
* online desktop
* the whole fedoraproject.org website and associated projects
i think F9 (yes I’m talking about that already) will be awesome, as it’ll most likely have compiz enabled by default plus a whole lot more, and no doubt it’ll have ironed out any bugs still present or yet to be found in F8
cheers
anyweb
-
2007-11-08 8:03 pmSlackerJack
It’s more about getting OpenOffice and other linux tech to work better with Microsoft’s products. Microsoft can’t possibility just make them working together without some gain or license.
2007-11-08 11:57 pmMorgan
The same way other great distros do it: Hard work and dedication to the goals of the project. Ubuntu is as far along in some respects, and further in a few more, than Fedora and without any help from Redmond. Slackware, while behind on features, is hands down the most rock-solid, stable major distro out there, again without Microsoft’s help. FreeBSD, while not a Linux distro, is open source UNIX and is the most popular internet server OS, without a shred of Microsoft input.
Why on earth would any Linux distro or other alternative OS really need help from them to do good things?
2007-11-09 12:39 amSEJeff
Dude, honestly, what rock did you crawl out from under? If you want "rock solid Linux" use RHEL. It has a multitude more QA and users finding and ironing out bugs. Also, Pat makes Slackware a (mostly) 1-man job. You've never run mission critical servers with uptime requirements allowing less than 20 hours of scheduled downtime / year, have you? It shows.
FreeBSD might be the most popular server os if you work at Yahoo, but the rest of the internet disagrees with you. If you want to refute what I’m saying show me a reputable source with some numbers to back it up. You won’t find them.
Edited 2007-11-09 00:40
2007-11-09 12:48 amMorgan
I could ask the same of you: Show me the numbers. But that wasn’t what my post was about. My point was that none of the distros I listed needed Microsoft’s help to be as good as they are.
In response to your rant, maybe RHEL is more secure and stable than Slackware but my personal experience shows otherwise. It’s an opinion based on my own experience and should be taken as such. You are probably right in that RHEL is more stable and secure but whatever.
As for internet servers, here’s a few tidbits:
From regarding FreeBSD:
*.
I could go on, but you get the picture.
2007-11-09 1:33 amsbergman27
“””.
“””
That bit about MS using FreeBSD instead of Windows on Hotmail is absolutely *ancient*.
I believe, but am not certain, that the uptime counter on 32 bit Linux still rolls over after 2^32 jiffies. (Not sure how, if at all, the new tickless operation might affect that.) So the Netcraft statistic might be pretty meaningless.
“””
I could go on, but you get the picture.
“””
Yes. But perhaps not the one you intended to paint. 😉
FreeBSD is a solid enough player that I’m sure there are better examples to be had, though.
Edited 2007-11-09 01:39
2007-11-09 9:24 pmCrLf
“I believe, but am not certain, that the uptime counter on 32 bit Linux still rolls over after 2^32 jiffies.”
It does, but that isn’t what makes Netcraft uptime statistics meaningless. They used to (correct me if I’m wrong) calculate uptime from TCP signatures (like nmap does), which stopped being reliable a long time ago for Linux, but also for other OSes (like FreeBSD). For this reason, netcraft stopped publishing uptime top lists.
2007-11-09 11:27 pmsbergman27
Last I heard, they had switched to using a (then new) feature of Apache which reports the uptime… but is subject to the rollover effect.
2007-11-09 5:28 pmSEJeff
Microsoft stopped using FreeBSD years ago, man. Remember that they bought Hotmail and, up until then, Hotmail was an all-FreeBSD shop. It isn't that I'm knocking it; ports is pretty nice. It's just that it is difficult to manage source-based operating systems en masse (being > 1000) easily while they are being used in production.
It looks like Netcraft stopped doing their “top operating system” charts awhile back. Here is the most recent one I can find. It certainly isn’t like Linux has lost a ton of marketshare since 2005. If anything it has grown quite a bit since then.…
And like I stated earlier, Yahoo is known to be a huge freebsd user. You were totally right about none of those distros needing help from Microsoft. I certainly won’t complain that OO.o can open up and use Excel macros thanks to Novell though. I was playing devil’s advocate and being a bit more of an ass than was called for.
2007-11-09 6:25 pmstartxjeff
Now now children….
Please keep your hands in the windows, and stop flipping off the nice policeman on the motorcycle.
2007-11-08 8:36 pm SEJeff
Now upstream gnome software like totem is taking advantage of that API if it exists to do this on any distro. Very cool stuff indeed.
2007-11-08 9:06 pmRahul
Not quite true. Ubuntu used a separate shared library called libgimme, and Bastien Nocera, a Red Hat developer who is also the Totem maintainer, pushed the code upstream, where it was made part of the upstream project itself instead of a separate shared library. Then he used that API and worked with other developers to enable codeina, aka codec buddy. Refer for more information on this.
Edited 2007-11-08 21:07
2007-11-08 9:23 pmdylansmrjones
That’s what SEJeff wrote.. *sigh*
Read what people write before you take a jab at them.
-
2007-11-09 12:11 amSEJeff
I’m confused what exactly you disagree with me on since you agree with exactly what I said.
The idea came from a discussion on the Ubuntu development mailing list, to which I subscribe. They punted around ideas with gstreamer upstream and gstreamer upstream wrote in the hooks. libgimme won't work without the hooks in gstreamer, fyi.
This was a huge deal during the feisty (Ubuntu 7.04) development period because the Canonical developers weren’t sure if the gstreamer developers would have the API complete in time for them to sync the changes and ship it with Ubuntu before the freeze / release of Feisty. libgimme-codec was a stopgap until the API was feature complete.
Then this was pushed upstream and now other distros can use it. Kind of like how Ubuntu uses Fedora's system-config-printer and they sent enough bugfixing patches that they have commit access to its VCS repository.
So what was “not quite true” about this?
2007-11-10 12:20 amLengsel
I’ve skipped the last few Fedora releases, I think openSUSE does a better job, but I do plan on trying Fedora 9. I think F9 really has the potential to blow up! What I mean is Fedora could really get huge, depending on how the implement KDE4. That’s why I think F9 could really get big, is if KDE4 is really slick and smooth in Fedora, combine that with just having Fedora now, no Core and Extras, I think it really might prove to be a sweet release.
…must stop urge to distro hop…
It’s too much, I have to download it now!
-
-
2007-11-08 11:49 pmthegnome87
“heh, i’m curious, what are you hopping from ?”
I was considering hopping from Ubuntu. Though Mandriva 2008 looks good too…mmm so many polished choices…
-
Never have I seen a binary distribution so cutting edge; very few packages are not their latest versions. It also makes me wonder which is better: a release every 6 months based on Gnome, like Ubuntu, or releasing ad hoc. It definitely has worked to Fedora's advantage in this instance. It does make me wonder if a release based on every other kernel would not be a better option.
2007-11-08 7:16 pmsbergman27
Wow. We agree. 😉
About the cutting edge part, anyway. That's what makes Fedora fun. And it does not stop with the included packages, either. The cutting edge regression testing and quality control means that it's actually pretty stable, considering the amount of churn in the packages. It is not only up to date when you install it. It tends to stay that way. If an upstream developer releases a 1.02 release to supersede 1.01, Fedora passes that on to the user. WRT the kernel, have a look at the updates repos for previous releases. You will find that as new kernels are released, and after a period of testing, they update the kernel. For example, FC6 started with kernel 2.6.18 but currently has 2.6.22.
So, you see, cyclops, your wish to have the latest kernel is already granted. 🙂
And, IMO, it really makes more sense to plan around the release of the default DE. The DE is really more significant to most end users, and requires more attention by the distro maintainer, than does the kernel. Assuming desktop use, anyway.
I know we don’t always see eye to eye. But since we are both impressed with Fedora, I do hope you find this helpful and interesting.
-Steve
Edited 2007-11-08 19:17
-
2007-11-10 6:35 pmPlatformAgnostic
Heh… looks like you really did him the “smackdown.” Don’t worry, though… Sbergman isn’t a Vista User like I am.
2007-11-11 2:17 amsbergman27
“””
Don’t worry, though… Sbergman isn’t a Vista User like I am.
“””
I avoid it like the plague. 😉
But I thought “creepy individual” was a bit half-hearted and unimaginative. A far cry from the old days when he used to call me a “slimy little toady”.[1]
Ah, memories! 🙂
Anyway, I have about 5 internal and customer servers I'll be upgrading to F8. My own desktop first, of course. This does look like a nice release.
[1]
Edited 2007-11-11 02:26
Looks like an excellent feature set. I hope they fixed the compiz screen tearing, because that really annoyed me with 7 and it makes scrolling sloppy. It's good to see the security features being kept as a priority; nothing like having the security in place ready for the big-time market.
2007-11-08 7:11 pm
2007-11-08 7:12 pmshotsman
I did an upgrade from a vanilla 7 to 7.92 and then applied the patches to that to get to what is, to all intents and purposes, Fedora 8.
There were a few glitches with 7.92 but, according to Bugzilla, these have been fixed in the released package set for Fedora 8, and so far I have to say that it was an overall pleasurable experience compared to other FC series upgrades (3, 4 & 5).
I have to agree with a previous poster. There has been lots of work under the covers that has improved the overall distro in leaps & bounds. This is especially true with 7 and now 8.
So far, so good.
2007-11-09 8:22 amSoulbender
but I’m trying to get away from that and would like to see how good Fedora is at the distribution upgrade thing.
Make sure you have a LOT of free space:
$ yum upgrade
[snip]
Error Summary
————-
Disk Requirements:
At least 24122MB needed on the / filesystem.
WOW! I know, I only have ~150Mb free on / (after downloading all updated packages) but still…
2007-11-09 10:50 amSoulbender
You also need a lot of RAM, apparently:
Cleanup : pcsc-lite-devel ################### [1337/2012]
error: Couldn’t fork %preun: Cannot allocate memory
Cleanup : gnome-mag ################### [1338/2012]
error: Couldn’t fork %preun: Cannot allocate memory
Cleanup : perl-IO-Socket-INET6 ################### [1339/2012]
error: Couldn’t fork %postun: Cannot allocate memory
Segmentation fault
Nice. Anyone know how to “recover” from this?
2007-11-09 10:59 amRJop
Better place to ask about that…
Edited 2007-11-09 11:02
2007-11-09 11:06 amFinalzone
Just curious, have you tried these methods below before upgrading?
2007-11-10 11:25 amSoulbender
Yes and in the end I decided to just re-install from CD.
Now I know why "yum upgrade" isn't recommended.
2007-11-10 6:37 pmPlatformAgnostic
Something there is pretty sloppy. They go OOM and then segfault? Great testing there!
2007-11-10 8:56 pmRahul
If you OOM, all sorts of weird things happen. That isn't specific to the upgrade process. Moreover, the yum upgrade is still not officially supported. Refer
I think the worst thing about running Linux is the constant need/want to try EVERYTHING! They all make it sound so good!
With RedHat behind Fedora, I am sure the Fedora project will become better than even Ubuntu very soon !!! (maybe in one or two of the next releases). The Gnome version is superior to any distro out there. Let's not forget RedHat was one of the main supporters of Gnome from the early days of its creation.
Come on Fedora, rise from the ashes.
This looks really cool, I’m seriously considering putting this on my laptop. However, I know that the Fedora community takes patents seriously, so I need to know: Will my intel wireless card work in this? How about my broadcom based Bluetooth adapter? Both of these devices work fine in ubuntu, but I’m worried.
-
-
-
-
2007-11-08 7:30 pmanyweb
i think your intel wireless will work just fine,
read this:-
cheers
anyweb
2007-11-09 3:13 amMorgan
I have the infamous Broadcom 4318 card which finally has decent built-in support in Ubuntu 7.10. It still requires that you either already have the firmware as a local file or else that you have a working wired connection before it will fully enable. I’m sure this is purely because of license issues as there is no open-source firmware for this chipset. Still, it’s better than the tedious command-line dance of 7.04 and earlier.
I’m downloading FC8 right now and I look forward to finding out if it is just as easy as Ubuntu to get wireless up on my card. I have a feeling it will go as smoothly if not more so.
2007-11-09 3:40 amsbergman27
“””
I have the infamous Broadcom 4318 card which finally has decent built-in support in Ubuntu 7.10.
“””
Me too. In my laptop. But the support for it in Ubuntu has been good for the last 2 or 3 releases. It is supported by the restricted drivers manager now. But in Edgy and Feisty, installing the firmware was just a matter of telling Synaptic to install it and saying "yes" when it asked if you want to install the firmware. Yes, you do need a working wired connection first. But it's a hell of a lot better than all the hunting around for a working link to the firmware that I used to have to do.
“””
I’m downloading FC8 right now and I look forward to finding out if it is just as easy as Ubuntu to get wireless up on my card.
“””
Been there. Done that. It’s not. I spent an hour trying to get mine going and it didn’t work. I decided to put Ubuntu back on it. But I’m waiting for my bittorrent download of F8 to complete so I can upgrade my desktop box from F7.
2007-11-09 3:47 amMorgan
That’s odd because the fwcutter package in the repositories in 7.04 and previous didn’t work for my card. I resorted to command-line stuff I found on a few forum posts. They all involved manually downloading the .debs from other locations as well as pulling my own firmware from my laptop’s recovery CDs. As of 7.10 though, the restricted drivers manager has a built-in link to a firmware that works, though I’ve found that using my own firmware instead gets me better signal strength for some odd reason. Either way, it’s a few clicks in a GUI compared to a lot of copy-and-paste to a terminal.
For what it’s worth, my laptop is a Compaq Presario V2565US.
2007-11-09 4:00 amsbergman27
V2552us here. Yeah, fwcutter in 7.04 worked like a charm for me. The previous release, as well, I think.
Previous to that, I had gotten so darned frustrated with the 4318 that I ordered in an intel based mini-pci card to replace it. Guess what? The bios checks against a list of supported (HP) cards and won’t boot if it finds an unauthorized (Intel) card instead. And all that time I had mistakenly thought it was my laptop, since I had bought and paid for it and all, and not HP’s.
Edited 2007-11-09 04:02
2007-11-09 7:43 amRJop
>I’m downloading FC8 right now and I look forward to finding out if it is just as easy as
>Ubuntu to get wireless up on my card. I have a feeling it will go as smoothly if not more
>so.
If it depends on proprietary code, it’s “no go” on Fedora.…
Edited 2007-11-09 07:46
2007-11-09 8:40 amIceCubed
The bcm43xx driver is already in vanilla kernel.
Works semi good. There are some problems with the signal strength (10 meters from my router the signal strength drops to 47%, i get 10% at ~25 meters). But at least it works.
You still need to get the proprietary firmware.
Don’t know about fedora, but ubuntu installs the firmware using the ‘restricted drivers’ tool.
Edited 2007-11-09 08:42…
“You can do a text mode installation of the Live images using the liveinst command in the console.”
I imagine this is the installer option for the live CD?
2007-11-08 8:17 pmRahul
That is just the text based installer if you are booting from the live cd into runlevel 3. If you are booting into a desktop environment, just click the “install to hard disk” icon on the desktop. It doesn’t get any easier than that 😉
Here is a screenshot…
That's the Fedora KDE Live image, but the GNOME Live image works the same way too.
In the screenshot, the Install to hard drive icon can be seen clearly.…
Edited 2007-11-08 20:44
The main reason why I use Ubuntu is because it’s really easy to install software using Synaptic.
How does this compare to Fedora? How is the package management in Fedora?
What are the experiences?
2007-11-08 9:56 pmRahul
Considering that Fedora has synaptic too, I wouldn't say there's a difference at all 😉
# yum install synaptic (which uses apt-rpm)
and off you go. Yum, which is the default package manager, and Add/Remove Programs (pirut), which is the default graphical frontend, work well. Users who want more advanced options might take a look at yumex, which is an alternative graphical frontend.
Others prefer Smart package manager which is also available in the repository. Finally the next version of Fedora will have PackageKit which is a distribution neutral graphical frontend. Refer
-
2007-11-08 10:09 pmJeroenverh
Thanks Rahul and Anyweb!
So when you use Pirut/Synaptic/Yumex or in the future PackageKit you automatically are connected to a Fedora repository?
2007-11-08 10:13 pmRahul
Yes that’s the coolest thing. They all connect to the same Fedora repository and use the exact same RPM database. You can even switch between them whenever you want. This is possible because yum, recent versions of apt-rpm and smart all understand the repomd format originally designed for yum and they are rely on RPM libs to do the backend transactions.
Similar for PackageKit, which can work as a graphical frontend on top of yum, apt, smart, etc.
2007-11-08 10:24 pmJeroenverh
That sounds great!
I think it’s time to try it out again!
(The last time I used Fedora/Red Hat was version 6.2)
-
2007-11-08 10:27 pmrahim123
What about availability of up-to-date software? If for example Gimp upgrades to 2.6 two weeks after a Fedora version is released, do they offer backports or anything? What about versions of non-supported, less common software packages? I personally find Ubuntu's backports to be limited and slow to upgrade.
2007-11-08 10:32 pmRahul
Fedora has a stated objective of staying close to upstream releases, which includes not patching unless necessary and updating software quickly in the repository. Fedora includes new features in updates too, in the main updates repository itself, rather than just security or bug fixes, and the Fedora repository has grown nicely over the last few releases.
Check it out. There are other third-party repositories which offer packages that are non-free or patent-encumbered that the official Fedora repository excludes.
I’ve downloaded and burned it etc. but I know that I’m going to eventually regret it. I’m very eager to try it out but the hopping back and forth from openSuse/Kubuntu/Fedora has me really weary. I’d decided to remain with openSuse 10.3 because everything works and there seems little advantage to install a new distro. The struggle to re-import onto my older hardware drives me a little nuts and the wireless thing – with Fedora madwifi/ndiswrapper quickly forced me into dependency hell. Then the insane number of updates eventually gets you in too deep. I was really impressed with Kubuntu 7.04 and was committed to it but when I upgraded to 7.10 (it had to be better right?) it wasn’t good enough, too many things were too much of a struggle to get working again. With openSuse all I had to do was have madWifi saved on a flashdrive and everything else was a breeze. Enough whining eh? I’ll have to commit or get off the pot I guess…..The good part is that we have something new, hopefully better to work with. The bad part is that with diversity we sometimes have less continuity and greater fragmentation.!
I can understand some hardware incompatibilities. I might even understand the touchpad not working without a specific driver, but the keyboard? Unacceptable!
2007-11-09 1:38 amsomebody!
Smarter people simply press power for a few seconds and turn off their notebook. Turn it on again and press eject.
Even if you eject an unresponsive system, what is the benefit here??? You'll turn it off eventually.
Can you explain me why I have to reinstall so many XP instead of Vista if it “just works”?
2007-11-09 3:27 amMorgan
Can you explain me why I have to reinstall so many XP instead of Vista if it “just works”?
Hopefully I’m not opening a can of worms here, but in my experience it has been a combination of what the user needs along with what their computer is capable of. A good example is the $500 HP systems at Wal-mart with VHP preinstalled. Those machines are good enough to run Premium, though without much breathing room beyond that. Light gaming and multimedia are fine, upgrades of RAM and video are required for anything beyond that. A true gamer or video/audio pro would buy better gear initially though, so in my opinion that example system hits the mark just right.
Power users, on the other hand, may find Vista’s flaws much faster than the average user. In such cases, XP may still be the OS of choice if Windows is a requirement. Personally I feel that any modern OS is fine for basic web browsing, chatting, email and light office work, with Linux-based OSes edging out Windows and Mac because of the vast selection of high-quality free software. Most major distros are “ready to roll” with at least one good program for each conceived need already installed by default. OS X would be a close second in this area, with iLife, Safari, Mail and iChat already there. Vista lags behind on this front, though it’s a step in the right direction at least, with better built-in photo and music handling than XP.
-
2007-11-10 3:25 amJon Dough
Can you explain me why I have to reinstall so many XP instead of Vista if it “just works”?
Your comment sounds like someone trying to install Vista on a computer with a pre-existing OS rather than buying a computer with Vista pre-installed. I am sure that if my laptop came with one of the various GNU/Linux distributions pre-installed, it would “just work” as well as Microsoft Vista Home Premium does.
2007-11-09 6:27 am h3rman
Might be just me, but… why do I feel this comment sounds just like a commercial?
-
Up until now, a Linux distro would boast of supporting a particular piece of software, sometimes specialized to the flavor in question, to show how good it is. This is no better than a waste.
A piece of software must boast about itself, and it must be a given that such software can be installed in an OS. Indeed, it must be the responsibility of the software to tell the world it can be installed on an OS. Linux distros, however, tend to (impractically) feature the opposite.
2007-11-11 2:21 amdcwrwrfhndz
Try removing rhgb quiet from the grub entry.
On the grub screen, move to the fedora entry and press 'e', move to the line which ends with rhgb quiet and press 'e' again.
Then remove rhgb quiet and press enter and then ‘b’.
You should now see some more messages.
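(For reference, a typical Fedora grub.conf entry looks roughly like the lines below; the kernel version and root device shown here are only illustrative and will differ on your system. The rhgb quiet to remove sits at the end of the kernel line.)
title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.23.1-42.fc8.img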
Is it hanging with infinite messages that look like this?
init[1] divide trap error rip:…. rsp:…. error 0
Edited 2007-11-11 02:23
-
2007-11-15 12:02 pmdcwrwrfhndz
Do you have FreeBSD partitions on your PC, even if on a different HD than the one on which you installed F8?
(why the hell are people modding down this guy???)
I tried to download the x86_64 version of the ISO image at the university due to their fast connection speed.
After 3 hours I had less than 12% of the download. During the last hour a classmate tried using BitTorrent on another machine and we couldn’t get it to download fast enough to be done by closing time at the computer lab.
Be sure to allow plenty of time for this 3.6+ GB download! (BTW, I plan on using more than one spin so downloading only one of them wouldn’t be an option.)
I just installed Fedora 8. It's pretty nice. You can add and remove repositories via a GUI now. The firewall tool is nice but I think I still like firestarter better.
There is the pulseaudio sound server but I have yet to figure out what the advantage is.
Anyway, as usual I have some configuring and software to install in order to get it the way I like it.
2007-11-09 10:56 amFinalzone
It is now possible to install packages from either CD or DVD. Using Add/Remove Software, make sure to check InstallMedia (it should be more descriptive) and disable the other online repositories. That is one of the most awaited features for Fedora users who don't have a connection or have a slow narrowband network.
I’d like to share the first impressions I got after running the Fedora 8 Gnome live system CD.
Initialization was fine, but I had to adjust the monitor settings a bit (at the monitor) because the upper region of the screen has been placed outside the CRT. The monitor is an Eizo FlexScan F980 21″ tube model.
Personally I did like the “Show details” button when boot stage had reached X level, so you could do some diagnostics at services startup. Good idea.
The mouse pointer, although excellent in shape and color, seemed a bit blurred to me because of the strange shadow beneath it. To me, it is a no-go to make things less recognizable.
Now for internationalization: I selected the "Change language" button at the login screen and selected "German", so de_DE.UTF-8 got set. After logging in as the user "fedora", the problems started. Although the error messages were displayed in German (good!), the result was a bit disappointing. I will translate the messages for you. #1: "The language default.desktop does not exist. System default will be used." #2: "Could not find 'exec' command in the session script 'fedora', so the secured GNOME session will be tried for you." #3: "This is the secured GNOME session. You will be logged into the GNOME default session, no start scripts will be run. This is for troubleshooting only." (abgesichert: secured / saved / failsafe). After that, the whole GNOME desktop was in English. Although I could do System / Administration / Language: German (OK), there was no change at all. Using System / Administration / Keyboard, the German layout could be selected without problems. But problems started inside the Terminal when umlauts (ö, ä, ü) and eszett (ß) were entered; strange repetition effects appeared.
Hardware detection was good, e.g. bttv0 and agpgart were working. Even the PD drive was recognized, as well as the UFS volumes on the hard disks. Sadly, they could not be accessed. If I put a jaz in the Iomega jaz drive (this device is used for entertainment only), a new icon appears in "Computer", titled "jaz%20Drive.drive", but the media cannot be accessed. The error message did not help very much; especially the advice "try dmesg | tail or so" sounds a bit impertinent. In "Computer" there was a folder "Network", containing another folder "Windows Network". Huh? "Windows"? This folder was empty. Very strange… it seems to be a placeholder only.
The middle mouse key did not work as intended (middle key = scroll, when mouse moved forward / backward, and paste selected text if clicked), it seemed to do the same as the left key. This is what I’ve used:
% dmesg | grep “^u[km]”
Printing was fun. The HP Laserjet 4000 has been detected correctly and the printer driver worked without any problems, after the A4 standard paper format and the duplex unit availability have been selected. And hey, Abiword is a cool application. 🙂
Enabling desktop effects did not work, maybe my old ATI Radeon 9000 RV250 with 128 MB is not supported. Rotating the screen 90° did render the screen unreadable.
And finally, after export PS1=”/u@/h:/w/$ ” (used / instead of backslash because it disappears from the post), the Terminal was quite usable. 🙂
So, to sum up my very first impressions: Fedora looks good by default, brings a lot of functionality on the live system CD, but I think it needs to be installed to harddisk in order to get some things working. If I find a spare disk, I will try for sure.
2007-11-10 7:14 amgilboa
… Due to obvious size limitations, the Live CD has a lot of language packages missing.
While the LiveCD should have logged in cleanly when set to a non-en_US language (or disabled the language selection altogether), I'd venture a guess that the actual installed system will work just fine.
Nevertheless, can you please take the time to file a bug report about the reported problems? [1]
– Gilboa
[1]
2007-11-09 3:19 pmRahul
1) There are good performance improvements. You might just want to download and try it out for yourself, though.
2) When was that? "RPM based" is a very broad sweep. Yum works well in Fedora. If not, there are always alternatives like apt-rpm (synaptic) and smart. Feel free to post your impressions.
2007-11-10 12:45 amHeLfReZ
One thing I found is that depending on the distro/version you use… sometimes CPU scaling is enabled by default, sometimes not. Debian/Ubuntu usually run about the same for me if the scaling is turned off. chmod +s on the selector allows you to set the speed on the fly from the GUI.
Same goes for distros like OpenSUSE and Fedora: check the scaling. A lot of the fast-vs-slow arguments come from people not knowing what the defaults are. Check it out, you might be surprised.
-
2007-11-09 5:47 pmshotsman
I installed it earlier today in a VMWare Image.
No problems.
Ok, I didn’t expect any.
But there are often unexpected surprises in any new release…
I have just finished upgrading a PPC Mac Mini. Again everything worked OOTB.
In answer to another post about broadcom support, the output from 'dmesg' shows a url where you can get the microcode for 43xx based WIFI Adapters. A definite improvement here. This was on a Dell Inspiron 8600 laptop.
2007-11-10 7:24 amgilboa
You’ll need to latest any-any patch [1] to build the VMWare Server kernel module under latest-F7 and F8.
However, this is true for every recent distro. (that uses kernel >= 2.6.23 *)
– Gilboa
[1]…
* -current/-testing distros are already beginning to get 2.6.24rc kernels.
2007-11-10 8:37 amnetpython
However, this is true for every recent distro. (that uses kernel >= 2.6.23 *)
The latest vmware-workstation 6 compiles without the any-whatever patch on a 2.6.23.1 OpenSuSE factory kernel.
I haven’t tried to install vmware server but i don’t see that much difference. Perhaps SuSE allready has patched the kernel.
-
2007-11-10 10:28 amMoocha
Workstation 6 works just fine, you just have to (manually) apply this patch to /lib/modules/$(uname -r)/build/include/asm/page.h since Fedora’s gcc will otherwise balk when building the modules:
--- page.h.orig 2007-09-13 14:36:24.000000000 +0300
+++ page.h 2007-09-13 14:36:24.000000000 +0300
@@ -109,7 +109,9 @@
 static inline pte_t native_make_pte(unsigned long val)
 {
-	return (pte_t) { .pte_low = val };
+	pte_t pte;
+	pte.pte_low = val;
+	return pte;
 }
 #define HPAGE_SHIFT 22
I’m currently in love with PulseAudio. I couldn’t get it to work in Fedora 7 on my sound card (Intel DG965WH integrated STAC 9221 audio) despite investing a lot of time and effort. In F8 it worked out of the box. After downloading and compiling the libraries from its src subdirectory (the precompiled ones didn’t work) and running the supplied vmwareesd script instead of the default vmware wrapper and I can *finally* stop worrying about sound (VMware is still stuck in single-open antique OSS-land via /dev/dsp
2007-11-14 10:02 amnetpython
Without applying the patch the modules were compiled without any issue on FC8 (kernel 2.6.23.1). Thus vmware-workstation 6.0.2 build-59824 installed sans problems.
This was also the case with OpenSuSE 10.3
I doubt whether the any-any patch has to be applied with the newer kernels.
Edited 2007-11-14 10:03
Being excited about the feature set, I decided to give Fedora 8 a try.
First, I tried the live CD in a VirtualBox VM. This ran fine, albeit very slow. I did the hard drive install (to the VirtualBox virtual hd), which went OK, but slow. Finally, the installed to hd VM went really really slow.
Then, I burned the iso to CD, and tried to run it as a regular Live CD (natively). The X server failed to start.
Now, this laptop I’m doing it on is nothing fancy. It’s a Via chip (with integrated via video card on the motherboard), with a gig of RAM.
All other distros that I’ve run on this machine (tons of ’em) have detected video and properly configured X and started the X server. But Fedora 8 couldn’t, nor could Fedora 7.
Fedora couldn’t even get standard VESA running.
That’s rather shocking. Getting standard video configured and up and running is a Linux problem that has been solved for years now. Only heavily manual/command line oriented distros like Slackware or Arch make it a little more difficult (but not much). All other desktop and server oriented distros can, at the very least, get the standard VESA driver running on pretty much any hardware/video card.
And all the auto-configure scripts out there are open source, as well as the kernel, kernel modules, and kernel patches. Just use the Knoppix script, if not directly, as a reference, for goodness sake.
How a huge distro like Fedora, backed by a major corporation like Red Hat, can fail to get X auto-running on extremely ordinary hardware, while all other distros have succeeded on this same hardware, is beyond me.
Oh well. Perhaps there is a cheat code that will make X run properly. I'll look around for it. But if anyone knows of a cheat code, I'll be appreciative of being alerted to it. 🙂
-
2007-11-09 8:12 pmJeffS
“I am confused here. You expected this setup to run Fedora speedily, yes?”
No. I expect a little bit of sluggishness while running an OS in a VM, especially on my hardware.
That said, I’ve run several Linux live iso’s in VirtualBox, as well as Windows 2000, and all ran acceptably fine. Sure, there was latency due to a live iso running in the VM, but the speed was enough to make it usable.
Fedora, on the other hand, was really really sluggish in this set up, to point of not really being usable (like running Vista with only 512meg RAM).
Plus, I did the hd install (into the VirtualBox virtual hd). What was odd was that it ran even more sluggishly this way. I had expected the opposite.
But the main thing is that trying to run the live CD directly (not in a VM), it failed to start X.
To me, that’s what was really odd. In the past, I always had good luck with Fedora, CentOS, and Red Hat, with running it on pretty much all hd without problems. But now, it can’t start X, on my extremely ordinary hardware, while all other distros could run X on this same hardware with zero problems.
-
The thing that keeps me from looking at fedora again (I was once a dedicated user) is the fact that other distros have said they will ALWAYS be free. The server versions of the software will be the same as the desktop versions. Never will they ask for money for the linux software itself. It's ingrained into the distro and every release that follows. In my books, that's huge.
I know that if I support (for example haha) Ubuntu, they will support me. RedHat doesn’t have the same lofty mission statement.
I’ve found in the past that Fedora was WAY too buggy on laptops and I didn’t really know/understand the direction. Plus, YUM really sucked!
So for now, for the foreseeable future, I’m with Ubuntu.
2007-11-10 1:38 amRahul
The thing that keeps me from looking at fedora again (I was once a dedicated user) is the fact that other distros have said they will ALWAYS be free
So has Fedora. Refer
Fedora believes in the statement “once free, always free” .
RHEL is not Fedora. It is a derivative distribution like OLPC and others are.
Edited 2007-11-10 01:39
Plus, YUM really sucked!
You should know that yum sucks much less than it used to. It is faster and the presto delta rpms feature makes updates very fast. Delta rpms are not standard yet in Fedora but will be in Fedora 9. Fedora 8 is a solid release. In my experience it is more stable than version 7. I absolutely love the new pulseaudio sound server. It rocks!
2007-11-10 9:09 amSoulbender
You should know that yum sucks much less than it used to.
Unfortunately, sucking less than before doesn't mean it doesn't still suck. Performance is still horrible (seriously, every time I start Yumex or Pirut it builds package lists and stuff forever. What's up with that?) and the interface is not up there with Synaptic yet.
That said, it does work and I've never had it screw up any dependencies or anything. The big issues for me are the abysmal performance and that the GUI needs some (a lot of) polish.
2007-11-10 1:12 pmbuff
seriously, every time I start Yumex or Pirut it builds package lists and stuff forever
I can honestly say I have been using Fedora since version 3 and I am currently using 8, and I never used those GUI utilities. It is easier to just open a shell and type yum upgrade or yum check-update. Faster too, since the GUI doesn't have to build up. I just ran a yum update on 8 to get some data to post here. It took 25 seconds to check the repos and report back that there were no updates. Not too shabby. Skip the GUI; update Linux via a shell/terminal.
Edited 2007-11-10 13:16
-
I wish I could install the stupid thing. I've tried installing it several times since test 3 and even tried F7 with no luck at all. It keeps giving me a promise controller error; I thought all those issues with promise controllers were resolved like 5 years ago, and my hardware is anything but new. I don't have anything raided, yet the stupid lvm thingy still gives me errors and won't install. I've asked questions on the forum with absolutely no answer, and I've never had this issue with older versions (up until FC4) of fedora when I was a fedora user. Ubuntu, Suse, mandriva DO NOT give me this issue at all. Extremely disappointed at this point with FC.
Reading through these posts on this long thread, and seeing some other reviews, and considering Dvorak's rant (whatever Dvorak's rants are worth, which is very little, but he did detail legitimate problems), and my own experience (Fedora couldn't start X on my ordinary hardware), I'd have to say Fedora has serious, serious quality issues.
Yes, we all know Fedora is Red Hat’s experimental branch – a test bed for latest cutting edge stuff.
But you’d think that they would want to give a better user experience, and just make sure that basic hardware works properly (like it does with, like, 300 other distros).
Red Hat definitely has the resources, and they could simply copy hardware detection scripts (all open source) from other distros that have great hardware detection (like Debian, Ubuntu, Mepis, PCLinuxOS, openSUSE, Kanotix, sidux, knoppix, Mint, and many others).
If those other distros can make the vast majority of hardware “just work” out of the box, why can’t Fedora do so (when it’s backed by huge successful corporation Red Hat)?
Yes, Fedora is free, and experimental. But Red Hat will lose the ever so important mind share if they keep putting out crap with Fedora.
Oh well. RHEL is rock solid, as is the free CentOS.
But Fedora should be avoided, until Red Hat makes it a priority to put a little quality control into it.
2007-11-11 4:23 amFinalzone
It is time to stop making the wrong assumption. Fedora is a community project that includes Red Hat and community developers and users. When you look at the features, you see an active involvement of the community. The Nodoka theme, Fedora Electronic Lab, and the Revisor utility, to name a few, are some of these examples. Fedora is actually the base of several distributions including Red Hat Enterprise Linux, Yellow Dog, OLPC Sugar and Pepper Linux. Rawhide is actually the test bed, no different from Debian unstable.
For hardware detection, that is the classic YMMV (your mileage may vary): it might be a matter of bad luck for some while it works for others. In your case, it might be the former. You can install a utility called Smolt so you can report which device did not work or worked badly.
2007-11-11 5:24 amArkansas_Rebel
What is ‘isn’t very good’???
OK, now this is opinion and speculation here, stating to avoid it?
I have been using Red Hat since 6.0. I'm no expert, but I have NO problems with Fedora and just passed my RHCT on Friday of THIS week!
Fedora has never had any stability problems that I am aware of and I have it custom configured and ALL of my hardware works fine, I have Logical volumes, Raid Arrays, and you name it working without a hitch.
It is not a legitimate claim to say avoid something that you do not understand or work with.
2007-11-11 10:56 pmapoclypse
That may be the case for you, but I can't even install the stupid thing on my five-year-old PC that used to be able to install FC1-4 without any issues. I'm happy you are able to install it without issues. Now I remember why I switched in the first place: Fedora was always way too experimental, things would break from one version to the next. I really do want to try it because it looks good and they are doing interesting things, but the installer won't let me.
First off, if someone is having problems installing: is the media valid, did it burn OK, did the MD5SUM check validate against the key?
I believe people are casting negative light on Fedora Core when they have NEVER used the distro. I went from the Red Hat 9.0 release, wiped it, and loaded FC3, and never had ANY problems or any of the mystical errors, from drives not found to no networking.
I am led to believe these are people making 'false' reviews of Fedora because they dislike Red Hat and NOTHING else. This is wrong; just because you do not like a distro does not give you the right to condemn it and tell others lies about it.
Like I said earlier, I just passed my first RHCT on Friday of this week and have NEVER heard/seen any of the problems being reported on here about any Fedora Core release.
I find it a total disservice to the Open Source movement and Free Software core.
2007-11-11 11:08 pmapoclypse
Media: checked. CD burned 4 times. FC7 was tried and that too was burned more than once and checked. No luck at all. This is not an attack on the distro; this is my experience as a user trying to install fedora. I've even tried the DVD installer, which I was trying to avoid since I hate having a dvd full of crap I can download later.
Your post comes off fanboyish, when all that people are doing (or at least in my case) is trying to use it. There is no casting of negative light; if it looks that way it definitely does not fall on our shoulders, it should fall on RedHat's shoulders, for not caring enough about its users to give them a smooth user experience, instead relegating them to the position of a guinea pig.
If this is what the Fedora community has evolved into then I don’t think I want anything to do with it. I remember it being a lot more friendly.
2007-11-12 3:15 am Arkansas_Rebel
I would be more than happy to help anyone. I am no expert, but I have learned from a Unix guru at work who is on the learning curve of SELinux, and so on, like me.
I am not trying to be a 'fan boy' to any degree. To be honest with you, I tried Ubuntu and found that it was not for me. The main reason behind my acceptance of Fedora dates back to the $60+ boxed set of Red Hat Professional I bought back in, I believe, 1999-2000 or so. I knew nothing about Linux and basically never grasped the underpinnings until I started a new position as a Linux Admin; I was required to take the RHCT and I have to study to take the RHCE in Feb/Mar of 2008.
So basically tinkering with Linux I was able to change IT jobs from a big Corp to a smaller company where you can learn and grow.
I am a member of Linuxquestions, Linuxforums, and Fedora Forums; in fact I will be signing up to become an ambassador for Fedora, in Arkansas no less…..
Now I can relax a bit and be able to contribute my knowledge and be able to learn from others too. Maybe this can change your viewpoint.
2007-11-12 3:57 pmapoclypse
Great. Do you know why I keep getting a /dev/mapper/pdc_heiegcdb error every time I try to install FC7-8 from both the live and DVD installer on my machine? I know it's an issue with the promise controller (on my five-year-old machine) and LVM, but I've never encountered this issue in other distros and now I'm not sure how to proceed. It keeps telling me the file can't be found, but when I go to /dev/mapper the pdc_heiegcdb file is there. I've tried looking for the answer on both the fedora forums as well as google and the error message doesn't even get a hit. I found one lead on the fedora forums and that was a dead end since the guy never got a response; neither did my own inquiries. So if you can help that would be great.
I'm no linux newb; I have been using linux since debian potato, and then switched to Redhat 9 and then FC1-4, then switched to Ubuntu at some point down the line (first version, Warty). This isn't the first time I've had issues with the promise controller, but that was like five years ago when I first built the machine, and it was with Mandrake (yes, Mandrake) 10. FC1-4 didn't have this issue. From what I gathered on google it might have something to do with LVM and libata, but I'm not sure.
2007-11-13 12:58 amArkansas_Rebel
/dev/mapper/pdc_heiegcdb error
Can you install other ISO distros? I assume you have a DVD drive, correct? Will it play other media formats?
Sounds to me like a hardware issue. LVM is the logical volume manager; you can perform a custom install and partition out your drive without LVM. Also, what kind of hardware is it? I was having all sorts of problems with the file system and the hard drive was defective.
-
-
-
2007-11-13 8:13 pm
A couple of users have noted problems getting X up and running, and one person couldn't get their network card recognized.
Linux is not Windows. I always check before purchasing new hardware to ensure it will work with the kernel I am using and the distro. It takes extra effort but there is the reality that drivers are more challenging to find for Linux.
Fedora 8 worked right out of the box for an older Athlon Thunderbird system and a new Intel dual core chip. The only issue I had was the resolution of the LCD monitor had to be set by hand. It was not listed in xorg.conf and the Gnome utilities wouldn't recognize the correct resolution. I wish minor things like this didn't exist on Linux but they do. Less common hardware profiles don't get recognized properly. I had the same problem with Ubuntu and this LCD monitor. I don't mind doing a little extra technical work to get everything free. It is frustrating sometimes, yes. Fedora lets me sample the cutting edge and I expect some tweaking. If I was looking for more stability I would probably use a commercial version of Linux, but then I wouldn't be able to mess around with pulseaudio and other cool features.
Edited 2007-11-11 15:19
I have a Toshiba laptop with a Celeron processor.
Not a fancy laptop but it’s quite functional.
Anyway I downloaded the LiveCD and then installed it to my hard drive. The install went smoothly.
The only problem is that now and again my usb mouse and keyboard keep locking up.
I don’t know if it’s something peculiar to my hardware.
A sincere good luck to everyone.
One thing I did do was report bugs on ‘bugzilla’ and I got prompt responses and the devs wanted more info and the problems were resolved very quickly.
I am not a programmer, but sometimes the devs don't know certain problems exist, and they can fix things or maybe escalate it to a higher level.
Just a thought: an example was Evolution; the latest version was crashing for no reason, I posted it, and within a week it was patched. I find that amazing!
F8’s features far exceed any other distro out there, pulseaudio for one is unique, more details below
check it out >
cheers
anyweb
Summary
Enables COGO on a line feature class and adds COGO fields and COGO-enabled labeling to a line feature class. COGO fields store dimensions that are used to create line features in relation to each other.
Usage
The tool adds the following COGO fields to the selected line feature class: Arc Length, Direction, Distance, Radius, and Radius2. All fields are of type double.
The tool adds COGO-related labeling and symbology to the selected line feature class. Lines are drawn with added COGO symbology, and a label expression labels each line with its COGO dimensions if they exist.
- Run the Disable COGO tool to disable COGO on the line feature class. The COGO fields can be deleted.
If one or more of the COGO fields already exist and are of the correct type, only the remaining, missing COGO fields are added.
If a line feature class is COGO enabled, editing tools such as the Traverse tool populate the COGO fields with the dimensions provided.
The Direction field stores the direction (bearing) of the line from its start point to its endpoint. The direction value is stored in the database as north azimuth (decimal degrees). You can display the direction in other units by setting display units for your project.
The Distance field stores the distance (length) of the line. The distance is stored in the database in the linear unit of the projection. You can display the distance in other units by setting display units for your project.
The ArcLength field stores the arc distance between the start point and endpoint of a curved line. The arc length distance is stored in the database in the linear unit of the projection. You can display the arc length distance in other units by setting display units for your project.
The Radius field stores the distance between the curve center point and the curve line. The radius distance is stored in the database in the linear unit of the projection. You can display the radius distance in other units by setting display units for your project.
The Radius2 field stores the second radius for a spiral curve. This radius can be set to infinity.
Parameters
arcpy.management.EnableCOGO(in_line_features)
Derived Output
Code sample
The following Python window script demonstrates how to use the EnableCOGO function in immediate mode.
import arcpy
arcpy.env.workspace = "E:\ArcGISXI\Mont\Montgomery.gdb"
arcpy.EnableCOGO_management("\Landbase\Road_cl")
The following stand-alone script demonstrates how to check for and enable COGO on a line feature class.
import arcpy

# Variable to contain the path of the feature class that is to be COGO enabled
lineFeatureClass = r"d:\test.gdb\myLineFC"

# Check to see if the feature class is already enabled by using .isCOGOEnabled on a Describe
if arcpy.Describe(lineFeatureClass).isCOGOEnabled == False:
    # If it returns False, run EnableCOGO_management and pass the feature class
    arcpy.EnableCOGO_management(lineFeatureClass)
else:
    print("{} is already COGO Enabled".format(lineFeatureClass))
Environments
Licensing information
- Basic: No
- Standard: Yes
- Advanced: Yes | https://pro.arcgis.com/en/pro-app/latest/tool-reference/data-management/enable-cogo.htm | CC-MAIN-2022-33 | refinedweb | 495 | 57.06 |
The Dequeue() method is used to returns the object at the beginning of the Queue. This method is similar to the Peek() Method. The only difference between Dequeue and Peek method is that Peek() method will not modify the Queue but Dequeue will modify. This method is an O(1) operation and comes under
System.Collections.Generic namespace.
Syntax:
public T Dequeue ();
Return value: It returns the object which is removed from the beginning of the Queue.
Exception: The method throws InvalidOperationException on calling empty queue, therefore always check that the total count of a queue is greater than zero before calling the Dequeue() method.
Below programs illustrate the use of the above-discussed method:
Number of elements in the Queue: 4 Top element of queue is: 3 Number of elements in the Queue: 3
Example 2:
Number of elements in the Queue: 2 Top element of queue is: 2 Number of elements in the Queue: 1
Reference:
- | https://www.geeksforgeeks.org/getting-an-object-at-the-beginning-of-the-queue-in-c-sharp/ | CC-MAIN-2021-10 | refinedweb | 158 | 57.61 |
The API Gateway can request information about an authenticated end-user in the form of user attributes from a SAML PDP (Policy Decision Point) using the SAML Protocol (SAMLP). In such cases, the API Gateway presents evidence to the PDP in the form of some user credentials, such as the Distinguished Name of a client's X.509 certificate.
The PDP looks up its configured user store and retrieves attributes associated with that user. The attributes are inserted into a SAML attribute assertion and returned to the API Gateway in a SAMLP response. The assertion and/or SAMLP response is usually signed by the PDP.
When the API Gateway receives the SAMLP response, it performs a number of checks on the response, such as validating the PDP signature and certificate, and examining the assertion. It can also insert the SAML attribute assertion into the original:
These details.
Subject Confirmation:
The settings on the Confirmation Method tab determine
how the
<SubjectConfirmation> block of the
SAML assertion is generated. When the assertion is consumed by a
downstream Web Service, the information contained in the
<SubjectConfirmation> block can be used
to authenticate either.
Attributes:
You can list a number of user attributes to include in the SAML
attribute assertion that is generated by the API Gateway. If no attributes
are explicitly listed in this section, the API Gateway inserts all attributes
associated with the user (all user attributes in the
attribute.lookup.list
message attribute) in the assertion.
To add a specific attribute to the SAML attribute assertion, click the Add button. A user attribute can be configured using the Attribute Lookup dialog.
Enter the name of the attribute that is added to the assertion in the Attribute Name field. Enter the namespace that is associated with this attribute in the Namespace field.
You can edit and remove previously configured attributes using the Edit and Remove buttons.
The fields on this tab relate to the SAMLP Response returned from the SAML PDP. The following fields are available:
SOAP Actor/Role:
If the SAMLP response from the PDP contains a SAML attribute assertion, the API Gateway can extract it from the response and insert it into the downstream message. The SAML assertion is inserted into the WS-Security block identified by the specified SOAP actor/role.
Drift Time:
The SAMLP request to the PDP is time stamped by the API Gateway. To account for differences in the times on the machines running the API Gateway and the SAML PDP the specified time is subtracted from the time at which the API Gateway generates the SAMLP request. | https://docs.oracle.com/cd/E39820_01/doc.11121/gateway_docs/content/connector_saml_pdp_attrs.html | CC-MAIN-2020-50 | refinedweb | 432 | 51.99 |
06 August 2010 23:59 [Source: ICIS news]
LONDON (ICIS)--The European August phenol contract price has decreased by €12/tonne ($16/tonne) from July as a result of a fall in the price of the feedstock benzene, phenol producers and consumers confirmed on Friday.
The August phenol pre-discounted contract price settled at €1,162-1,202/tonne FD (free delivered) NWE (northwest ?xml:namespace>
The August benzene contract was agreed at €668/tonne FOB (free on board) NWE (northwest
Despite the tight supply situation, the phenol contract price has moved down because it is fully linked to a benzene price formula.
“I can confirm that phenol went down €12/tonne with benzene. Phenol demand is very good and very strong. We are sold-out and are unable to take any spot orders,” said a major producer.
Referring to the phenol contract price, a second European producer said: “For phenol, it’s the same story: contract down with benzene. There is still very good demand and the market remains challenging.”
A major phenol buyer confirmed that its August contract price moved down €12/tonne and that its demand for downstream bisphenol A (BPA) was healthy.
“The €12/tonne drop in benzene will definitely be passed on. BPA demand is still strong and phenol is tight, but I am feeling that maybe availability is a little easier,” said the buyer.
A distributor of phenol said that demand remained at a high level not only in Europe, but also in
“There is still good demand and producers are shipping every available molecule to
The main chemical intermediates and derivatives of phenol are BPA, which is used to make polycarbonate (PC), and epoxy resins, phenolic resins, caprolactam, alkylphenols, aniline and adipic acid.
($1 = €0.76)
For more on phenol and benz | http://www.icis.com/Articles/2010/08/06/9382751/europe-august-phenol-falls-12tonne-on-benzene-decrease.html | CC-MAIN-2015-06 | refinedweb | 299 | 57.91 |
I'm having a issue with a problem where I want to replace a special character in a text-file that I have created with a string that I defined.
"""
This is a guess the number game.
"""
import datetime, random
from random import randint
fileOpen = open('text.txt','r')
savedData = fileOpen.read()
fileOpen.close()
#print (savedData)
now = datetime.datetime.now()
date = now.strftime("%Y-%m-%d")
date = str(date)
#print(date)
time = now.strftime("%H:%M")
time = str(time)
#print(time)
savedData.replace(";", date)
print (savedData)
replace
saveData
Today it's the ; and the time is (. I'm feeling ) and : is todays lucky number. The unlucky number of today is #
str.replace doesn't alter the string in place, no string operations do because strings are immutable. It returns a copy of the replaced string which you need to assign to another name:
replacedData = savedData.replace(";", date)
now your replaced string will be saved to the new name you specified in the assignment. | https://codedump.io/share/eQtQmHwoZrZR/1/replace-special-character-in-string | CC-MAIN-2018-26 | refinedweb | 163 | 69.99 |
I'm not sure what the purpose of this is, but the Management Console causes the browser to refresh periodically. If you're in a form or an editor such as the DTL or Routing Rule editors, you may lose work unless you save frequently. This did not occur in Caché 2018 and earlier releases.
I've had a couple of incidents where I've created a number of rules in the DTL editor, answered the phone or stepped away for a few minutes, then come back to find any work since the last save erased.
I've noticed this in both 2019 and 2020 releases of IRIS.
Heads up!
You can stop this behavior with the following commands but be aware that this may cause screens such as the production configuration and the production monitor to not automatically refresh when connections are lost, etc. Therefore it is not recommended to do this on a production instance.
The issue was initially seen with multiple tabs/windows on the same user, but has now also been seen with multiple users in the management portal into the same namespace.
For a workaround you'll need to set the following global. In all namespaces that you are developing within.
SET ^%SYS("Portal","EnableAutoRefresh")=0
We have seen some instances where that doesn't completely work. If there are still issues you can disable the inactivity timeout. (I suspect that a timeout on one portal screen is causing all screens to refresh)
SET ^EnsPortal(“DisableInactivityTimeout”,”Portal”)=1
I'll assume that I'm not the first person to complain about this
Thanks!
^%SYS is mapped from IRISSYS, so it should only need to be modified int he %SYS namespace, right?
Does IRIS need to be restarted after this change? It doesn't seem to be having an effect. I'll be trying the ^EnsPortal change shortly and will report back.
Or ... just save before answering that phone call ;) | https://community.intersystems.com/post/iris-periodic-forced-refresh-management-console | CC-MAIN-2020-10 | refinedweb | 324 | 62.68 |
This page outlines the naming guidelines you should follow when creating buckets in Cloud Storage. To learn how to create a bucket, see the Creating storage buckets guide.
Bucket name requirements
Your bucket names must meet the following".
Bucket name considerations
Bucket names reside in a single namespace that is shared by all Cloud Storage users.
This means that:
Every bucket name must be globally unique.
If you try to create a bucket with a name that already belongs to an existing bucket, such as
example-bucket, Cloud Storage responds with an error message.
Bucket names are publicly visible.
Don't use user IDs, email addresses, project names, project numbers, or any personally identifiable information (PII) in bucket names because anyone can probe for the existence of a bucket., keep in mind the following:
If the new bucket is created in a different location and within 10 minutes of the old bucket's deletion, requests made to the new bucket during this 10 minute time frame might fail with a
404-Bucket Not Founderror.
If your requests go through the XML API, attempts to create a bucket that reuses a name in a new location might fail with a
404-Bucket Not Founderror for up to 10 minutes after the old bucket's deletion.) | https://cloud.google.com/storage/docs/naming-buckets?hl=ru | CC-MAIN-2021-49 | refinedweb | 214 | 58.82 |
In the last article of the Time Series Analysis series we discussed the importance of serial correlation and why it is extremely useful in the context of quantitative trading.
In this article we will make full use of serial correlation by discussing our first time series models, including some elementary linear stochastic models. In particular we are going to discuss White Noise and Random Walks.
Recapping Our Goal
Before we dive into definitions I want to recap our reasons for studying these models as well as our end goal in learning time series analysis.
Fundamentally we are interested in improving the profitability of our trading algorithms. As quants, we do not rely on "guesswork" or "hunches".
Our approach is to quantify as much as possible, both to remove any emotional involvement from the trading process and to ensure (to the extent possible) repeatability of our trading.
In order to improve the profitability of our trading models, we must make use of statistical techniques to identify consistent behaviour in assets which can be exploited to turn a profit. To find this behaviour we must explore how the properties of the asset prices themselves change in time.
Time Series Analysis helps us to achieve this. It provides us with a robust statistical framework for assessing the behaviour of time series, such as asset prices, in order to help us trade off of this behaviour.
Time Series Analysis provides us with a robust statistical framework for assessing the behaviour of asset prices.
So far we have discussed serial correlation and examined the basic correlation structure of simulated data. In addition we have defined stationarity and considered the second order properties of time series. All of these attributes will aid us in identifying patterns among time series. If you haven't read the previous article on serial correlation, I strongly suggest you do so before continuing with this article.
In the following we are going to examine how we can exploit some of the structure in asset prices that we've identified using time series models.
Time Series Modeling Process
So what is a time series model? Essentially, it is a mathematical model that attempts to "explain" the serial correlation present in a time series.
When we say "explain" what we really mean is once we have "fitted" a model to a time series it should account for some or all of the serial correlation present in the correlogram. That is, by fitting the model to a historical time series, we are reducing the serial correlation and thus "explaining it away".
Our process, as quantitative researchers, is to consider a wide variety of models including their assumptions and their complexity, and then choose a model such that it is the "simplest" that will explain the serial correlation.
Once we have such a model we can use it to predict future values or future behaviour in general. This prediction is obviously extremely useful in quantitative trading.
If we can predict the direction of an asset movement then we have the basis of a trading strategy (allowing for transaction costs, of course!). Also, if we can predict volatility of an asset then we have the basis of another trading strategy or a risk-management approach. This is why we are interested in second order properties, since they give us the means to help us make forecasts.
One question that arises here is "How do we know when we have a good fit for a model?". What criteria do we use to judge which model is best? In fact, there are several! We will be considering these criteria in this article series.
Let's summarise the general process we will be following throughout the series:
- Outline a hypotheis about a particular time series and its behaviour
- Obtain the correlogram of the time series (perhaps using R or Python libraries) and assess its serial correlation
- Use our knowledge of time series models and fit an appropriate model to reduce the serial correlation in the residuals (see below for a definition) of the model and its time series
- Refine the fit until no correlation is present and use mathematical criteria to assess the model fit
- Use the model and its second-order properties to make forecasts about future values
- Assess the accuracy of these forecasts using statistical techniques (such as confusion matrices, ROC curves for classification or regressive metrics such as MSE, MAPE etc)
- Iterate through this process until the accuracy is optimal and then utilise such forecasts to create trading strategies
That is our basic process. The complexity will arise when we consider more advanced models that account for additional serial correlation in our time series.
In this article we are going to consider two of the most basic time series models, namely White Noise and Random Walks. These models will form the basis of more advanced models later so it is essential we understand them well.
However, before we introduce either of these models, we are going to discuss some more abstract concepts that will help us unify our approach to time series models. In particular, we are going to define the Backward Shift Operator and the Difference Operator.
Backward Shift and Difference Operators
The Backward Shift Operator (BSO) and the Difference Operator will allow us to write many different time series models in a particular way that helps us understand how they differ from each other.
Since we will be using the notation of each so frequently, it makes sense to define them now.
Backward Shift Operator
The backward shift operator or lag operator, ${\bf B}$, takes a time series element as an argument and returns the element one time unit previously: ${\bf B} x_t = x_{t-1}$.
Repeated application of the operator allows us to step back $n$ times: ${\bf B}^n x_t = x_{t-n}$.
We will use the BSO to define many of our time series models going forward.
In addition, when we come to study time series models that are non-stationary (that is, their mean and variance can alter with time), we can use a differencing procedure in order to take a non-stationary series and produce a stationary series from it.
Difference Operator
The difference operator, $\nabla$, takes a time series element as an argument and returns the difference between the element and that of one time unit previously: $\nabla x_t = x_t - x_{t-1}$, or $\nabla x_t = (1-{\bf B}) x_t$.
As with the BSO, we can repeatedly apply the difference operator: $\nabla^n = (1-{\bf B})^n$.
Now that we've discussed these abstract operators, let us consider some concrete time series models.
White Noise
Let's begin by trying to motivate the concept of White Noise.
Above, we mentioned that our basic approach was to try fitting models to a time series until the remaining series lacks any serial correlation. This motivates the definition of the residual error series:
Residual Error Series
The residual error series or residuals, $x_t$, is a time series of the difference between an observed value and a predicted value, from a time series model, at a particular time $t$.
If $y_t$ is the observed value and $\hat{y}_t$ is the predicted value, we say: $x_t = y_t - \hat{y}_t$ are the residuals.
The key point is that if our chosen time series model is able to "explain" the serial correlation in the observations, then the residuals themselves are serially uncorrelated.
This means that each element of the serially uncorrelated residual series is an independent realisation from some probability distribution. That is, the residuals themselves are independent and identically distributed (i.i.d.).
Hence, if we are to begin creating time series models that explain away any serial correlation, it seems natural to begin with a process that produces independent random variables from some distribution. This directly leads on to the concept of (discrete) white noise:
Discrete White Noise
Consider a time series $\{w_t: t=1,...n\}$. If the elements of the series, $w_i$, are independent and identically distributed (i.i.d.), with a mean of zero, variance $\sigma^2$ and no serial correlation (i.e. $\text{Cor}(w_i, w_j) \neq 0, \forall i \neq j$) then we say that the time series is discrete white noise (DWN).
In particular, if the values $w_i$ are drawn from a standard normal distribution (i.e. $w_t \sim N(0,\sigma^2)$), then the series is known as Gaussian White Noise.
White Noise is useful in many contexts. In particular, it can be used to simulate a "synthetic" series.
As we've mentioned before, a historical time series is only one observed instance. If we can simulate multiple realisations then we can create "many histories" and thus generate statistics for some of the parameters of particular models. This will help us refine our models and thus increase accuracy in our forecasting.
Now that we've defined Discrete White Noise, we are going to examine some of the attributes of it, including its second order properties and its correlogram.
Second-Order Properties
The second-order properties of DWN are straightforward and follow easily from the actual definition. In particular, the mean of the series is zero and there is no autocorrelation by definition:\begin{eqnarray} \mu_w = E(w_t) = 0 \end{eqnarray} $$\rho_k = \text{Cor}(w_t, w_{t+k}) = \left\{\begin{aligned} &1 && \text{if} \enspace k = 0 \\ &0 && \text{if} \enspace k \neq 0 \end{aligned} \right.$$
Correlogram
We can also plot the correlogram of a DWN using R. Firstly we'll set the random seed to be 1, so that your random draws will be identical to mine. Then we will sample 1000 elements from a normal distribution and plot the autocorrelation:
> set.seed(1) > acf(rnorm(1000))
Correlogram of Discrete White Noise
Notice that at $k=6$, $k=15$ and $k=18$, we have three peaks that differ from zero at the 5% level. However, this is to be expected simply due to the variation in sampling from the normal distribution.
Once again, we must be extremely careful in our interpretation of results. In this instance, do we really expect anything physically meaningful to be happening at $k=6$, $k=15$ or $k=18$?
Notice that the DWN model only has a single parameter, namely the variance $\sigma^2$. Thankfully, it is straightforward to estimate the variance with R, we can simply use the
var function:
> set.seed(1) > var(rnorm(1000, mean=0, sd=1))
1.071051
We've specifically highlighted that the normal distribution above has a mean of zero and a standard deviation of 1 (and thus a variance of 1). R calculates the sample variance as 1.071051, which is close to the population value of 1.
The key takeaway with Discrete White Noise is that we use it as a model for the residuals. We are looking to fit other time series models to our observed series, at which point we use DWN as a confirmation that we have eliminated any remaining serial correlation from the residuals and thus have a good model fit.
Now that we have examined DWN we are going to move on to a famous model for (some) financial time series, namely the Random Walk.
Random Walk
A random walk is another time series model where the current observation is equal to the previous observation with a random step up or down. It is formally defined below:
Random Walk
A random walk is a time series model ${x_t}$ such that $x_t = x_{t-1} + w_t$, where $w_t$ is a discrete white noise series.
Recall above that we defined the backward shift operator ${\bf B}$. We can apply the BSO to the random walk:\begin{eqnarray} x_t = {\bf B} x_t + w_t = x_{t-1} + w_t \end{eqnarray}
And stepping back further:\begin{eqnarray} x_{t-1} = {\bf B} x_{t-1} + w_{t-1} = x_{t-2} + w_{t-1} \end{eqnarray}
If we repeat this process until the end of the time series we get:\begin{eqnarray} x_t = (1 + {\bf B} + {\bf B}^2 + \ldots) w_t \implies x_t = w_t + w_{t-1} + w_{t-2} + \ldots \end{eqnarray}
Hence it is clear to see how the random walk is simply the sum of the elements from a discrete white noise series.
Second-Order Properties
The second-order properties of a random walk are a little more interesting than that of discrete white noise. While the mean of a random walk is still zero, the covariance is actually time-dependent. Hence a random walk is non-stationary:\begin{eqnarray} \mu_x &=& 0 \\ \gamma_k (t) &=& \text{Cov}(x_t, x_{t+k}) = t \sigma^2 \end{eqnarray}
In particular, the covariance is equal to the variance multiplied by the time. Hence, as time increases, so does the variance.
What does this mean for random walks? Put simply, it means there is very little point in extrapolating "trends" in them over the long term, as they are literally random walks.
Correlogram
The autocorrelation of a random walk (which is also time-dependent) can be derived as follows:\begin{eqnarray} \rho_k (t) = \frac{\text{Cov}(x_t, x_{t+k})} {\sqrt{\text{Var}(x_t) \text{Var}(x_{t+k})}} = \frac{t \sigma^2}{\sqrt{t \sigma^2 (t+k) \sigma^2}} = \frac{1}{\sqrt{1+k/t}} \end{eqnarray}
Notice that this implies if we are considering a long time series, with short term lags, then we get an autocorrelation that is almost unity. That is, we have extremely high autocorrelation that does not decrease very rapidly as the lag increases. We can simulate such a series using R.
Firstly, we set the seed so that you can replicate my results exactly. Then we create two sequences of random draws ($x$ and $w$), each of which has the same value (as defined by the seed).
We then loop through every element of $x$ and assign it the value of the previous value of $x$ plus the current value of $w$. This gives us the random walk. We then plot the results using
type="l" to give us a line plot, rather than a plot of circular points:
> set.seed(4) > x <- w <- rnorm(1000) > for (t in 2:1000) x[t] <- x[t-1] + w[t] > plot(x, type="l")
Realisation of a Random Walk with 1000 timesteps
It is simple enough to draw the correlogram too:
> acf(x)
Correlogram of a Random Walk
Fitting Random Walk Models to Financial Data
We mentioned above and in the previous article that we would try and fit models to data which we have already simulated.
Clearly this is somewhat contrived, as we've simulated the random walk in the first place! However, we're trying to demonstrate the fitting process. In real situations we won't know the underlying generating model for our data, we will only be able to fit models and then assess the correlogram.
We stated that this process was useful because it helps us check that we've correctly implemented the model by trying to ensure that parameter estimates are close to those used in the simulations.
Fitting to Simulated Data
Since we are going to be spending a lot of time fitting models to financial time series, we should get some practice on simulated data first, such that we're well-versed in the process once we start using real data.
We have already simulated a random walk so we may as well use that realisation to see if our proposed model (of a random walk) is accurate.
How can we tell if our proposed random walk model is a good fit for our simulated data? Well, we make use of the definition of a random walk, which is simply that the difference between two neighbouring values is equal to a realisation from a discrete white noise process.
Hence, if we create a series of the differences of elements from our simulated series, we should have a series that resembles discrete white noise!
In R this can be accomplished very straightforwardly using the
diff function. Once we have created the difference series, we wish to plot the correlogram and then assess how close this is to discrete white noise:
> acf(diff(x))
Correlogram of the Difference Series from a Simulated Random Walk
What can we notice from this plot? There is a statistically significant peak at $k=10$, but only marginally. Remember, that we expect to see at least 5% of the peaks be statistically significant, simply due to sampling variation.
Hence we can reasonably state that the the correlogram looks like that of discrete white noise. It implies that the random walk model is a good fit for our simulated data. This is exactly what we should expect, since we simulated a random walk in the first place!
Fitting to Financial Data
Let's now apply our random walk model to some actual financial data. As with the Python library, pandas, we can use the R package quantmod to easily extract financial data from Yahoo Finance.
We are going to see if a random walk model is a good fit for some equities data. In particular, I am going to choose Microsoft (MSFT), but you can experiment with your favourite ticker symbol!
Before we're able to download any of the data, we must install quantmod as it isn't part of the default R installation. Run the following command and select the R package mirror server that is closest to your location:
> install.packages('quantmod')
Once quantmod is installed we can use it to obtain the historical price of MSFT stock:
> require('quantmod') > getSymbols('MSFT', src='yahoo') > MSFT
.. .. 2015-07-15 45.68 45.89 45.43 45.76 26482000 45.76000 2015-07-16 46.01 46.69 45.97 46.66 25894400 46.66000 2015-07-17 46.55 46.78 46.26 46.62 29262900 46.62000
This will create an object called MSFT (case sensitive!) into the R namespace, which contains the pricing and volume history of MSFT. We're interested in the corporate-action adjusted closing price. We can use the following commands to (respectively) obtain the Open, High, Low, Close, Volume and Adjusted Close prices for the Microsoft stock:
Op(MSFT),
Hi(MSFT),
Lo(MSFT),
Cl(MSFT),
Vo(MSFT),
Ad(MSFT).
Our process will be to take the difference of the Adjusted Close values, omit any missing values, and then run them through the autocorrelation function. When we plot the correlogram we are looking for evidence of discrete white noise, that is, a residuals series that is serially uncorrelated. To carry this out in R, we run the following command:
> acf(diff(Ad(MSFT)), na.action = na.omit)
The latter part (
na.action = na.omit) tells the
acf function to ignore missing values by omitting them. The output of the
acf function is as follows:
Correlogram of the Difference Series from MSFT Adjusted Close
We notice that the majority of the lag peaks do not differ from zero at the 5% level. However there are a few that are marginally above. Given that the lags $k_i$ where peaks exist are someway from $k=0$, we could be inclined to think that these are due to stochastic variation and do not represent any physical serial correlation in the series.
Hence we can conclude, with a reasonable degree of certainty, that the adjusted closing prices of MSFT are well approximated by a random walk.
Let's now try the same approach on the S&P500 itself. The Yahoo Finance symbol for the S&P500 index is ^GSPC. Hence, if we enter the following commands into R, we can plot the correlogram of the difference series of the S&P500:
> getSymbols('^GSPC', src='yahoo') > acf(diff(Ad(GSPC)), na.action = na.omit)
The correlogram is as follows:
Correlogram of the Difference Series from the S&P500 Adjusted Close
The correlogram here is certainly more interesting. Notice that there is a negative correlation at $k=1$. This is unlikely to be due to random sampling variation.
Notice also that there are peaks at $k=10$, $k=15$, $k=16$, $k=18$ and $k=21$. Although it is harder to justify their existence beyond that of random variation, they may be indicative of a longer-lag process.
Hence it is much harder to justify that a random walk is a good model for the S&P500 Adjusted Close data. This motivates more sophisticated models, namely the Autoregressive Models of Order p, which will be the subject of the next article! | https://quantstart.com/articles/White-Noise-and-Random-Walks-in-Time-Series-Analysis/ | CC-MAIN-2022-27 | refinedweb | 3,424 | 50.46 |
10 Interesting Python Modules to Learn in 2016
In this article I will give you an introduction into some Python modules I think of as useful. Naturally this can vary in your case but anyway it is a good idea to look at them, maybe you will use them in the future.
I won't give a thorough introduction for each module because it would blow-up the boundaries of this article. However I will try to give you a fair summary on the module: what id does and how you can use it in the future if you are interested. For a detailed introduction you have to wait for either future articles which explain these libraries in more depth or you have to search the web to find resources. And there are plenty of good resources out there.
In some languages static typing is the only way to go. In Python there is dynamic typing which means once a variable has assigned a value of a given type (int for example) it can later become a value which is from another type (string). However in some applications this leads to errors because code blocks expect values from a given type and you pass in something else instead.
Python 3.5 introduced the standard for type annotations where you can annotate your code to display what type the function parameters should have and what type the function returns for example. However type checking is only an annotation which you see in the code but is removed when this code gets executed.
>>> def add(a: int, b: int) -> int: ... return a+b ...
The example function above adds two variables. The type hint shows that the expected numbers are from type int and the result is an int too.
To demonstrate that the type hints are only for the developers let's call this function:
>>> add(1, 5) 6 >>> add('a', '3') 'a3'
There is no problem if you provide other types to the function as long as Python can use the + operator on both variables.
Now mypy is a library which looks for such type hints and gives you warnings if you try to call a function with the wrong type. Even if your static type checking with mypy fails you can run your application with your interpreter of choice.
Let's convert the example above to a Python module and call it typing_test.py:
__author__ = 'GHajba' def add(a: int, b: int) -> int: return a + b if __name__ == '__main__': print(add(1, 5)) print(add('a', '3'))
As you can see, I call the function with the same parameters as previously and I expect that I get some type warnings:
mypy typing_test.py typing_test.py:6: error: Argument 1 to "add" has incompatible type "str"; expected "int" typing_test.py:6: error: Argument 2 to "add" has incompatible type "str"; expected "int"
And yes, there it is. If I run the script (as mentioned above) it executes:
python3 typing_test.py
6
a3
An interesting (and in my eyes not a very good) result comes when you execute type checking on a file which has no type errors: you get no output. I would expect at least one line telling me that 0 errors were found. To satisfy my needs I have to run mypy with the -v flag for verbose mode however this gives too much information and if you get some errors the screen is full with data you are not interested in. I hope this will change in a future version, I am currently using 0.4.3.
You can find a quick documentation about mypy at (including how to install this library).
Nose () is a testing library. As their homepage states: "nose is nicer testing for python". And their tagline is: "nose extends unit test to make testing easier."
And I know you do not want to write unit tests. You are a Python developer who is a rebel and does not want to compile with Java-like developers who have to write tests to get their code accepted.
Anyway I do not want to go into detail that you must or don't have to write tests (sometimes I even don't test my Java code) but sometimes it is a good approach to write tests -- the best use case for this is when you write a complex block of code. So let's install nose:
pip install nose
For an example let's create a module to test and call it overcomplicated.py:
author__ = 'GHajba' def multiply(a, b): if a == 0: return 0 return a + multiply(a-1, b)
As the name mentions this module contains overcomplicated things and, well, the contents are really over-complicated. Now we want to test this module. For this let's create a simple test file and call it overcomplicated_test.py (and I know there are errors in the assertion but they are there for good):
from overcomplicated import multiply
__author__ = 'GHajba' def test_simple(): assert multiply(3, 4) == 12 def test_with_0(): assert multiply(0, 0) == 1 assert multiply(0, 12) == 0 assert multiply(45, 0) == 0 assert multiply(-3, 0) == 0 def test_negative(): assert multiply(-1, 3) == -3 assert multiply(-9, -2) == 18 assert multiply(42, -3) == -125
Well, the test itself is not as complicated as the module to test but this is good for a simple introduction. We test here some corner cases like multiplying with 0, multiplying 0 and multiplication with negative numbers.
Now let's see how we can utilize nose and why it is useful for testing:
nosetest overcomplicated_test.py
If you run the test above you will get an error and one failed test. And these tests test negative numbers. Why is this?
File "/Users/GHajba/dev/python/overcomplicated.py", line 4, in multiply return b + multiply(a-1, b) File "/Users/GHajba/dev/python/overcomplicated.py", line 4, in multiply return b + multiply(a-1, b) File "/Users/GHajba/dev/python/overcomplicated.py", line 2, in multiply if a == 0: RecursionError: maximum recursion depth exceeded in comparison ---------------------------------------------------------------------- Ran 3 tests in 0.018s FAILED (errors=2) That's because of the stop-condition in the multiply function: the recursion stops if a has the value of 0. However with negative multiplier there is a problem. Now let's correct the script and re-run the tests: .FF ====================================================================== FAIL: overcomplicated_test.test_with_0 ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/User/GHajba/dev/python/overcomplicated_test.py", line 7, in test_with_0 assert multiply(0,0) == 1 AssertionError ====================================================================== FAIL: overcomplicated_test.test_negative ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/Users/GHajba/dev/python/overcomplicated_test.py", line 15, in test_negative assert multiply(42, -3) == -125 AssertionError ---------------------------------------------------------------------- Ran 3 tests in 0.001s FAILED (failures=2)
The 2 failures are there however now it is not a problem with recursion. As you can see, the test is failing but it shows you the spots where the errors happened. Now let's fix all the errors in the tests and run the script again:
nosetests -v overcomplicated_test.py overcomplicated_test.test_simple ... ok overcomplicated_test.test_with_0 ... ok overcomplicated_test.test_negative ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.001s
OK
Now we are done and all tests are finished without errors. The -v flag tells nose to display a verbose result on the tests which were executed.
This is a module which comes with your Python installation. This library arrived in this article because it is useful if you just want to start a simple HTTP server where you can try your homepage and see if the internal links are working.
To start a server just execute the following command from your command line:
python -m http.server
Note that this module is only available in Python 3. Well, this sentence is not fully true, the server is available for Python 2 too but it has to be used in a bit different way:
python -m SimpleHTTPServer
The default port both versions listen to is 8000. If you want to use a different port (like 8080) you can put the port number at the end of the starting command.
For Python 3 it looks like this:
python -m http.server 8080
And for Python 2 like this:
python -m SimpleHTTPServer 8080
If the server is running, you can see all the requests logged to the console (if you do not redirect the output into a file of course). This gives you an opportunity to see if the template you downloaded for free does not contain a malicious part where you are redirected or getting unwanted content.
Sometimes you have data that you want to display to your users in an application or just want to create nice graphics from it and showcase it in a document.
For this I suggest using the matplotlib Python library. The best way to demonstrate this is with a simple example where we display functions because functions may be easily calculated and they have a lot of data.
We will display the x^2 function between -3 and 3. One way would be to utilize NumPy (another Python library which helps with large mathematical calculations in an easy way) however in that case I would have to introduce that module too. Therefore I use a simple "hack" and create my data as a range:
import matplotlib.pyplot as plt
author__ = 'GHajba' x_axis = [x for x in range(-3,4)] y_axis = [x * x for x in x_axis] plt.plot(x_axis, y_axis) plt.show()
If you know mathematical functions and the range Python function you will know that the mathematical function works on every rational number but the range function creates only integers. And yes, the displayed image is not what we imagine:
To fix this we could create a range of rational numbers -- or we can display only the distinct integers. Let's do this second version because in this case we move out from the defaults of matplotlib.
The only thing to change here is the plt.plot function to the following line:
plt.scatter(x_axis, y_axis)
Now it is time to explain a bit what we have done: generating the axis' data is easy to understand: we have the numbers of the x axis and apply our function on each of these numbers to have the data of the y axis.
The interesting parts are the plt.plot or plt.scatter functions. The first function displays a simple plot where each data point is connected with a line and the second displays a scatter-plot where only the distinct data-points are displayed.
The plt.show() function is needed to display the generated plot.
Naturally you can create some more plot types with matplotlib like bar plots, pie charts, quiver plots and so on.
The documentation of the library is very good and you can find a lot of resources on the internet if you want to dig deeper into displaying data with Python in a better understandable way.
The documentation of matplotlib can be found here:
It can happen that you get a job inquiry where you need to interact with the Internet in order to get some website information, and send out
specific requests based on it. In such a situation you can use some default Python library like urllib but it has its limitations in readability and in the end your code will become too complex.
A very good alternative to leverage complexity and maintain readability is to use the Requests() module. As their statement on their homepage tells us: "Requests: HTTP for Humans". And this well describes what the developers of this module try to achieve.
In this section I will only showcase what requests can do and will not compare it to different solutions with default Python tools.
A nice introductory example is always to gather some JSON data from various APIs because it returns structured data which is easy to read.
r = requests.get('') print(r.status_code) print(r.headers['content-type']) print(r.encoding) print(r.json())
This basic example goes up to HTTPBin and requests user information. As a result you get back a JSON string which you can parse or just simply print to the console as I do in the example. HTTPBin is a nice site because it implements endpoints for the various HTTP requests. It is a good place to go to if you just want to try out your request-response handling.
You can naturally do other requests not just GET but POST, PUT, DELETE, HEAD and OPTIONS with the requests library.
And beside such basic requests you can provide additional information to a request like custom headers to send, cookies to add to the request or data for POST requests.
Why would someone send custom headers or even cookies? Well, that's because some websites filter for scraping bots. Those bots either do not use allowed user agents (which is sent in the request's header) or they do not send any information regarding this. In this case you can get your own header information and give it to the request.
Cookies are used if you communicate with a given website in a logged-in session where you have to send your authentication token every time you file a request. Here is one option to extract this token after authentication from the response's cookie jar or use session which is a sub-module of requests and handles such sessions for you where the cookies are sent to the server automatically.
The documentation of the Requests module can be found here:
PeeWee
PeeWee () is an ORM (Object-Relational Mapping) framework which comes in handy if you have an application where you have to fill data from objects (from class representations) to a relational database and vice versa.
Because it is easy to learn and use it can be a good alternative to SQLAlchemy if you just want to create a proof-of-concept in a short time.
import peewee __author__ = 'GHajba' db = peewee.SqliteDatabase('my_app.db') class Book(peewee.Model): author = peewee.CharField() title = peewee.TextField() class Meta: database = db if __name__ == '__main__': db.connect() Book.create_table(True) book = Book(author='Gabor Laszlo Hajba', title='Python 3 in Anger') book.save() for book in Book.filter(author="Gabor Laszlo Hajba"): print(book.title) db.close()
In the basic example above I create a database table, fill it with data and extract the contents of the table. Let's see how it is done in detail.
The
db = peewee.SqliteDatabase('my_app.db')
line initializes an object with a database connection. It is an SQLite database in the same folder where you run the script (if it is not present it will be created).
The class Book describes the model which we want to map into the database. It extends peewee.Model which makes it available for ORM. This model has two fields which are both text representations. Naturally in a real-life example authors would be another model and the reference would be done through foreign keys.
The Meta class inside our model class tells peewee which database configuration to use.
In the main part of the script I create a table through the model definition. The parameter provided tell peewee to ignore if the table already exists.
Creating a book happens through the constructor of the model class. It accepts keyword arguments where the keys correspond to the model's fields.
Calling the save() method on the newly created object will persist the information into the database.
In the for loop I display all books' titles where the author field matches the provided string.
If you run this sample application it will display Python 3 in Anger as many times as you have started this example script.
Naturally this is only the beginning where PeeWee can take you. My intention in adding this library to this article was to expand your library-set for ORM because SQLAlchemy is not the only way to map objects into relational databases.
PyGame
PyGame () is a library for creating games. I won't introduce the whole library in this article because I could write a book because there are so many options (and actually there are some books already out there which introduce you to PyGame). However I will give you a brief introduction how to install and run it with a simple example.
And by brief I mean really brief.
Installing the library is easy: pip install pygame. Here I have to mention that the last final stable release came out in 2009. Now it is in development again and the current (5th August 2016) available version is 1.9.2b8. I hope it will be released soon. From version 1.9.2 PyGame supports Python 3.2 and up therefore my examples are written in Python 3.5.
import pygame from pygame.locals import * __author__ = 'GHajba' clock = pygame.time.Clock() screen = pygame.display.set_mode((screen_width, screen_height)) pokemon = pygame.image.load('drowzee.png') position = (start_x, start_y)
while True:
clock.tick(40) screen.fill(BLACK) for event in pygame.event.get(): if not hasattr(event, 'key'): continue if event.key == K_ESCAPE: exit(0) position = (position[0] + speed, position[1]) screen.blit(pokemon, position) pygame.display.flip() if position[0] > screen_width or position[0] < start_x: pokemon = pygame.transform.flip(pokemon, True, False) speed = -1 * speed
This is a basic example where we take an image and move it around the screen. Because Pokémon Go is currently all the hype, I have taken a Pokémon to showcase it here. Note that some of the code have been removed to show only the interesting parts.
I think this example is pretty straightforward. Where it needs some explaining is inside the endless loop. The clock.tick tells PyGame how much frames per second the application has (technically it tells the library to pause until 1/40th of a second has passed and with this it limits the number of ticks to 40 per second).
Filling the screen in black is to remove previous renderings from the screen. If you do not include this line you may end-up with something like this when running your code:
Bad for the app, isn't it?
Anyway, the most important part of this code above is inside the for loop. That's because it controls event handling from outside. It looks only for key events and if it is the Esc key (top left on your keyboard) it exits. Naturally you can handle here other keys or mouse clicks which could influence the displayed information (moving the Pokémon only when the user presses a key or clicks with the mouse).
The code-part after the event handling takes care of moving the image around the screen from left-to-right. If it reaches the right edge it flips the image and makes it move from right-to-left. And it goes on and on and on and... Until the user press the Esc key.
You can read more about PyGame at their website:
As the name already suggests, prettytable () is a table printing library which displays the contents of the table as a pretty formatted table on the console. Let's see an example:
>>> from prettytable import PrettyTable >>> >>> table = PrettyTable(['food','price']) >>> table.add_row(['ham','$2']) >>> table.add_row(['eggs','$1']) >>> table.add_row(['spam','$4']) >>> print(table) +------+-------+ | food | price | +------+-------+ | ham | $2 | | eggs | $1 | | spam | $4 | +------+-------+ Beside this basic usage you can sort by the columns of the tables. Let's sort by the price column: >>> table.>> print(table) +------+-------+ | food | price | +------+-------+ | eggs | $1 | | ham | $2 | | spam | $4 | +------+-------+ And here it goes: a pretty table sorted by the second column. Note that the sort in the price column is alphabetical sorting. That's because the column contains strings. Another option where you can use this library is to display dictionary contents pretty: >>> from prettytable import PrettyTable >>> >>> d = {'ham':2, 'eggs':1, 'spam':4} >>> table = PrettyTable(['food', 'price']) >>> for key, value in d.items(): ... table.add_row([key, value]) ... >>> print(table) +------+-------+ | food | price | +------+-------+ | ham | 2 | | eggs | 1 | | spam | 4 | +------+-------+ >>> table.>> print(table) +------+-------+ | food | price | +------+-------+ | eggs | 1 | | ham | 2 | | spam | 4 | +------+-------+
Naturally this introduction scratches the features of this library, you can find more information in the documentation, like how to change the alignment of the columns or how to display your table as HTML markup. The documentation can be found here:
Progressbar () is a little tool where you can display a progress of your console application. I think there is not much more I can tell you because I think you get the idea. However there are many use cases where you can use this library, let's see some.
The easiest example is to wrap a list inside the progress bar and let it do it's magic:
>>> from time import sleep >>> from progressbar import ProgressBar >>> >>> bar = ProgressBar() >>> for i in bar(range(50)): ... sleep(0.5) ... 32% (16 of 50) |################### | Elapsed Time: 0:00:08 ETA: 0:00:17
As you can see (with this long sleep) it takes quite some time to get the bar filled up. And what you won't see so clearly is that the library supports screen resizing. This means that the output is always displayed on one line. If you have a smaller screen then it will be displayed in that small line, if you have a wider screen then the result will be broader using the whole screen width.
Note that in the example above you cannot re-use the bar variable because it iterated over the complete list. If you try to do that you will get an exception:
100% (50 of 50) |############################################################| Elapsed Time: 0:00:25 Time: 0:00:25 >>> for i in bar(range(50)): ... sleep(0.5) ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/progressbar/bar.py", line 387, in __next__ self.update(self.value + 1) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/progressbar/bar.py", line 496, in update % (self.min_value, self.max_value)) ValueError: Value out of range, should be between 0 and 50
To solve this, you have to create a new instance of the ProgressBar class.
Progressbar comes in two different flavors: the original version does not support Python 3 therefore you have to install the library called progressbar2 with pip if you use Python 3. For Python 2 pip install progressbar is fine.
The Python 2 version is a bit different: it does not display the elapsed time nor an ETA.
At some point in your development career you will reach the conclusion that you need universally unique identifiers. For example you want to send out promotion codes and want to ensure that everyone gets a unique code -- and do not want that some users could re-use the codes of others by guessing it.
For this comes the uuid module which ships with Python. It implements the versions 1, 3, 4 and 5 of the UUID () standards. This means you have quite some options to get things rolling.
I am not going into the details of the four implemented standards but I suggest you use uuid.uuid4() if you require some unique and random UUID. The uuid.uuid1() function contains the computer's network address that is quite problematic privacy-wise. I hope you won't be angry if I do not provide example results where you can see that it really contains the address (thus the UUIDs generated have very much in common). Just execute the following in your interactive interpreter:
>>> from uuid import uuid1 >>> for i in range(5): ... print(uuid1()) ... For comparison, let's take a look at uuid4: >>> from uuid import uuid4 >>> for i in range(5): ... print(uuid4()) ... 3d6031d8-e52f-42d9-9946-e4097b76afda 531d2050-b7cf-4529-aa49-ec21c02c1463 6e3169eb-877e-436f-b92e-9e5ada080e19 29b58d6b-b94d-4a31-bd9d-3130558a931c 10df135f-771d-44dc-b110-896646869fdc
As you can see, here the UUIDs are really different and do not contain matching parts.
Conclusion
There are many more interesting Python modules out there. I have chosen these to give you some introduction to some modules you may have never encountered or you thought it would be too hard to get acquainted with.
Recent Stories
Top DiscoverSDK Experts
Compare Products
Select up to three two products to compare by clicking on the compare icon () of each product.{{compareToolModel.Error}}
Your Comment | http://www.discoversdk.com/blog/10-interesting-python-modules-to-learn-in-2016 | CC-MAIN-2021-04 | refinedweb | 4,125 | 62.78 |
textclean 0.4
Handle the all-too-common decoding and encoding errors in Python!
A Python package for cleaning text that handles decoding and encoding errors in all formats.
Usage
Place the bad text in a file, read it from your Python source, and clean it using the textclean.clean() function.
test.py
from textclean.textclean import textclean
text = open("badtext.txt").read()
cleaned_text = textclean.clean(text)
print cleaned_text
- Author: Vikas Bharti
- Maintainer: Vikas Bharti
- License: MIT
- Categories
- Package Index Owner: vikasbharti
- DOAP record: textclean-0.4.xml | https://pypi.python.org/pypi/textclean/0.4 | CC-MAIN-2015-22 | refinedweb | 103 | 66.74 |
Vue 3 with TypeScript, Options API vs Composition API
This article is available as a screencast!
Options API or Composition API, JavaScript or TypeScript - one API and one language to rule them all?
In this article, I will convert a Vue.js 3 component built using regular JavaScript and the Options API to use TypeScript and the Composition API. We will see some of the differences, and the potential benefits.
You can find the source code for this article here.
The component has tests, and we will see whether those tests are useful during the refactor, and whether we need to improve them. A good rule of thumb is that if you are purely refactoring, and not changing the public behavior of the component, you should not need to change your tests. If you do, you are testing implementation details, which is not ideal.
The Component
I will be refactoring the News component. It's written using render functions, since Vue Test Utils and Jest don't have official support for Vue.js 3 components yet. For those unfamiliar with render functions, I commented the generated HTML. Since the source code is quite long, the basic idea is that this markup is generated:
<div>
  <h1>Posts from {{ selectedFilter }}</h1>
  <Filter v-for="filter in filters" ... />
  <NewsPost v-for="post in filteredPosts" ... />
</div>
This post shows some news posts, rendered by <NewsPost />. The user can configure which period of time they'd like to see news from using the <Filter /> component, which basically just renders some buttons with labels like "Today", "This Week" etc.
I’ll introduce the source code for each component as we work through the refactor. To give an idea of how a user interacts with the component, here are the tests:
describe('FilterPosts', () => {
  it('renders today posts by default', async () => {
    const wrapper = mount(FilterPosts)

    expect(wrapper.find('.post').text()).toBe('In the news today...')
    expect(wrapper.findAll('.post')).toHaveLength(1)
  })

  it('toggles the filter', async () => {
    const wrapper = mount(FilterPosts)

    wrapper.findAll('button')[1].trigger('click')
    await nextTick()

    expect(wrapper.findAll('.post')).toHaveLength(2)
    expect(wrapper.find('h1').text()).toBe('Posts from this week')
    expect(wrapper.findAll('.post')[0].text()).toBe('In the news today...')
    expect(wrapper.findAll('.post')[1].text()).toBe('In the news this week...')
  })
})
The changes I’ll be discussing are:
- using the composition API’s
refand
computedinstead of
dataand
computed
- using TypeScript to strongly type
filters, etc.
- most importantly, which API I like, and the pros and cons of JS and TS
Typing the
filter type and Refactoring
Filter
It makes sense to start from the simplest component, and work our way up. The
Filter component looks like this:
const filters = ['today', 'this week'] export const Filter = defineComponent({ props: { filter: { type: String, required: true } }, render() { // <button @click="$emit('select', filter)>{{ filter }}/<button> return h('button', { onClick: () => this.$emit('select', this.filter) }, this.filter) } })
The main improvement we will make it typing the
filter prop. We can do this using a
type (you could also use an
enum):
type FilterPeriod = 'today' | 'this week' const filters: FilterPeriod[] = ['today', 'this week'] export const Filter = defineComponent({ props: { filter: { type: String as () => FilterPeriod, required: true } }, // ... )
You also need this weird
String as () => FilterPeriod syntax - I am not too sure why, some limitation of Vue’s
props system, I suppose.
This change is already a big improvement - instead of the reader trying to figure out what kind of
string is actual a valid
filter, and potentially making a typo, they can leverage an IDE and find out before they even run the tests or try to open the app.
We can also move the
render function to the
setup function; this way, we get better type inference on
this.filter and
this.$emit:
setup(props, ctx) { return () => h('button', { onClick: () => ctx.emit('select', props.filter) }, props.filter) }
The main reason this gives better type inference is that it is easier to type
props and
context, which are easily defined objects, to
this, which is highly dynamic in JavaScript.
I’ve heard when Vetur, the VSCode plugin for Vue components is updated for Vue 3, you will actually get type inference in
<template>, which is really exciting!
The tests still pass - let’s move on to the
NewsPost component.
Typing the
post type and
NewsPost
NewsPost looks like this:
export const NewsPost = defineComponent({ props: { post: { type: Object, required: true } }, render() { return h('div', { className: 'post' }, this.post.title) } })
Another very simple component. You’ll notice that
this.post.title is not typed - if you open this component in VSCode, it says
this.post is
any. This is because it’s difficult to type
this in JavaScript. Also,
type: Object is not exactly the most useful type definition. What properties does it have? Let’s solve this by defining a
Post interface:
interface Post { id: number title: string created: Moment }
While we are at it, let’s move the
render function to
setup:
export const NewsPost = defineComponent({ props: { post: { type: Object as () => Post, required: true }, }, setup(props) { return () => h('div', { className: 'post' }, props.post.title) } })
If you open this in VSCode, you’ll notice that
props.post.title can have it’s type correctly inferred.
Updating
FilterPosts
Now there is only one component remaining - the top level
FilterPosts component. It looks like this:
export const FilterPosts = defineComponent({ data() { return { selectedFilter: 'today' } }, computed: { filteredPosts() { return posts.filter(post => { if (this.selectedFilter === 'today') { return post.created.isSameOrBefore(moment().add(0, 'days')) } if (this.selectedFilter === 'this week') { return post.created.isSameOrBefore(moment().add(1, 'week')) } return post }) } }, // <h1>Posts from {{ selectedFilter }}</h1> // <Filter // render() { return ( h('div', [ h('h1', `Posts from ${this.selectedFilter}`), filters.map(filter => h(Filter, { filter, onSelect: filter => this.selectedFilter = filter })), this.filteredPosts.map(post => h(NewsPost, { post })) ], ) ) } })
I will start by removing the
data function, and defining
selectedFilter as a
ref in
setup.
ref is generic, so I can pass it a type using
<>. Now
ref know what values can and cannot be assigned to
selectedFilter.
setup() { const selectedFilter = ref<FilterPeriod>('today') return { selectedFilter } }
The test are still passing, so let’s move the
computed method,
filteredPosts, to
setup.
const filteredPosts = computed(() => { return posts.filter(post => { if (selectedFilter.value === 'today') { return post.created.isSameOrBefore(moment().add(0, 'days')) } if (selectedFilter.value === 'this week') { return post.created.isSameOrBefore(moment().add(1, 'week')) } return post }) })
This hardly changes - the only real difference is instead of
this.selectedFilter, we use
selectedFilter.value.
value is required to access the
selectedFilter - without
value, you are referring to the
Proxy object, which is a new ES6 JavaScript API that Vue uses for reactivity in Vue 3. If you open this in VSCode, you will notice that
selectedFilter.value === 'this year', for example, would be flagged as a compiler error. We typed
FilterPeriod so errors like this can be caught by the IDE or compiler.
This final change is to move the
render function to
setup:
return () => h('div', [ h('h1', `Posts from ${selectedFilter.value}`), filters.map(filter => h(Filter, { filter, onSelect: filter => selectedFilter.value = filter })), filteredPosts.value.map(post => h(NewsPost, { post })) ], )
We are now returning a function from
setup, so we not longer need to return
selectedFilter and
filteredPosts - we directly refer to them in the function we return, because they are declared in the same scope.
All the tests pass, so we are finished with the refactor.
Discussion
One important thing to notice is I did not have to change my tests are all for this refactor. That’s because the tests focus on the public behavior of the component, not the implementation details. That’s a good thing.
While this refactor is not especially interesting, and doesn’t bring any direct business value to the user, it does raise some interesting points to discuss as developers:
- should we use the Composition API or Options API?
- should we use JS or TS?
Composition API vs Options API
This is probably the biggest change moving from Vue 2 to Vue 3. Although you can just stick with the Options API, the fact both are present will natural lead to the question “which one is the best solution for the problem?” or “which one is most appropriate for my team?”.
I don’t think one is superior to the other. Personally, I find that the Options API is easier to teach people who are new to JavaScript framework, and as such, more intuitive. Understanding
ref,
reactive, and the need to refer to
ref using
.value is a lot to learn. The Options API, at the very least, forces you into some kind of structure with
computed,
methods and
data.
Having said that, it is very difficult to leverage the full power of TypeScript when using the Options API - one of the reasons the Composition API is being introduced. This leads into the second point I’d like to discuss…
Typescript vs JavaScript
I found the TypeScript learning curve a bit difficult at first, but now I really enjoy writing applications using TypeScript. It has helped me catch lots of bugs, and makes things much easier to reason about - knowing a
prop is an
Object is nearly useless if you don’t know what properties the object has, and if they are nullable.
On the other hand, I still prefer JavaScript when I want to learn a new concept, build a prototype, or just try a new library out. The ability to write code and run it in a browser without a build step is valuable, and I also don’t generally care about specific types and generics when I’m just trying something out. This is how I first learned the Composition API - just using a script tag and building a few small prototypes.
Once I’m confident in a library or design pattern, and have a good idea of the problem I’m solving, I prefer to use TypeScript. Consider how widespread TypeScript is, the similarities to other popular typed languages, and the many benefits it brings, it feels professional negligent to write a large, complex application in JavaScript. The benefits of TypeScript are too attractive, especially for defining complex business logic or scaling a codebase with a team.
Another place I still like JavaScript is design centric components or applications - if I’m building something that primarily operates using CSS animations, SVG and only uses Vue for things like
Transition, basic data binding and animation hooks, I find regular JavaScript to be appropriate. The moment business logic or complexity creeps in, however, I like to move to TypeScript.
In conclusion, I like TypeScript a lot, and the Composition for that reason - not because I think it is more intuitive or concise than the Options API, but because it lets me leverage TypeScript more effectively. I think both the Options API and Composition API are appropriate ways to build Vue.js components.
Conclusion
I demonstrated and discussed:
- gradually adding types to a component written in regular JavaScript
- good tests focus on behavior, not implementation details
- the benefits of TypeScript
- Options API vs Composition API
Absolutely no unsolicted spam. Unsubscribe anytime. | https://vuejs-course.com/blog/vuejs-3-typescript-options-composition-api | CC-MAIN-2021-04 | refinedweb | 1,830 | 55.03 |
Introduction to CNN
Convolutional neural networks (CNN) are primarily used to classify images or identify pattern similarities between them.
So a convolutional network receives a normal color image as a rectangular box whose width and height are measured by the number of pixels along those dimensions, and whose depth is three layers deep, one for each letter in RGB.
Also, read – Understanding a Neural Network
As images move through a convolutional network, different patterns are recognised just like a normal neural network.
But here rather than focussing on one pixel at a time, a convolutional net takes in square patches of pixels and passes them through a filter.
That filter is also a square matrix smaller than the image itself, and equal in size to the patch. It is also called a kernel.
Now lets start with importing the libraries
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import cv2 import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Dropout, Activation, Conv2D, MaxPooling2D
We need to train a model first so we will check training data In the below code we are iterating through all images in train folder and then we will split image name with deliminiter “.”
We have names like dog.0, dog.1, cat.2 etc.. Hence after splitting we are gonna get results like “dog’, “cat” as category value of the image. To make this example more easy we will consider dog as “1” and cat as “0”.
Now every image is actually a set of pixels so how to get our computer know that. Its simple convert all those pixels into an array.
So we are going to use here a cv2 library to read our image into an array and also it will read as a gray scale image.
train_dir = # your path to train dataset path = os.path.join(main_dir,train_dir) for p in os.listdir(path): category = p.split(".")[0] img_array = cv2.imread(os.path.join(path,p),cv2.IMREAD_GRAYSCALE) new_img_array = cv2.resize(img_array, dsize=(80, 80)) plt.imshow(new_img_array,cmap="gray") break
Okay so the above code was more for understanding purpose. Nowe we will get to the real part of coding here.
Declare your training array X and your target array y. Here X will be the array of pixels and y will be value 0 or 1 indicating its a dog or cat Write convert function to map category “dog” or “cat” into 1 and 0
Create a function create_test_data which takes all training images into a loop. Converts into image array. Resize image into 80 X80. Append image into X array. Append category value into y array.
X = [] y = [] convert = lambda category : int(category == 'dog') def create_test_data(path): for p in os.listdir(path): category = p.split(".")[0] category = convert(category) img_array = cv2.imread(os.path.join(path,p),cv2.IMREAD_GRAYSCALE) new_img_array = cv2.resize(img_array, dsize=(80, 80)) X.append(new_img_array) y.append(category)
Now call the function, but also later convert X and y into numpy array We also have to reshape X with the below code
create_test_data(path) X = np.array(X).reshape(-1, 80,80,1) y = np.array(y)
If you see the values of X you can see a variety of values between 0- 255 . Its because every pixel has different density of black and white. But with the wide range of values it becomes difficult for a training model to learn ( sometimes memorize ).
How to resolve this And you guessed it right . You can normalize the data. We can use Keras normalize here also . But well we already know all values are having range between 0-255 so we can just divide it by 255 and get all values scaled between 0 -1
That’s what we have done below. You can skip this step to see the difference between accuracy. Don’t believe everything I say. Experiment and see for yourself
#Normalize data X = X/255.0
Now Lets train our model
model = Sequential() # Adds a densely-connected layer with 64 units to the model: model.add(Conv2D(64,(3,3), activation = 'relu', input_shape = X.shape[1:])) model.add(MaxPooling2D(pool_size = (2,2))) # Add another: model.add(Conv2D(64,(3,3), activation = 'relu')) model.add(MaxPooling2D(pool_size = (2,2))) model.add(Flatten()) model.add(Dense(64, activation='relu')) # Add a softmax layer with 10 output units: model.add(Dense(1, activation='sigmoid')) model.compile(optimizer="adam", loss='binary_crossentropy', metrics=['accuracy'])
Now we will fit our model with training data.
Epochs :- How many times our model will go through data
Batch size :- How much amount of data at once you wanna pass through the model
validation_split :- How much amount of data (in this case its 20 %) you will need to check cross validation error
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
Train on 20000 samples, validate on 5000 samples Epoch 1/10 20000/20000 [==============================] - 16s 790us/step - loss: 0.6109 - acc: 0.6558 - val_loss: 0.5383 - val_acc: 0.7308 Epoch 2/10 20000/20000 [==============================] - 14s 679us/step - loss: 0.4989 - acc: 0.7557 - val_loss: 0.4989 - val_acc: 0.7564 Epoch 3/10 20000/20000 [==============================] - 14s 679us/step - loss: 0.4502 - acc: 0.7916 - val_loss: 0.4728 - val_acc: 0.7796 Epoch 4/10 20000/20000 [==============================] - 14s 680us/step - loss: 0.4059 - acc: 0.8143 - val_loss: 0.5290 - val_acc: 0.7644 Epoch 5/10 20000/20000 [==============================] - 14s 679us/step - loss: 0.3675 - acc: 0.8334 - val_loss: 0.4572 - val_acc: 0.7938 Epoch 6/10 20000/20000 [==============================] - 14s 679us/step - loss: 0.3181 - acc: 0.8610 - val_loss: 0.4744 - val_acc: 0.7958 Epoch 7/10 20000/20000 [==============================] - 14s 680us/step - loss: 0.2704 - acc: 0.8841 - val_loss: 0.4575 - val_acc: 0.7976 Epoch 8/10 20000/20000 [==============================] - 14s 681us/step - loss: 0.2155 - acc: 0.9104 - val_loss: 0.5198 - val_acc: 0.7878 Epoch 9/10 20000/20000 [==============================] - 14s 679us/step - loss: 0.1646 - acc: 0.9357 - val_loss: 0.6021 - val_acc: 0.7928 Epoch 10/10 20000/20000 [==============================] - 14s 680us/step - loss: 0.1227 - acc: 0.9532 - val_loss: 0.6653 - val_acc: 0.7874
Now the time has come to finally PREDICT, so feed your CNN model with test data to predict.
predictions = model.predict(X_test)
We are rounding the result here as we used sigmoid function and we got the probability values in our predicted data set
predicted_val = [int(round(p[0])) for p in predictions]
Now you have to make submission data frame to submit your result set.
submission_df = pd.DataFrame({'id':id_line, 'label':predicted_val})
Write your data frame to a csv file
submission_df.to_csv("submission.csv", index=False)
Also, read – 10 Machine Learning Projects to Boost your Portfolio
I hope this CNN model will help you, mention in comments on what topic you want the next article. | https://thecleverprogrammer.com/2020/06/16/dog-and-cat-classification-using-convolutional-neural-networks-cnn/ | CC-MAIN-2021-43 | refinedweb | 1,151 | 70.39 |
Back to: C#.NET Tutorials For Beginners and Professionals
Thread Pool in C#
In this article, I am going to discuss Thread Pool in C# with examples. Please read our previous article where we discussed the Performance Testing of a multithreaded application in C#. As part of this article, we are going to discuss the following pointers.
- The Request Life cycle of a Thread.
- What is Thread Pooling in C#?
- Why do we need C# Thread Pool?
- Performance testing between normal thread and thread pooling
The Request Life cycle of a Thread in C# with Example.
Let us understand the life cycle of a thread in C#. In order to understand this, please have a look at the following image. When the .NET framework receives a request (the request can be a method call or function call from any kind of application). To that handle request, a thread object is created. When the thread object is created some resources are allocated to that thread object such as memory. After then the task is executed and once the task is completed then the garbage collector removes that thread object for free-up memory allocation. This is the life cycle of a thread in C#.
These steps are going to be repeated again and again for each request that comes in a multithread application. That means every time a new thread object created and get allocated in the memory. If there are many requests then there will be many thread objects and if there are many thread objects then there will be load on the memory which slows down your application.
There is a great room for performance improvements. The Thread object is created, resources are allocated, the task is executed, and then it should not go for garbage collection, instead of how about taking the thread object and put it into a pool as shown in the below image. This is where thread pooling comes into the picture.
Thread Pool in C#:
Thread pool in C# is nothing but a collection of threads that can be reused to perform no of tasks in the background. Now when a request comes, then it directly goes to the thread pool and checks whether there are any free threads available or not. If available, then it takes the thread object from the thread pool and executes the task as shown in the below image.
Once the thread completes its task then it again sent back to the thread pool so that it can reuse. This reusability avoids an application to create the number of threads and this enables less memory consumption.
How to use C# Thread Pool?
Let us see a simple example to understand how to use Thread Pooling. Once you understand how to use thread pooling then we will see the performance benchmark between the normal thread object and thread pool.
Step1:
In order to implement thread pooling in C#, first, we need to import the Threading namespace as ThreadPool class belongs to this namespace as shown below.
using System.Threading;
Step2:
Once you import the Threading namespace, then you need to use the ThreadPool class and using this class you need to call the QueueUserWorkItem static method. If you go to the definition of the QueueUserWorkItem method, then you will see that this method takes one parameter of type WaitCallback object. While creating the object of the WaitCallback class, you need to pass the method name that you want to execute.
ThreadPool.QueueUserWorkItem(new WaitCallback(MyMethod));
Here, the QueueUserWorkItem method Queues the function for execution and that function executes when a thread becomes available from the thread pool. If no thread is available then it will wait until one thread gets freed. Here MyMethod is the method that we want to execute by a thread pool thread.
The complete code is given below.
As you can see in the below code, here, we create one method that is MyMethod and as part of that method, we simply printing the thread id, whether the thread is a background thread or not and whether it is from thread pool or not. And we want to execute this method 10 times using the thread pool threads. So, here we use a simple for each loop and use the ThreadPool class and call that method.
using System; using System.Threading; namespace ThreadPoolApplication { class Program { static void Main(string[] args) { for (int i = 0; i < 10; i++) { ThreadPool.QueueUserWorkItem(new WaitCallback(MyMethod)); } Console.Read(); } public static void MyMethod(object obj) { Thread thread = Thread.CurrentThread; string message = $"Background: {thread.IsBackground}, Thread Pool: {thread.IsThreadPoolThread}, Thread ID: {thread.ManagedThreadId}"; Console.WriteLine(message); } } }
Once you execute the above code, it will give you the following output. As you can see, it shows that it is a background thread and this thread is from the thread pool and the thread Ids may vary in your output. Here, you can see three threads handle all the 10 method calls.
Performance testing using and without using Thread Pool in C# with Example:
Let us see an example to understand the performance benchmark. Here, we will compare how much time does the thread object takes and how much time does the thread pool thread takes to do the same task i.e. to execute the same methods.
In order to do this, what we are going to do is, we will create a method called Test as shown below. This method takes an input parameter of type object and as part of that Test method we are doing nothing means an empty method.
Then we will create two methods such as MethodWithThread and MethodWithThreadPool and inside these two methods, we will create one for loop which will execute 10 times. Within for loop, we are going to call the Test as shown below. As you can see, the MethodWithThread method uses the Thread object to call the Test method while the MethodWithThreadPool method uses the ThreadPool object to call the Test method.
Now we need to call the above two methods (MethodWithThread and MethodWithThreadPool) from the main method. As we are going to test the performance benchmark, so we are going to call these two methods between the stopwatch start and end as shown below. The Stopwatch class is available in System.Diagnostics namespace. The for loop within the Main method is for warm-up. This is because when we run the code for the first time, compilation happens and compilation takes some time and we don’t want to measure that.
The complete code is given below.
using System; using System.Diagnostics; using System.Threading; namespace ThreadPoolApplication { class Program { static void Main(string[] args) { for (int i = 0; i < 10; i++) { MethodWithThread(); MethodWithThreadPool(); } Stopwatch stopwatch = new Stopwatch(); Console.WriteLine("Execution using Thread"); stopwatch.Start(); MethodWithThread(); stopwatch.Stop(); Console.WriteLine("Time consumed by MethodWithThread is : " + stopwatch.ElapsedTicks.ToString()); stopwatch.Reset(); Console.WriteLine("Execution using Thread Pool"); stopwatch.Start(); MethodWithThreadPool(); stopwatch.Stop(); Console.WriteLine("Time consumed by MethodWithThreadPool is : " + stopwatch.ElapsedTicks.ToString()); Console.Read(); } public static void MethodWithThread() { for (int i = 0; i < 10; i++) { Thread thread = new Thread(Test); } } public static void MethodWithThreadPool() { for (int i = 0; i < 10; i++) { ThreadPool.QueueUserWorkItem(new WaitCallback(Test)); } } public static void Test(object obj) { } } }
Output:
As you can see in the above output, the Time consumed by MethodWithThread is 663 and the Time consumed by MethodWithThreadPool is 93. If you observe there is a vast time difference between these two.
So it proofs that the thread pool gives better performance as compared to the thread class object. If there are needs to create one or two threads then you need to use Thread class object while if there is a need to create more than 5 threads then you need to go for thread pool class in a multithreaded environment.
That’s it for today. In the next article, I am going to discuss Asynchronous Programming in C# with examples. Here, in this article, I try to explain Thread Pool in C# with examples. I hope you enjoy this article and understood C# thread pooling.
5 thoughts on “Thread Pooling in C#”
When I am executing this program in a separate static class, and calling method from Main method, then time elapsed with thread is very less as compared to Thread Pooling!!! Mean opposite result
Why is it so.
This is the output :
Execution using threads
Total time taken with thread : 0
Execution using thread POOL
Total time taken with thread POOL : 8
Execution using Thread
Time consumed by MethodWithThread is : 4450
Execution using Thread Pool
Time consumed by MethodWithThreadPool is : 21905
for (int i = 0; i < 10; i++)
{
}
Thread thread = new Thread(Test);
thread.start();
Threadpool has a limit of 25 thread per pool when your has more then it becomes to deadlock. this is up to how many cores of your laptop. | https://dotnettutorials.net/lesson/thread-pooling/ | CC-MAIN-2022-21 | refinedweb | 1,478 | 71.75 |
1. import java.io.*;
2. public class Example4 {
3. public static void main(String[] args) {
4. Example4 e4 = new Example4();
5. try{
6. e4.check();}
7. catch(IOException e){}}
8. void check() throws IOException{
9. System.out.println("Inside check() method of Class ");
10. throw new IOException();}}
11. class Subexample4 extends Example4 {
12. void check() {
13. System.out.println("Inside check() Method of Subclass");
14. }}
What will be the output of the following code snippet :
(1) Above program will throw exception & will not run.
(2).Inside check() method of class
(3) Inside check() method of Subclass
(4)Inside check() method of class
Inside check() method of Subclass.
(2)
Explanation :
Because the "e4"object is the member of the main class , therefore it can call only the methods of the main | http://www.roseindia.net/tutorial/java/scjp/part2/question4.html | CC-MAIN-2016-44 | refinedweb | 130 | 70.8 |
API to get the list of hotels
In our travelling application, we need to show the list of hotels in a city (St. Petersburg, Russia at the moment, but more will be needed in the future). The idea was to find a hotel information provider, and then upload the complete list into our own database. The following info is needed for each hotel:
- Name in English
- Location (latitude / longitude)
- An image would be nice
- Probably, some sort of rating We started with Booking.com, which does have API, but the API is NOT public, and one has to provide website/bank account information to become an affiliate and get the access.
Then, I had a try with GeoNames.org, which I once used for the list of populated localities in Europe. Unfortunately, the POI list in Russia is quite poor there.
We had next try with OpenStreetMap data. I've downloaded a 7 GB XML file with POIs of Russia. But I got disappointed once again after parsing it: only about 100 hotels in St. Petersburg + most names are in Russian.
Finally, we've found the solution. HotelsCombined has an easy-to-access and useful service to download the data feed files with hotels. 590 hotels in St. Petersburg - good enough! Here is how you get it:
- Go to
- Register there (no company or bank data is needed)
- Open "Data feeds" page
- Choose "Standard data feed" -> "Single file" -> "CSV format" (you may get XML as well)
Parsing the CSV file is a piece of cake, here is a sample Python code to filter out hotels from St. Petersburg:
def filter_hotels(from_file): with open(from_file, 'r') as fr: while True: line = fr.readline() if len(line) == 0: break # EOF hotel = line.split(',') city_code = hotel[5] country_code = hotel[10] if city_code == 'St_Petersburg' and country_code == 'RU': hotel_name = hotel[2] print hotel_name
Here is the complete list of fields in CSV/XML:
hotelId, hotelFileName, hotelName, rating, cityId, cityFileName, cityName, stateId, stateFileName, stateName, countryCode, countryFileName, countryName, imageId, address, minRate, currencyCode, Latitude, Longitude, NumberOfReviews, ConsumerRating, PropertyType, ChainID, Facilities
Update: Unfortunately, HotelsCombined.com has introduced the new regulations: they've restricted the access to data feeds by default. To get the access, a partner must submit some information on why one needs the data. The HC team will review it and then (maybe) will grant access. Sad but true. I'm in the middle of getting through this guarg, I'll let you know about the result.
Update 2: Yes, we got the access to data feeds again. After reviewing the application form, HotelsCombined asked us to let them know our IP, white-listed it and now we can download the files. Still, I don't know why they need all this procedure at all.
Like this post? Please share it! | https://mikhail.io/2012/05/17/api-to-get-the-list-of-hotels/ | CC-MAIN-2018-09 | refinedweb | 464 | 65.01 |
Here, we develop C and Java code to implement binary search using recursion. We develop a method
recBinarySearch that takes a sorted array
arr storing
n integers, where
n >= 1 and the search key, and returns the location of the search key if found; else -1.
Binary search looks at the middle element of the list and see if the element sought-for is found. If not, than it compares the sought-for element with the middle element of the array.
Following are the Java and C codes respectively to implement binary search using recursion.
C program for recursive binary search
#include <stdio.h> int recBinarySearch(int*, int, int, int); int main() { int arr[] = {11, 12, 13, 14, 15, 16, 17, 18 ,19, 20}; int key = 13; int found = recBinarySearch(arr, key, 0, 10); if (found > -1) { printf ("%d found on location: %d\n", key, found); } else { printf("Item not found.\n"); } } ====== 13 found on location: 2
Java program for recursive binary search
class RecursiveBinarySearch { public static void main (String[] args) { int arr[] = {11, 12, 13, 14, 15, 16, 17, 18 ,19, 20}; int found = recBinarySearch(arr, 13, 0, arr.length - 1); if (found > -1) { System.out.println ("Item found on location: " + found); } else { System.out.println("Item not found"); } } static ====== D:\JavaPrograms>javac RecursiveBinarySearch.java D:\JavaPrograms>java RecursiveBinarySearch Item found on location: 2
Hope you have enjoyed reading C and Java programs for recursive binary search. | http://cs-fundamentals.com/tech-interview/dsa/recursive-binary-search.php | CC-MAIN-2017-17 | refinedweb | 238 | 53 |
Comprehensive
Clinic Assessment Software Application (CoCASA)
CoCASA
Frequently Asked Questions
February 15, 2006
On
this page:
Q:
Is CDC eliminating CASA and ACASA once CoCASA is completed?
We have providers that utilize both software programs for
evaluative purposes at this time. If they have to shift to
CoCASA, we need to gauge training needs.
A:
CASA
and ACASA will no longer be supported once CoCASA has been
made available. Obviously there will be a transition period
between the release of the new software and the lack of any
support for the existing software, but once CoCASA is made
available, all existing CASA and ACASA users are encouraged
to begin using the new software. (As for training needs, there
will be a training module within CoCASA that will help new
users learn the software.)
Q:
When we initially set up CoCASA we entered multiple user names
and they are all still listed in the log-on screen. We understand
what we did and why the names are there, but is there a way
to delete the names from the log-on screen?
A:
Unfortunately,
there is no way to delete names from the log-on screen once
they have been created.
Q:
If we didn't use your evaluation software for 2004 how will
we populate the 2005 version?
A:
We
are working on the ability to import information from a registry
or from VACMAN for the next release of CoCASA. For the time
being, you will need to manually enter the data into the 2005
software.
Q:
We have our own inventory system that we use as our PIN number,
and don’t readily have access to VACMAN numbers. Can
we use our inventory 11-digit PIN number?
A:
We
told people to use the VACMAN number as their PIN because
during one of the quarterly conference calls the majority
of people said the VACMAN number and the VFC PIN number are
the same. Also, in the next release, users will have the ability
to import provider information from VACMAN to populate the
database. If you don’t plan on importing from VACMAN,
then don’t worry. You can use whatever VFC numbering
system you currently have in place. Just remember, each provider
you enter into the database must have a unique VFC number.
So if you have several providers with the same PIN, you will
have to come up with a system for altering the PIN to make
it unique (e.g. adding a letter at the end of the number).
Q:
We like to sort our providers by specialty (Pediatrician,
Family Practice, etc). Should we just select ‘Other’
in the Provider Type variable and enter the specialty there?
A:
It
is important to first understand how the Provider Type variable
functions in the database before selecting ‘other’.
‘Provider type’ is an important
variable for the summary reports that you need to produce
for the annual VFC Management Survey. Currently, the report
reviews the ‘Provider:
Is it possible to create an open-ended question in the Custom
Question section?
A:
At
this time you cannot create an open-ended question. A solution
to work around this problem would be to create a multiple
choice question with broad categories. For example, if you
wanted learn the hours of service, you could create a multiple
choice question with the options of: normal business hours
(8-9 hours with appointments available between 8 a.m. and
6 p.m.), extended early business hours (appointments available
before 8 a.m.), extended late business hours (appointments
available after 6 p.m.), Saturday a.m. hours (morning), etc.
We are considering the possibility of including an open-ended
option in the Custom Questions setup for future software versions..
We recommend that you contact us at nipCoCASA@cdc.gov
to assist with setting up your centralized database.
It
is important to note, however, that difficulties with using
CoCASA from a shared drive have been documented. In some cases
(particularly where users are accessing it remotely rather
than from one central site) the software has been very slow
to respond (slower than it is normally). website (exit site)
that lists the minimum system requirements and which machines
you should be able to deploy a .NET application.
Q:
Is the software compatible with Windows 98?
A:
At
this time, we do not have a solution for running CoCASA under
Windows 98. The software programmers are looking into the
issue and will provide updates, when available.
Top
Q:
If we retain all of the provider data and questionnaire data,
will duplicates be created when the import becomes functional?
Should the files be deleted before that? What do you suggest?
A:
There
should not be duplicate entries when you merge the data. The
software will be set to merge using the VFC PIN# for the providers.
As long as all users in a program use the same PIN for a provider,
then the visit data will all be merged into one record for
that provider. If different PIN numbers are used for the same
clinic, then you will have duplicate entries that will need
to be weeded out. This is why we are so adamant about unique
PIN numbers for each provider site and that everyone use them
consistently. If possible, populate all databases with the
provider names/address and PIN numbers from the start so that
differences are less likely to occur.
Q:
As we have several field staff that conduct VFC-AFIX visits
around the state, do you have any suggestion as to how we
can compile the information in CoCASA that we need to submit
with our Annual Report without having to do double entry?
Or are you expecting that the import function will be available
before the end of the year so we can import all the visits
and generate the reports we need for the Annual Report?
A:
The
import/export functions will be made available in the September
’05 patch release. You will be able to import data from
all end users in time to submit your annual reports to CDC
and elsewhere.
Q:
Can the field ‘Number of age-eligible children in provider
practice’ be imported along with the PINs prior to completing
visit information?
A:
Since
the number of age-eligible children is not a field in the
provider setup screen it cannot be imported.
Q:
How can our local import template be incorporated into CoCASA
so that it is available when the software is downloaded from
the web?
A:
Specific
instructions for this process have been added to the website
at the following location:
Q:
How can I import my WinCASA sites to my providers that are
already in CoCASA?
A:
First
import your legacy WinCASA data into CoCASA. Then for each
assessment use the “Move” button on the Assessment
Setup tab to move each assessment from the provider that was
created from the import to the provider that was already in
CoCASA.
Top
VFC/AFIX
Evaluation Module
Q:
If I conduct a follow-up site visit to see if a provider has
addressed issues of noncompliance from an earlier VFC Site
Visit, how do I document that visit? What type of visit would
this be?
A:
If
you do a follow-up visit, you should enter it as a new visit
and select ‘VFC
Follow-Up Visit’
as the purpose of visit. Although there is not currently anywhere
for you to enter notes summarizing the contents of that follow
up visit, the assumption is that you are following up on the
corrective actions you recommended at the previous VFC site
visit.
Q:
Regarding the storage section of the questionnaire, it has
fields for up to five different refrigerators. Our concern
is that our representatives won't remember which refrigerator
is which (when doing future site visits). This would be an
example where the person conducting the VFC site visit should
enter side notes for clarification.
A:
Since
there is not currently a separate notes field you may have
to keep track of information like this in a separate, electronic
file. If, however, one of the provider’s refrigerator
(or freezers) is outside the acceptable range, you can document
the exact location of this refrigerator in question #35 (where
you enter corrective actions). For example, “Refrigerator
#1 (next to sink in exam room #12) was out of the acceptable
temp. range. Vaccine needs to be moved to alternative unit
until temp can be controlled and monitored appropriately.”
Q:
If the provider's thermometer and our field representative's
thermometer have both been calibrated and certified, and if
there is a temperature difference (between thermometers, with
one outside the allowable range of proper storage), are there
any guidelines as to what is acceptable?
A:
If
the reviewer has brought their own thermometer to the practice,
then this is the temperature that should be entered into the
Site Visit Questionnaire. If this temperature is outside the
allowable range then the provider is not in compliance for
this aspect of vaccine storage.
Q:
Is there a way to lock the custom questions to ensure they
are always entered the same way?
A:
When
a questionnaire setup is exported or imported the custom questions
become locked and the end users cannot edit or deselect a
custom question. This way all exported/imported Custom Question
setups will be identical.
Q:
In the VFC Site Visit Questionnaire, why can’t I enter
data for question 22?
A:
Question
22 is auto-calculated by the computer. When you enter a temperature
in question 21, the computer will automatically determine
if the temperature was within the specified range and enter
the appropriate answer into question 22. After you enter data
into question 21 and then move your cursor to question 23,
the computer will fill in question 22.
Q:
When filling in information for the documentation section
of the survey, how should you answer question number 32 in
reference to VFC eligibility screening in the clinic/practice
if they are unable to provide you with records to review during
your site visit?
A:
Provider
offices should be contacted in advance of the VFC site visit
and informed of the purpose and expectations of the site visit;
including the specific number of records that should be made
available for review/inspection or available for the reviewer
to select from, depending on the protocol of the state VFC
Program.
If a reviewer finds that the provider records are not available
for inspection, the provider is noncompliant for this element
of the visit. Address the specific issue or reason with the
provider and schedule a follow-up visit within the next thirty
days. If records are located at an alternate location, schedule
a visit to that facility. If the records are not available
during the rescheduled visit, a letter should be sent from
the state program giving the provider a specific timeframe
to come into compliance. If compliance is not achieved, the
program should consider disciplinary actions such as suspension
from the program. Medical records maintenance is not only
a requirement for the VFC program, but a state requirement
and a Medicare and Medicaid requirement.
Q:
How can I move the information listed in the box titled ‘Issues
Requiring Corrective Actions’
to the text box to the left (Question 35)?
A:
You
are not able to move information from the list on the right
to the text box on the left. The purpose of the list on the
right is to let you know in which high priority areas the
provider was noncompliant. The purpose of the text box on
the left is for you to document what actions you took with
the provider to correct the problems. For example, if the
list on the right said the provider was not recording temperatures
at least 2 times per day, you might enter “Reviewed
guidelines for recording temperatures with provider; instructed
provider to record temps at least 2x a day”
in the text box on the left.
Reports
Q:
When I try to run the ‘VFC
Site Visit Questionnaire Results’
or ‘VFC
Site Visit Questionnaire Optional Questions’
reports, I get the following error message: “Index was
outside the boundaries of the array.” What does this
mean?
A:
This
error is due to a bug in that report. This problem has been
reported by a few users and is being fixed in the patch.
Top
Q:
I’m getting an error message that says “Failed
to load resources from resource file” when I start CoCASA.
What can I do to fix this?
A:
A.
Download and install Service Pack 1 for the .NET Framework
1.1 at.
Q:
I’m getting an error message that says “Foxpro
driver does not support this function” when I try to
import legacy WinCASA data. What can I do to fix this?
A:
If
you have Windows XP, download and install an updated Foxpro
driver at.
The
file you need to download and install is the VFPODBC.MSI file
(not the VFPODBC.MSM file). You should click on the link that
says “English” under the heading “VFPODBC.MSI”
towards the bottom of the web page.
Q:
The missed opportunity* rate for most assessments is significantly
higher (twice as big in many cases) in CoCASA than in CASA…we
used the "at last visit" (vs "at any visit")
setting. Do you know anything that might help explain why
the missed opp rates are so different in CoCASA vs CASA even
when the assessments are seemly set up to run similarly?
A:
One
important difference between CASA and CoCASA is in the definition
of a “missed opportunity.” CoCASA considers the
patient’s entire immunization history, while CASA considers
only the doses that were given in the series specified by
the user.
For example, suppose a user runs the CASA Summary report for
the 43133 series and the child received a dose of DTaP, Polio,
MMR, HIB and HepB on his/her last visit to the office (the
last visit date for all vaccines is the same date), but the
child is still missing a DTaP4. The child also received an
influenza vaccine 9 months after this last visit. CASA would
consider the child as “not-up-to-date” but not
as a “Missed Opportunity” because it does not
consider the influenza dose…it’s not part of the
43133 series that the user specified. CoCASA on the other
hand, would classify the child in the “Missed Opportunity”
category because it does consider the entire history and the
child could have received a 4th dose of DTaP on the date that
he/she received the flu shot. Therefore, the Missed Opps would
be greater in CoCASA than in CASA.
It’s important to remember (as in CASA) that the Not
Up-to-Date categories in CoCASA are hierarchical and mutually
exclusive. Once a patient is identified as Not Up-to-Date,
the record is reviewed to determine if a missed opportunity
occurred. If not, then the record is reviewed to determine
if the child is eligible for any doses (meaning minimum ages
or intervals have been met on the day of the assessment).
If the child is eligible, then the record is reviewed to see
if the last visit was within the past 12 months or over 12
months ago.
*NOTE:
The answer above assumes that Missed Opportunities are defined
as “On the last Immunization visit”.
Q:When
comparing single vaccine rates using the "ACIP Rec."
box vs not using ACIP recs., there are times when the single
vaccine rates actually are higher when the "ACIP rec."
box is checked (that was a very poorly written sentence).
I can't think of any reason why single vaccine rates would
be higher when the "ACIP Rec." box is checked but
that appears to be the case in some assessments.
A:
Most
likely you are looking at a report like the Diagnostic Report
which requires the user to select a series and then provides
the coverage for that series and each individual component
of that series. If this is the case, most likely HIB is the
issue. When the ACIP Recs are applied, the software is programmed
to accept 1 dose of HIB after 15 months as “up-to-date”
for HIB even if the series that was selected was the “43133”
(3 doses of HIB) series. If a user deselects “Apply
ACIP Recs”, then up-to-date for 3 doses of HIB would
count only situations where 3 dates were recorded for HIB.
Q:
What is the biggest difference between CASA and CoCASA?
A:
The
most important difference between reports results from CASA
and CoCASA is the option to Apply ACIP Recommendations. This
option is applied by default in CoCASA and means that only
valid doses are considered in the calculation of report results.
If a dose is determined as “invalid” by CoCASA,
then it is as if that dose never occurred. This can result
in a very different categorization of a patient included in
the Summary Report. For example, if a child received all doses
in the 43133 series before 24 months of age with the first
MMR after the first birthday, the child would be considered
“Up-to-Date” in CASA. However, if the 4th DTaP
was not given 6 months after the 3rd dose of DTaP, then this
child would be considered “Not Up-to-Date” in
CoCASA.
Deselecting the ACIP Recs.
The
“Apply ACIP Recs” option is only available in
CoCASA and is selected by default to apply all ACIP recommendations.
If it is deselected, all recommendations for minimum age and
minimum intervals between doses are not considered. The software
only counts the dates recorded for each vaccine type. In CASA,
the software does not consider minimum intervals for any report
that produces a coverage level, or rate. The CASA Summary
Report only verifies that the first dose of MMR and/or Varicella
was given after 12 months of age and that the specified series
was completed prior to 24 months of age
When
the “Apply ACIP Recs” option is deselected, the
results for CoCASA and CASA should be similar for the Up-to-Date,
Late Up-to-Date and Not Up-to-Date. They may not match exactly
because in this situation, CoCASA is not applying any ACIP
recommendations (minimum age, minimum interval, etc), but
CASA does look for a minimum age of 12 months for MMR and
Varicella. Therefore, the up-to-date rates in CoCASA may be
slightly higher. The biggest difference between CoCASA and
CASA when the “Apply ACIP Recs” is deselected
will be found in the 4 categories for “Not Up-to-Date”.
These four categories include: Missed Opportunities*, Not
Eligible for Vaccine, Last Visit <12 months and Last Visit
>= 12 months.
Without
applying the ACIP Recs, CoCASA will look for a missed opportunity
on the last visit date for any immunization regardless if
the minimum age or interval has been met (remember ACIP recs
are NOT applied). So, if a child received an immunization
but did not receive an immunization in all other vaccine groups
included in the series (i.e. 43133 series is selected in the
criteria and child did not get a MMR on the last immunization
date recorded), CoCASA will categorize that child as having
had a “Missed Opp” regardless if he/she has met
the minimum age of 12 months. CASA on the other hand, will
not consider this a Missed Opp, because the minimum age has
not been met. Therefore, the category results will drastically
differ.
*NOTE: The answer above assumes that Missed Opportunities
are defined as “On the last Immunization visit”.
Top of page
This
page last modified on February 15, 2006
Department of Health and Human Services
Centers for Disease Control and Prevention
CDC Home
|
CDC Search |
CDC Health Topics A-Z | http://www.cdc.gov/nip/cocasa/cocasa_faq.htm | CC-MAIN-2013-20 | refinedweb | 3,317 | 59.13 |
dos2unix use monospace font instead of bold fonts for code.
The file `mk.conf` is the central configuration file for everything that has to do with building software. It is used by the BSD-style `Makefiles` in `/usr/share/mk` and especially by pkgsrc. Usually, it is found in the `/etc` directory. If it doesn't exist there, feel free to create it. Because all configuration takes place in a single file, there are some variables so the user can choose different configurations based on whether he is building the base system or packages from pkgsrc. These variables are: * `BSD_PKG_MK`: Defined when a pkgsrc package is built. * `BUILDING_HTDOCS`: Defined when the NetBSD web site is built. * None of the above: When the base system is built. The file /usr/share/mk/bsd.README is a good place to start in this case. A typical `mk.conf` file would look like this: <pre><code> # This is /etc/mk.conf # .if defined(BSD_PKG_MK) || defined(BUILDING_HTDOCS) # The following lines apply to both pkgsrc and htdocs. #... LOCALBASE= /usr/pkg #... .else # The following lines apply to the base system. WARNS= 4 .endif </code></pre> | https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/pkgsrc/mk.conf.mdwn?rev=1.2;sortby=log;only_with_tag=MAIN | CC-MAIN-2020-24 | refinedweb | 189 | 68.97 |
With the launch of PubNub BLOCKS last year, PubNub extended its functionality to enable you to execute functions on your data streaming over the PubNub network. And it fits in great with simple turn-based multiplayer games for both web and mobile.
In this tutorial, we’ll walk you through a simple way to implement BLOCKS by using our Hangman BLOCK to create a simple multiplayer game of Hangman. We’ll use Swift to create the UI for the application, but you can easily adapt the code to suit any of your development needs. With that in mind, let’s get started!
The full code repository is available on Github.
Creating the App
Before you start working with the BLOCK, you’re going to need to set up a fairly straightforward application using Swift. Make sure you’ve already followed the steps to enable PubNub using Cocoapods and that you’re working from your project workspace. Then, initialize PubNub in your AppDelegate.swift file so that you can use it in your application.
import PubNub

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate, PNObjectEventListener {

    var window: UIWindow?

    lazy var client: PubNub = {
        let configuration = PNConfiguration(publishKey: "insert-pubkey-here", subscribeKey: "insert-subkey-here")
        let pn = PubNub.clientWithConfiguration(configuration)
        return pn
    }()

    …
}
For this app, you’ll be using three different view controllers. The first is a menu that enables all players to set the same GameID, ensuring that they’re all using the same PubNub channel to play. This file is called MainController.swift. The next view controller allows one player to set the word for the game, called WordViewController.swift. The last view controller is where the game will actually be played, and where you’ll be using PubNub’s Data Stream Network to communicate with your BLOCK. Call this one PlayerViewController.swift.
In AppDelegate.swift initialize your MainController as your Navigation Controller and create the window you’ll be using for your application.
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    window = UIWindow()
    let navigationController = UINavigationController(rootViewController: MainController())
    window?.rootViewController = navigationController
    window?.makeKeyAndVisible()
    return true
}
Now, create a UI in your MainController so that all players can input a GameID and then navigate to the other view controllers to set up and start the game. You can do this however you’d like, but if you’d like to see how we did it, check out the GitHub repository for this project. All you need is a couple buttons and a text field to make the project work.
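If you'd like a concrete starting point, below is one minimal way that menu could be wired up. Treat it purely as a sketch: the property names, frames, and button titles are placeholders for illustration, not the sample project's actual code.

import UIKit

// Hypothetical sketch of MainController; names and layout are illustrative only.
class MainController: UIViewController {

    let gameIDField = UITextField()
    let setWordButton = UIButton(type: .system)
    let joinGameButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        title = "Hangman"

        gameIDField.placeholder = "Enter a GameID"
        gameIDField.borderStyle = .roundedRect
        gameIDField.frame = CGRect(x: 40, y: 120, width: 240, height: 40)

        setWordButton.setTitle("Set the Word", for: .normal)
        setWordButton.frame = CGRect(x: 40, y: 180, width: 240, height: 40)
        setWordButton.addTarget(self, action: #selector(setWord), for: .touchUpInside)

        joinGameButton.setTitle("Join Game", for: .normal)
        joinGameButton.frame = CGRect(x: 40, y: 240, width: 240, height: 40)
        joinGameButton.addTarget(self, action: #selector(joinGame), for: .touchUpInside)

        [gameIDField, setWordButton, joinGameButton].forEach { view.addSubview($0) }
    }

    // The player who picks the word goes to WordViewController first.
    @objc func setWord() {
        gameID = gameIDField.text ?? ""
        navigationController?.pushViewController(WordViewController(), animated: true)
    }

    // Everyone else jumps straight into the game.
    @objc func joinGame() {
        gameID = gameIDField.text ?? ""
        navigationController?.pushViewController(PlayerViewController(), animated: true)
    }
}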
Also make sure that you declare global variables for the entered gameID and submittedWord: the ID that identifies the group of players in your game, and the chosen word for the players to guess. gameID should be updated as soon as the user enters it on the menu screen.
var gameID = ""
var submittedWord = ""
Now let’s move on to WordViewController. This UI should contain a text box for the user to enter the word they want to use for the game, then it should save that input to the global variable submittedWord.
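A comparable sketch of WordViewController is shown below; again, the names and layout are assumptions rather than the repository's code.

import UIKit

// Hypothetical sketch of WordViewController.
class WordViewController: UIViewController {

    let wordField = UITextField()
    let startButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        title = "Choose a Word"

        wordField.placeholder = "Word to guess"
        wordField.autocorrectionType = .no
        wordField.borderStyle = .roundedRect
        wordField.frame = CGRect(x: 40, y: 120, width: 240, height: 40)

        startButton.setTitle("Start Game", for: .normal)
        startButton.frame = CGRect(x: 40, y: 180, width: 240, height: 40)
        startButton.addTarget(self, action: #selector(startGame), for: .touchUpInside)

        view.addSubview(wordField)
        view.addSubview(startButton)
    }

    @objc func startGame() {
        // Save the chosen word to the global variable, then move on to the game screen.
        submittedWord = wordField.text ?? ""
        navigationController?.pushViewController(PlayerViewController(), animated: true)
    }
}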
Finally, let’s work on PlayerViewController. This is where the game is played and where it will communicate with your BLOCK. First, let’s deal with the UI. You’ll need a textbox for the user to enter their guess, a label that displays the number of lives they have, a label that shows all of their previous guesses, and a representation of the letters in the word, with the letters hidden, of course. But because you need information from the BLOCK to determine the details of how many letters have been guessed and how many lives are left, you can hold off on programming the dynamic content of the different labels. We’ll add that in after we configure the BLOCK.
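To keep the later snippets self-contained, here is one way the static pieces of PlayerViewController could be declared. The property names (enterLetter, sendLetter, lives, guesses, dashes, winning, losing, livesNum) match the code used later in this tutorial; everything else, including the layout, is a placeholder.

import UIKit
import PubNub

class PlayerViewController: UIViewController, PNObjectEventListener {

    let appDelegate = UIApplication.shared.delegate as! AppDelegate

    let enterLetter = UITextField()          // where the player types a guess
    let sendLetter = UIButton(type: .system) // submits the guess
    let lives = UILabel()                    // e.g. "Lives: 5"
    let guesses = UILabel()                  // e.g. "Guesses: A B C"
    let dashes = UILabel()                   // the hidden word, e.g. "_ _ _ _ _ _"
    let winning = UILabel()                  // shown when the game is won
    let losing = UILabel()                   // shown when the game is lost
    var livesNum = "5"                       // latest lives value reported by the BLOCK

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
        winning.text = "You won!"
        losing.text = "You lost!"
        // Position the subviews however you like, then add them to the view.
        [enterLetter, sendLetter, lives, guesses, dashes].forEach { view.addSubview($0) }
        // The subscribe and publish calls described below also belong here in viewDidLoad().
    }
}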
Now that your app is all ready to go, it’s time to set up your BLOCK and start using PubNub! But don’t worry, we’ll come back to the app later to fill in the missing fields.
Setting up the Hangman BLOCK
A helpful feature of PubNub BLOCKS is that it includes a catalog of pre-designed BLOCKS that allow you to implement a variety of features in your applications, from automatically posting tweets to converting street addresses to coordinates, with just a few clicks of a button.
For today, we’re going to be using the Hangman BLOCK to provide the backbone for our multiplayer game. Open the Hangman BLOCK and click on the Try It Out button and make sure you’re logged in to get started.
On the page that appears, select the app and keyset that you’d like to use for this project, or create new ones by clicking on the + button next to each box. When you’re done, click Get Started and the template should open.
To make the game easier to work with from our app, we’re going to tweak the BLOCK code a little bit. Instead of inputting the word directly into the BLOCK, we’re going to have one game user select the word and publish it in a message. To have multiple groups of people playing at the same time, they’re also going to input the channel they want to use for their specific game output.
With that in mind, update the code on your BLOCK so that both the word and the channel come from a published message, like below.
var word = request.message.word.toUpperCase();
var outputChannel = request.message.channel;
To more easily handle the format of messages in Swift, we’re also going to change guesses from an array to a string.
var guesses = "";
…
var startGuesses = "";
Because we’re using our app to help keep track of the lives variable, we don’t need to use the key-value store in the template, so throughout the BLOCKS code we’re going to remove references to saving lives in store.
We want to reset the starting values of the BLOCK when we first start the game, and we’ll do this by publishing “1” as the submitted letter. To adjust to this change in format, we’ll add an if-else statement to the BLOCK that deals with the change. Re-formatting the JavaScript in the BLOCK to make the changes as shown below.
var a = store.get("guesses"); return Promise.all([a]).then(function (values) { if (request.message.letter == 1) { guesses = ""; lives = 5; store.set('guesses', ""); } else { guesses = values[0]; lives = + request.message.lives; guesses = guesses + request.message.letter.toUpperCase() + " "; if (word.indexOf(request.message.letter.toUpperCase()) > -1) { successGuess = true; } else { lives -= 1; } store.set('guesses',guesses); } … }
Lastly, change the responseMessage so that all the entries are Strings, to make it easier to deal with in Swift. For wordResponse, you’ll also have to remove the commas as shown below.
var responseMessage = { "won": won.toString(), "lost": lost.toString(), "word": wordResponse.toString().replace(/,/g, " "), "guesses": guesses.toString(), "lives": lives.toString(), "successGuess": successGuess.toString() };
To start running your BLOCK, press the Start Block button in the top right corner of the page, wait for the BLOCK to deploy. If you want to test it, type the following into the Test Payload section and press Publish.
{ "lives": "5", "channel": "1234", "letter": "1", "word": "pubnub" }
If everything is working properly, you should get a response with 6 blanks, have 5 lives, and an empty string for guesses. Now you’re ready to integrate your app.
Configuring the App
Now it’s time to connect your app to your BLOCK using PubNub. When PlayerViewController loads, you’ll need to publish a message to your BLOCK’s channel that has all of the starting information for the game: the submitted word, the gameId, the number of lives the player has, and an indication that the game has just started.
Just like when you tested your BLOCK, the indication that the game has just started is to publish a “1” in the “letter” field of your BLOCK. So, in the viewDidLoad() section of PlayerViewController, publish a message to the channel containing your BLOCK with the correct values inserted.
appDelegate.client.addListener(self) appDelegate.client.subscribeToChannels([gameID], withPresence: false) appDelegate.client.publish(["word":submittedWord, "channel": gameID, "letter": "1", "lives":"5"], toChannel: "hangman-game-after-publish", compressed: false, withCompletion:{(status)->Void in if !status.isError { } else{ print("Publishing Error (initial publish)") } })
After you publish a message to your BLOCK, it will send back a message with the status of the game. You need to take this information and make your PlayerViewController display it by accessing the different parts of the message and assigning them to different fields in your UI.
func client(_ client: PubNub, didReceiveMessage message: PNMessageResult) { if let data = message.data.message as? [String: String] { guesses.text = "Guesses: " + data["guesses"]! lives.text = "Lives: " + data["lives"]! dashes.text = data["word"] livesNum = data["lives"]! … }
The game status received will also tell you whether the game is won or lost, at which point you’ll want to update your screen to display that.
if data["won"] == "true" { lives.isHidden = true guesses.isHidden = true sendLetter.isHidden = true enterLetter.isHidden = true dashes.isHidden = true view.addSubview(winning) } if data["lost"] == "true" { lives.isHidden = true guesses.isHidden = true sendLetter.isHidden = true enterLetter.isHidden = true dashes.isHidden = true view.addSubview(losing) }
Lastly, you’ll need to create a function that sends the user’s inputted letter to your BLOCK, after which your BLOCK will send back the game status. This is pretty straightforward, just attach the function to the send button next to your text box.
func publishLetter() { submittedLetter = enterLetter.text! enterLetter.text = "" appDelegate.client.publish(["word":submittedWord, "channel": gameID, "letter": submittedLetter, "lives": livesNum], toChannel: "hangman-game-after-publish", compressed: false, withCompletion:{(status)->Void in if !status.isError { } else{ print("Publishing Error (initial publish)") } }) }
And you’re done! Make sure your BLOCK is running, open your app, and start playing Hangman! If you download the app onto different phones and all enter the same gameID, you can now play a multiplayer game with your friends.
Wrapping Up
Now that you’ve used BLOCKS, you have a brand new tool in your toolbelt when it comes to development. Take a minute to peruse the rest of the BLOCKS catalog and see all the cool ready-made BLOCKS we have available for you to test out. Or, be creative and make a BLOCK that’s all your own. Welcome to the brand new world of PubNub BLOCKS. | https://www.pubnub.com/blog/how-to-build-multiplayer-hangman-for-ios-with-swift-and-pubnub-blocks/ | CC-MAIN-2021-31 | refinedweb | 1,784 | 65.22 |
If you have been following my earlier blog posts, you know that I tend to explore areas like NFC and Bluetooth Low Energy from a BlackBerry 10 developer’s perspective, usually in conjunction with my colleague Martin Woolley.
So you may be wondering what this Unity 3D thing is all about, and why I’m rooting around gaming. Well, Unity 3D is a phenomenal framework that allows you to develop amazing games on multiple platforms, including BlackBerry 10. And what’s more, it contains a very flexible plugin framework to allow you to extend your game’s capabilities down into native capabilities of BlackBerry 10!
Because of this framework, I saw an opportunity to add the capability to a Unity 3D game to interact with a Bluetooth Smart device. Imagine influencing gameplay through data streamed from a heart rate monitor worn by the player; perhaps zombies could become more frenetic as your heart rate increases! This capability goes by the name of “haptics.” Hold that thought for a while!
I’ve already touched on this in a previous post but to I’m going to explain in a lot more detail how I implemented a Bluetooth Plugin for Unity to allow game objects to access live heart rate monitor (HRM) data worn by a player.
If you’ve never encountered Bluetooth Low Energy technology before, you’ve got a little bit of homework to do. Take a look at these documents:
- Resource index of material on Bluetooth Low Energy for BlackBerry 10
- Bluetooth Low Energy Primer for Developers
- BlackBerry 10 Heart Rate Monitor application sample in GitHub
Now that you’ve read about how a Bluetooth LE Heart Rate Monitor can be integrated into a standard native BlackBerry 10 application, it should be clear how we can do this in Unity 3D. It’s just a case of embedding the Bluetooth functionality in a native shared library and plumbing it into the Unity 3D framework. I’ll skip over the Bluetooth specific details in this article since these are already well documented in the links above; I’ll focus instead on the Unity 3D bits.
The Plugin Shared Library
Let’s start with a little background. Unity 3D plugins are basically shared libraries, such as libBtHrmPlugin.so (this is the name of my Bluetooth Low Energy Heart Rate Monitor Plugin) that expose a set of “C” functions something like this:
The first thing to notice is that the plugin shared library should expose functions using “C” linkages. If you expose these using “C++” linkage conventions then external names get “mangled” to allow accommodation of “C++” class qualifiers and other annotations. I’ve exposed five functions:
- initialiseBtLeImpl(): This function would be called by the Unity application to initialize the Bluetooth Low Energy environment. Essentially doing things like ensuring Bluetooth is on, establishing callbacks for Bluetooth events and other housekeeping.
- scanForHrmDevices(): Before we can use a particular HRM device, we perform a scan for candidate Bluetooth LE devices. There may be many devices that offer the HRM service so we need to enumerate them and pass the list of the ones that have been found back to the Unity application – more on how we actually do this later. Scanning for devices could be a time consuming activity so this task is handled by a separate thread that is started as part of this functions execution. When we return from this function back to the calling Unity application, the thread will still be running in the background searching for devices.
- startMonitoringImpl(const char *address): Once the Unity application has determined the number, names and addresses of the available Bluetooth HRM devices, it needs to select one to connect to and monitor. That’s what this function does. After successfully returning to the Unity application, Heart Rate Monitor events will be delivered to it. We’ll talk about the mechanics of how this happens later.
- stopMonitoringImpl(const char *address): You’ve guessed it. The Unity application will call this function to stop receiving Heart Rate Monitor events.
- terminateBtLeImpl(): This is the reverse of the initialize step where we tidy things up after the Unity application using the services of this library has finished.
These functions allow the Unity application to instruct the plugin to perform specific tasks, and they all pass back an integer return code to indicate whether they were called successfully or not. Functions such as initialiseBtLeImpl and terminateBtLeImpl are quite simple and a return code is all they need to return. However, you’re probably wondering how the plugin communicates information other than simple return codes back to the Unity application. If you take another look at the code snippet above, you’ll see a declaration of a function called UnitySendMessage(); this should give you a clue.
UnitySendMessage() is a function that is exposed by the Unity 3D framework itself. This is the way that data is passed back asynchronously to Unity from our shared library plugin. Let’s examine the parameters that this function takes:
- const char *obj: This a pointer to a NULL terminated string that identifies a “Game Object” within the Unity application.
- const char *method: This is a pointer to a NULL terminated string that identifies a method on the “Game Object” that we want to call.
- const char *msg: This is a pointer to a NULL terminated string that represents the data that will be passed to the above method on the aforesaid “Game Object.”
You can see that we can only pass a single string back into the Unity application. This means that if we want to pass back a complex object, such as a list of discovered Heart Rate Monitor devices with their names and addresses, we’d better find some way of encoding that information in a single string. Sounds like a job for JSON, which is exactly the approach taken in the plugin. Let’s solidify that with a concrete example.
The code fragment above comes from the plugin itself. You can see that it covers two cases:
- The plugin has successfully scanned and detected a number of HRM devices. It’s encoded these into a JSON string ( json_for_unity ) and passes that as the single parameter to a function called RequestScanForHrmDevicesSucceeded on the Game Object called “BtHrmPlugin” .
- In the case that there has been an error in the plugin while scanning for HRM devices, an error message is constructed as a JSON object with an error ID and an error text string. This is passed as the single string parameter to a function called RequestScanForHrmDevicesFailed on the same Game Object, “BtHrmPlugin”.
Some of you may have noticed that UnitySendMessage returns pointers to the Unity 3D framework. Isn’t that unsafe? In the above fragment it returns a pointer to a char array containing the JSON string and this becomes invalid after exiting the block after returning from UnitySendMessage. This is a good question and what seems to happen is that Unity copies the data rather than pass around pointers, so it’s safe to do it this way.
In essence, that’s all there is to the plugin. The key to constructing a Unity application and plugin as a functioning pair is to identify these functions, which represent events that can be emitted from the plugin asynchronously. Let’s just have a look at the ones that I’ve used:
- RequestScanForHrmDevicesSucceeded: This indicates that the plugin has successfully completed a scan for Bluetooth HRM devices and these are returned encoded in a JSON string.
- RequestScanForHrmDevicesFailed: This indicates that the plugin has failed in some way during its scan for Bluetooth HRM devices. An error ID and a description of the error are returned encoded in a JSON string.
- RequestServiceDisconnected: This indicates that the plugin has been notified that the HRM device has been disconnected from the plugin. This normally occurs as the result of a request from the Unity Application but the reason is returned as a JSON encoded string.
- BtHrmNotification: This indicates that the HRM device has notified the plugin of a heart rate measurement. The measurement data can contain many data points and is described in detail in the Bluetooth Special Interest Group (SIG) Heart Rate Profile specification. In essence, it contains the current measured heart rate in beats per minute, with optional details on energy expended, skin contact indication and other data points dependent on the device’s implementation.
- BtHrmMessage: This indicates that the plugin is sending a message to the Unity application. It’s simply a text string, but the application can chose to display it in a suitable way. I typically use it to show progress of the plugin’s processing and also simple debugging information.
Let’s now just a have a quick look at the structure of the project in Momentics.
This project was created using the standard BlackBerry shared library template.
I use “.c” and “.h” file types so that the compiler recognizes these as “C” files and not “C++” ones. Do you recall the discussion above on “C” versus “C++”?
The shared library itself is created in the standard locations and called libBtHrmPlugin.so.
I’ve also added a Scripts folder. The reason for this is that the shared library needs to be copied onto the Unity 3D framework project where I build my Unity application, and having to do this manually became tedious, so I added some scripts to make the process a little easier.
The Unity 3D Application
Now that we understand the way the plugin is architected, and the API that has been designed to interact with it in terms of functions it exposes and events that it communicates back to the Unity 3D framework, let’s look at other side of the API: how to interact with this plugin from a Unity 3D game.
Unity 3D applications typically use C# (using the cross-platform Mono Framework) or JavaScript to express game logic. I used C# in this example rather than JavaScript for no other reason than I quite like C#. C# has a very nice ability to represent events. It allows applications to subscribe to these events and be notified when they occur, and it sits nicely with the model of a Unity 3D application which is driven by external events such as button clicks, timers, network events,or even events that percolate up from our Bluetooth LE HRM plugin!
This means that we need to implement a “layer” beneath the Unity application which can map both calls made by the application onto the plugin and asynchronous events from the plugin onto C# events that can be subscribed to by the application. We do that in a C# class called BtHrmPlugin. Here are the relevant elements of the definition of this class focusing on the one method used to enable the application to initialise the plugin’s Bluetooth environment:
This class needs to link to the external functions in the plugin that we described in the previous section. C# uses the statement DllImport to achieve this. It has this peculiar name since C# originated on Windows and a DLL is a Dynamic Link Library, which is.
In this case the function that is exposed to the Unity 3D game application is called InitialiseBtLe() and this results in a call to the corresponding function in the plugin library called initialiseBtLeImpl(). Notice the #if … #else … #endif construct. This ensures that the call to the plugin is made only if the application is running on a BlackBerry 10 device and we’re not running the application inside the Unity editor. If we are running in the Unity editor we just return a good return code.
Now, how do we deal with C# events? When the plugin makes an asynchronous call to UnitySendMessage this needs to be mapped onto a C# event that the application can subscribe to. Here’s how it’s done; I’ve highlighted the key points to help you see what’s going on.
A successful scan for Bluetooth LE devices will result in the plugin calling the C# method in our class called RequestScanForHrmDevicesSucceeded. First we check to see if there are any subscriptions in place for this type of event, and if so, we parse the JSON string provided by the plugin and create a new instance of the class ScanForHrmDevicesEventArgs which is used as a container for the list of devices discovered and their attributes. Then we raise the ScanForHrmDevicesSuccessfulEvent event we’ve defined and it’s sent to the subscribers.
Here’s how part of the Unity game logic would register for this event in C#:
The successful scan resulting in a list of Bluetooth LE heart rate monitor devices end up being processed asynchronously in this function (ScanSuccessful) in the game logic. All the other functions and events supported by this plugin work in exactly the same way, the pattern is identical.
It’s worth looking at how these elements are arranged in a simple Unity 3D project. In order to test it, I wrote a simple Unity 3D sample application. I’m only a newbie when it comes to being able to design sophisticated and elegant user interfaces, so the application is a very functional one using OnGUI() to demonstrate how the plugin is functions. Here’s the layout of the project:
This is where you must place the shared library plugin libBtHrmPlugin.so. It won’t work if it is placed anywhere else. Now you can see why I had some useful scripts in my Momentics plugin project to be able to copy a new version of the plugin to this location whenever I’d re-built it.
I placed the C# BtHrmPlugin.cs file containing the C# class BtHrmPlugin in the folder above outlined in RED. There are no constraints on where you can place this. You’ll notice that there is also, outlined in GREEN, a Game Object also called “BtHrmPlugin”. This is important. In order to attach the C# plugin script to the game play scene you need to attach the script to a Game Object. Here we’re closing the loop on a point I made earlier. Do you recall that I noted that the first parameter on the UnitySendMessage() function identified a Game Object that the message should be sent to? Well, here’s confirmation of that! The plugin send its messages to a Game Object called “BtHrmPlugin”.
So, we’ve come full circle regarding the plugin and how it works. My simple test application is in the file BtHrmGuiHandler.cs as shown below.
Below is a snapshot of the application running using a real heart rate monitor! The large number is my instantaneous heart rate while testing this application. You can see that I happened to be using a “Zephyr” Heart Monitor, which I had already paired to my Z30, although any others that support the Bluetooth SIG Heart Rate Profile will also work.
The buttons at the top of the screen control the ability to start and stop monitoring through the plugin, and the area at the bottom of the screen represents a log showing the operation of the plugin and the data that it’s receiving from the heart rate monitor.
Summary
I hope that you’ve found this description of my journey through this simple Unity 3D plugin useful. The plugin itself and the associated simple Unity application are available on GitHub.
It would be nice to see a real game exploiting Bluetooth Low Energy accessories. Watch this space!
Additional Resources
- BlackBerry 10 Heart Rate Monitor Unity 3D plugin and sample Unity 3D application described in GitHub
- Video showing the BlackBerry 10 Heart Rate Monitor sample Unity 3D application described in this article
- Bluetooth Special Interest Group (SIG) Heart Rate Profile
- If you want to know more about Bluetooth Low Energy then look here at the index of material produced by Martin Woolley and I
- Bluetooth Low Energy Primer for Developers
- BlackBerry 10 Heart Rate Monitor application sample in GitHub | http://devblog.blackberry.com/2014/02/diary-of-a-unity-3d-newbie-bluetooth-low-energy-plugins/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blackberry%2FCAxx+%28BlackBerry+Developer+Blog%29 | CC-MAIN-2017-26 | refinedweb | 2,667 | 58.32 |
PRESTO is not something new: its basic ideas are presupposed in a lot of people’s thinking about the web, and many people have given names to various parts, but I don’t know that anyone has given a name to this package. In any case, this combination of ideas which seems to me to be the sweet spot of practicality for large public document sets seem to have escaped the way that we approach many problems and systems. However, the question I ask is “How else are you going to do it?”.
PRESTO is a combination of three ideas:
- Permanent URLs
- REST
- Object-oriented
Legal documents such as legislation have three characteristics: they are highly structured, they are highly voluminous, but they have highly varying value. So many documents do benefit from the classic SGML treatment, with semantic Full Monty markup, but many others are accessed so rarely there is little benefit in having high-level markup for them. And in fact many documents may be scanned images with no text at all, and full markup entails re-keying.
So what PRESTO does (and people familiar with SGML PUBLIC identifiers will get the drift, and even more so people familiary with ISO Topic Maps) is to say that there is a real importance in being able to have permanent names even for resource that don’t have really brilliant representation available.
In fact, the legal documents may not exist physically yt all: it may be a base document and an ammendment document. So we want a permanent URL for the idea of that document, and we want our system to deliver the best fit it can when we want to get the representation. And we want to allow multiple formats, because often the best representation may be client-dependent. !
Some people might understand it better if we say that PRESTO is about naming and structuring the configuration items for document sets, and forms a precondition for vendor-neutral implementations, and to support plurality. What PRESTO does is say that when we drill down into a document, we do not want to drill down using media-dependent or presentation-dependent accidents, but according to the editorial/rhetorical (i.e. “semantic”) substance.
So why do I say “How else are you going to do it?”
The reason is because)
What would a concrete example be? Lets say we are a government and we have adopted PRESTO so all our legislatation is online with these kinds of permanent URLs including every numbered thing inside the legislation. Then we want to be able ask “What other laws reference Part 4 of this Act?” In PRESTO, we say “OK, the object here is Part 4, so we want to extend the URL for Part 4 to add a name which means the list of references.” So we would have a URL like so that this gives a new URL, hierarchically based on the object it was dependent on. What we don’t do is (which is procedural/functional) and not (some people would think this is OK, I don’t have a particularly strong view at the.)
I guess.)
As I have mentioned, I don’t want to claim that PRESTO is remotely new. However, for many people it is not remotely obvious until pointed out. We need to learn from the WWW that making information available is the pre-condition to building more interesting higher-level systems. In fact, the WWW has demonstrated this twice: first with HTML where as we got more HTML we started to get more imaginative use of links, systems like Google for example; but second with XML where it was only when there started to be enough information available in XML that AJAX took off.
Now I am not saying that there is no room for content management systems. But the ephemeral details of how and where a document is stored or composed should be hidden from the user, for this kind of public information.
You might say “But doesn’t your best-fit” approach mean that the system cannot guarantee to provide enough information to allow the automatic construction of effective legislation from amendment instructions?” And my answer is “Sure!”: however all it means is that you can build amended document compilation systems only when you have the base and amendments in a suitable form, and the bottom line is that often you don’t. So you want an infrastructure that allows graceful degradation.
Anyone can make a perfect system with perfect data! PRESTO decouples representations from resource identification. The particular details of how to form the permanent URLs, which methods are available, etc are the next level of question.
Phil: I am removing your comment. I am not allowed to blog on this because of Standards Australia obligations. Accusing me of hiding when I cannot respond says everything about you that I need to know.
The SNAFU that you mention was caught, corrected and reported within hours and did not impact in any vote change, AFAIK. No-one was corrupted.
Dear Rick,
As you mentioned, there is nothing new in this idea, specially when we consider the legislation domain.
In Italy, a solution like this you point has been officially in use since 2001. I think it would be fair to provide your readers access to the base document of the project (in English):
I'm sure you will find there well established answers to your future questions.
By the way, it's also worthy to mention that several countries decided not to reinvent the wheel: based on the Italian experience (and in association with them) they are proposing the creation of a specific URN namespace for legal resources, the "urn:lex" namespace. Check:
Regards,
Fernando
UPDATE: Tim Berners-Lee recently released a page of thoughts on Lined Data Principles. The only significant distinction between it and PRESTO is that Tim is starting off "If you have some significant concepts or resources then give them URLS ets" while PRESTO starts with "You want to have URLs etc for all significant concepts or resources"
In other words, PRESTO is is not so much a general statement of principle, it is a program of action: it is not that when you have some high value or easy concepts or resources and then you give them URLs, but you systematically make sure you have URIs for *everything* significant at every significant level of granularity (and history). And in particular, for documents.
PRESTO is about emphasizing that the decision to have persistent URIs for everything significant is a decision, not something intrinsic to an information collection. This is rather different to the typical idea of ontologies, I gather: with ontologies there is some domain which people are already exploring or representing and they know that is what they want to do. In the case of legislation and other systematic document sets, it is clear at all in people's minds that having a way of naming everything that can be named is a useful and necessary starting point for reliable mashup and evolvable information systems.
very interesting.)
thanks for them | http://www.oreillynet.com/xml/blog/2008/02/presto_a_www_information_archi.html?CMP=OTC-TY3388567169&ATT=PRESTO-A+WWW+Information+Architecture+for+Legislation+and+Public+Information+systems | crawl-003 | refinedweb | 1,190 | 55.37 |
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location.
Microsoft.Phone.Wallet Namespace
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
The Microsoft.Phone.Wallet namespace provides types for interacting with the Wallet feature of the phone. Apps can add and manage items in the Wallet, define payment instruments and interact with NFC and the Secure Element on the phone.
Capabilities
If you use this API in your app, you must specify the following capabilities in the app manifest. Otherwise, your app might not work correctly or it might exit unexpectedly.
For more info, see App capabilities and hardware requirements for Windows Phone 8.
Reference
Other Resources
Show: | https://msdn.microsoft.com/en-us/library/microsoft.phone.wallet(v=vs.105).aspx | CC-MAIN-2017-09 | refinedweb | 136 | 51.04 |
QProcess doesn´t emit finished() when done
i always start my scripts with:
process->start("/bin/sh", QStringList << "/path/to/my script.sh");
when i try to start ls it does not work either.
Hi,
Did you check the output of the QProcess::error method ?
You should also check the standard error and output content. That should give you more clues about what is happening.
Hi,
i got
QObject::connect(copy, SIGNAL(errorOccurred(QProcess::ProcessError)), this, SLOT(errorFound(QProcess::ProcessError)));
copy is my Qprocess.
which was also never emmited. Is there any more error methods i could read?
This is how my code looks like.
header:
#ifndef UPDATE_HANDLER_H #define UPDATE_HANDLER_H #include <QtCore/QObject> #include <QProcess> #include <QFile> #include <QStringList> #include <QProcess> class UpdateHandler : public QObject { Q_OBJECT public: UpdateHandler(QObject* parent = nullptr); signals: void updateSuccess(); void updateFailed(); private slots: void fileSearch(); void fileCopied(int); private: QFile* file; QFile* updEnc; QStringList cpscript; QProcess* copy; QString filename; QFile log; }; #endif // UPDATE_HANDLER_H
UpdateHandler::UpdateHandler(QObject* parent) :QObject(parent) { updEnc = new QFile("/media/upStick/update.enc"); cpscript <<"/usr/bin/cpupdt.sh"; file = new QFile("/usr/update.enc"); copy = new QProcess(this); QObject::connect(copy, SIGNAL(finished(int)), this, SLOT(fileCopied(int))); QObject::connect(copy, SIGNAL(errorOccurred(QProcess::ProcessError)), this, SLOT(errorFound(QProcess::ProcessError))); } void UpdateHandler::fileSearch(){ if(updEnc->exists()){ qDebug()<< "Updatefile found on Stick"; qDebug()<< "copying file to /usr"; copy->start("/bin/sh", cpscript); }else{ qWarning()<< "no update found at UpdateStick/or no Stick insert"; emit updateFailed(); } } void UpdateHandler::fileCopied(int status){ qWarning()<< copy->errorString(); copy->close(); if(status==0){ if(file->exists()) { qDebug()<< "sucess", emit updateSuccess(); }else{ qWarning()<< "no update.enc found in /usr"; emit updateFailed(); } }else{ qWarning()<< "/usr/bin/cpupdt.sh crashed "; emit updateFailed(); } } void UpdateHandler::errorFound(QProcess::ProcessError e){ QString filename="/var/ziesel/update.txt"; QFile log( filename); qDebug()<< "Error during Update: "<< e ; if ( log.open(QIODevice::WriteOnly | QIODevice::Append | QIODevice::Unbuffered | QIODevice::Text) ) { QTextStream stream(&log); stream << "Error:" << e << endl; log.close(); } }
I know i have only finished(int) , instead of finished(int, QProcess::ExitStatus);
But i have qt 5.6 on my device and i wasn´t sure which to use, but i tried both.up->start("ifconfig", args);
When i comment this out everything works.
But i don´t know why this is happening.
I guess it's not likely but does your copy script depends on the can bus interface to be up ?
- ambershark Moderators last edited by
@LogiSch17 said in QProcess doesn´t emit finished() when done:0->start("ifconfig", args);
When i comment this out everything works.
But i don´t know why this is happening.
That's interesting... Does your
canupfinish and get cleaned up properly before you are trying with the next QProcess? Just curious, it's not like you can't have multiple QProcesses at any given time.
Are these QProcesses in different threads?
Do you get the signals from the canup process properly?
Hi, sry have been sick.
First, no the copy script doesm´t depend on the can bus interface.
And second:
For the can up , i have never tried to get the finished signal. But i tried
canup->wait ForFinished();
Which seems to work cause there is no delay during the can initialisation.
And yes the QProcesses are in different threads, or in different QObjects.
So if you ensure can0 is finished before starting the next QProcess, everything is running fine ?
- ambershark Moderators last edited by
@LogiSch17 So what if after you start your process you immediately do
copy->waitForFinished(). Will that ever return?
@SGaist No even if i wait i have the can0-> waitForFinished(), all the following QProcesses are not returning.
do i have to close something?
@ambershark I allready tried but, it stays for the default 30000msecs, but afterwards nothing has been returned.
Did you check the content of the standard error and standard output of all your QProcess ? There might be a clue there to what is happening.
yes i did. got nothing.
Currently i removed the ifconfig QProccess from my code and run it as script in init.d before running my programm.
Now everything works fine.
Look´s like QProccess has problems with the ifconfig command.
Can you provide a minimal code sample that triggers that ?
i got something like
QStringList arguments; arguments << "can0" << "up"; QProcess::execute("ifconfig", arguments );
but i had it allready with finished() and errorOccured() signals conected to slots -> same result.
I would say my buildroot generated OS makes the problem.
Really surprising. If you call any other command, do you have the same problem ?
No never seen the problem before , and i have a lot of qproccess in my code. From shutdown , reboot, mv , cp to a lot of scripts .
Is it only on your device or can you reproduce that on your desktop machine ?
On my desktop machine it works fine. But i have no CAN device their.
So i returns that can0 device not exists and goes on normaly. But the the other proccesses are working afterwards.
Since the can bus is a network device, did you try calling
ifconfig upon another device ? That might help narrow down the problem.
No didn´t tried yet. I have it fixed now by running a startup script . Which does exatly the same.
But i will have a look at this problem again when i have some more time. I will let you know if i find out something.
@LogiSch17 I had a similar problem in that no QProcess::finished() signals were being emitted. The problem was SOLVED by not catching the Unix signal SIGCHLD, which implies that this signal is being used by QProcess to communicate between parent and child processes. Maybe this was your problem too.
signal SIGCHLD, which implies that this signal is being used by QProcess to communicate between parent and child processes.
SIGCHLDis precisely how the
QProcessknows when the child has exited/terminated, so that
QProcess::finished()can be emitted. | https://forum.qt.io/topic/76137/qprocess-doesn-t-emit-finished-when-done/22 | CC-MAIN-2019-47 | refinedweb | 983 | 58.69 |
This class defines a simple messaging protocol for communicating with an industrial robot controller. More...
#include <simple_message.h>
This class defines a simple messaging protocol for communicating with an industrial robot controller.
The protocol meets the following requirements:
1. Format should be simple enough that code can be shared between ROS and the controller (for those controllers that support C/C++). For those controllers that do not support C/C++, the protocol must be simple enough to be decoded with the limited capabilities of the typical robot programming language. A corollary to this requirement is that the protocol should not be so onerous as to overwhelm the limited resources of the robot controller
2. Format should allow for data streaming (ROS topic like)
3. Format should allow for data reply (ROS service like)
4. The protocol is not intended to encapsulate version information It is up to individual developers to ensure that code developed for communicating platforms does not have any version conflicts (this includes message type identifiers).
Message Structure
THIS CLASS IS NOT THREAD-SAFE
Definition at line 160 of file simple_message.h.
Constructs an empty message.
Definition at line 53 of file simple_message.cpp.
Destructs a message.
Definition at line 57 of file simple_message.cpp.
Gets message type(see CommType)
Definition at line 242 of file simple_message.h.
Returns a reference to the internal data member.
Definition at line 270 of file simple_message.h.
Gets length of message data portion.
Definition at line 263 of file simple_message.h.
Gets size of message header in bytes(fixed)
Definition at line 221 of file simple_message.h.
Gets size of message length member in bytes (fixed)
Definition at line 228 of file simple_message.h.
Gets message type(see StandardMsgType)
Definition at line 235 of file simple_message.h.
Gets message length (total size, HEADER + data)
Definition at line 256 of file simple_message.h.
Gets reply code(see ReplyType)
Definition at line 249 of file simple_message.h.
Initializes a fully populated simple message.
Definition at line 70 of file simple_message.cpp.
Initializes a simple message with an emtpy data payload.
Definition at line 63 of file simple_message.cpp.
Initializes a simple message from a generic byte array. The byte array is assumed to hold a valid message with a HEADER and data payload.
Definition at line 82 of file simple_message.cpp.
Sets communications type.
Definition at line 328 of file simple_message.h.
Sets data portion.
Definition at line 128 of file simple_message.cpp.
Sets message type.
Definition at line 321 of file simple_message.h.
Sets reply code.
Definition at line 335 of file simple_message.h.
Populates a raw byte array with the message. Any data stored in the passed in byte array is deleted.
Definition at line 113 of file simple_message.cpp.
performs logical checks to ensure that the message is fully defined and adheres to the message conventions.
Definition at line 134 of file simple_message.cpp.
Communications type(see CommType)
Definition at line 292 of file simple_message.h.
Message data portion.
Definition at line 302 of file simple_message.h.
sizeof(industrial::shared_types::shared_int) + sizeof(industrial::shared_types::shared_int) + sizeof(industrial::shared_types::shared_int)
Size(in bytes) of message header (fixed)
Definition at line 307 of file simple_message.h.
Size (in bytes) of message length parameter (fixed)
Definition at line 314 of file simple_message.h.
Message type(see StandardMsgType)
Definition at line 287 of file simple_message.h.
Reply code(see ReplyType)
Definition at line 297 of file simple_message.h. | http://docs.ros.org/en/hydro/api/simple_message/html/classindustrial_1_1simple__message_1_1SimpleMessage.html | CC-MAIN-2021-43 | refinedweb | 574 | 52.56 |
Paradigm.
At least one thing became clear at the Desktop Matters conference last week. The way that we write Swing applications is changing for the better. A few other things that became clear are that Chet has a great sense of humor to go along with a lot of patience for friendly ribbing; Hans really is a nice guy; and Romain has a startling talent for video clairvoyance (Chet, that race car demo was very...um, quaint).
Almost gone are the days when every new Swing programmer will be left on their own standing in front of the huge maze of Swing with only the "Abandon hope all ye who enter" sign for company. Those of us who still remember our first Swing program probably remember something like this:>
public class MyFirstReallyCoolSwingApp extends JFrame {
public static void main( String[] args ) {
myApp = new MyFirstReallCoolSwingApp();
...
}
}
EDT? Isn't that a home pregnancy test or something?
So much for the old paradigm. A paradigm is only worth about 20 cents, anyway.
Alrighty, moving right along then.
These days there is, and has been for some time now, actually, a new life and energy behind desktop Java. Personally, I think this resurgance can be traced pretty easily if we look back now. I think we can thank Eclipse for showing the way. It came along at just the right time and provided us not only with the great Java development tool but also with the prototypical example of a great Java desktop application. Of course Eclipse simply could not have been so successful without the improvements made to the Java platform in general.
That brings us to another thing that became clear to me at Desktop Matters: there are a lot of very smart people working at Sun. Okay, I haven't actually met very many of them, but this is certainly true of the Swing team.
I am extrapolating from there.
A short while later, Netbeans 5.0 put the lie to the "Swing is slow" line. It became abundantly clear to anyone who was really paying attention that desktop Java was an increasing priority at Sun. The release of all those desktop-related improvements in Java 6 was yet another data point in the rise of Swing's stock. And after hearing the Swing team talk about their plans for the future, it looks to me like things are still trending upwards.
It is not clear to me what the business case is for this renewed commitment Sun has for desktop Java. What return does Sun expect on their investment? It certainly seems like there has been more corporate support but perhaps what we're seeing is simply the result of the energy and dedication of a small but talented team of Swing engineers? As just an outsider looking in, I won't pretend to know. But I'm also not going to look a gift horse in the mouth because, darn it, I really enjoy Swing programming.
I love the flexibility of Swing. I love the power of Java2D. And there is some great work going on in SwingLabs. Now I'm not a Java2D wizard like Chris, I can't design ultra cool interfaces like Romain, and I certainly can't time things as well as Chet, but I do know that Swing will be a lot nicer to work with very soon thanks to JSRs 295 and 296 along with the support for them that is going into Netbeans. And this will apply to novice and experienced programmers alike.
This year looks like it will be a continuation of the rise in excitement surrounding desktop Java. With the release of the Filthy Rich Clients book and the Netbeans RCP book as well as the upcoming focus on media, 3D, animation, and deployment in JDK 7, I just can't wait to see what we'll be talking about at next year's Desktop Matters conference.
And one last thing. Since we were on the subject of my interface design impairment, do you suppose there is any chance of Romain putting together a UI design video podcast. You know, kind of like "Romain's Eye for the Programmer Guy" or something? Maybe that's not a good title. The pronunciation of the last word would always be in question.
- Login or register to post comments
- Printer-friendly version
- diverson's blog
- 1481 reads | https://weblogs.java.net/blog/diverson/archive/2007/03/paradigm_swing.html | CC-MAIN-2015-22 | refinedweb | 733 | 70.23 |
This project uses your voice commands to Alexa and the NodeMCU board as an IR blaster to send remote infrared signals to your home TV or Set Top box.
How do we do it?
The system interface of our project is as below:
Figure 1
The project has three steps:
- Make the remote
- Add voice control to the remote
- Throw away all your remotes (Optional)
STEP 1: Make the remote
When we press a button on the remote, it sends some code through the infrared led to the device we control. A unique code is send for each button we press on the remote.
To make a remote we have to send the specific code of the remote through our IR blaster circuit and for that, we need to know the signal our TV and STB remotes send for different buttons.So we are going to first assemble an IR receiver using NodeMCU board and an IR revceiver LED.
The IR receiver circuit:
IR Receiver LED such as the VS1838B will do the job as well.
Figure: 2 IR Receiver Circuit
Programming the IR receiver:
There is a library for Arduino Sketch IRremote8266 to send and receive infrared signals with multiple protocols using an ESP8266 based board like NodeMCU. You can find the installation instruction in the above link.
Now, Open Arduino software and open “IRrecvDemo” sketch under File>Examples>IRremote8266
Upload the code to the board and now we are ready to get signals from any remote.
Once the sketch is uploaded to our NodeMCU board,
Go to Tools > Serial Monitor and press any button on your remote (TV or set top box) pointing towards the IR receiver.
This way you get the Hex codes for all the buttons on your remote and write it down somewhere.
You can repeat the same process to decode the signals from as many remote you want.
I have decoded the IR signal of my TV remote and my STB (Tata sky) remote.
Here is a list of codes I received from TV and STB remotes using the IR receiver.
Figure: 3 Infrared signals from TV and STB remotes
Note: I have only mentioned codes for important functions here.
Now we have the signal codes we are ready to send the signals to out TV or STB through NodeMCU.
With the IRremote8266 library, it is easy to send IR signal. To see how easy it is, Open Arduino software and open “IRsendDemo” sketch under File>Examples>IRremote8266
If you go through the Sketch code, you will find that there are different modes or protocols of sending the IR signal code. Most remotes support NEC protocol. To know about different IR protocols refer this link. For my Set Top Box (Tata Sky) I will use RC6 protocol and for the TV (Samsung) I will use the built-in “SAMSUNG” protocol.
For example:
irsend.sendRC6(0xC0000C,24); // Sends power on/off signal to Set Top box
irsend.sendSAMSUNG(0xE0E0F00F,32) // Sends power on/off signal to TV
The IR transmitter circuit:
Now we configure our NodeMCU board to act as an IR blaster and the circuit is below.
Figure: 4 IR Transmitter Circuit
I have used an IR transmitter LED from an old remote but you can use any standard 940nm IR LED and NPN transistor (2N3904)
STEP 2: Add voice control
Through sinric we can connect our ESP8266 or Arduino boards to Amazon Alexa or Google Home for free.
Here is the link to github page to find some examples to connect our ESP8266 NodeMCU board to Alexa through sinric.com
To enable Alexa recognize our controllable devices we need to create our devices at sinric.com
When we create a controllable device at sinric.com and include the device ID in our code in ESP8266 board, we can access those devices through Alexa Smart home option. Once added, we can say something like “Alexa, Switch on Light 1” where “Light 1” is the name of the device we created at sinric.com and used the device ID in our Arduino code.
We are creating two devices at sinric.com as we want to control our TV and Set Top box.
Go to sinric.com
Login and you can see your dashboard something like shown below
Figure: 5 sinric.com dashboard
Copy the API key and save it somewhere for future reference.
Now click on “ADD” under “Smart Home Device”
In the pop up window type a name for your entertainment device.In this example I’ve named it “TV” obviously.You can write anything in the description and finally the most important thing is to assign our device a “Type”. If we assign it the type “Switch” we can only send on/off command. So we select “TV” as “Type” so that we can give it commands like “Change Channel or Mute TV etc.
Figure: 6 Add device at sinric.com
We will repeat the above process and add another device i.e. the Set Top box. We will select “TV” as device Type since we need channel and volume control as well. Here is how the dashboard looks now.
Do not forget to copy the device ID for both the devices as we need them in our code.
Figure: 6 Devices added
Once we have done creating the devices at sinric.com we can connect Alexa and Sinric by adding “Sinric” skill to our Alexa app.
- Go to Home > Skills section in your Alexa app and search for “Sinric” skill and enable it.
- Now Go to Home > Smart Home and then click “Add Device”.
- It will show all available devices (in our case two).
Assuming you have installed all the libraries required for IRremote8266 and sinric in Sketch referring to links given in this tutorial, we are ready to program our NodeMCU board.
Here is the sketch code for this project, which you can upload to the NodeMCU board
#include <Arduino.h> #include <ESP8266WiFi.h> #include <ESP8266WiFiMulti.h> #include <WebSocketsClient.h> #include <ArduinoJson.h> // IR part------------------------------------------------------------- #ifndef UNIT_TEST #include <Arduino.h> #endif #include <IRremoteESP8266.h> #include <IRsend.h> #define IR_LED 4 // ESP8266 GPIO pin to use. Recommended: 4 (D2). IRsend irsend(IR_LED); // Set the GPIO to be used to sending the message. ESP8266WiFiMulti WiFiMulti; WebSocketsClient webSocket; WiFiClient client; #define MyApiKey "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" // TODO: Change to your sinric API Key. Your API Key is displayed on sinric.com dashboard #define MySSID "your wifi ssid name" // TODO: Change to your Wifi network SSID #define MyWifiPassword "your wifi password" // TODO: Change to your Wifi network password #define API_ENDPOINT "" #define HEARTBEAT_INTERVAL 300000 // 5 Minutes uint64_t heartbeatTimestamp = 0; bool isConnected = false; void TogglePower(String deviceId) { if (deviceId == "xxxxxxxxxxxxxxxxxxxxxx") // TODO: Change to your device Id for TV { irsend.sendSAMSUNG(0xE0E040BF,32); // Send a power signal to Samsung TV. delay(500); } if (deviceId == "yyyyyyyyyyyyyyyyyyyyyyy") // TODO: Change to your device Id for Set Top Box { irsend.sendRC6(0xC0000C,24); // Send a power signal to tatasky. delay(500); } } void ToggleMute(String deviceId) { if (deviceId == "xxxxxxxxxxxxxxxxxxxxxxxxxx") // TODO: Change to your device Id for TV { irsend.sendSAMSUNG(0xE0E0F00F,32); // Send a power signal to Samsung TV. delay(500); } if (deviceId == "yyyyyyyyyyyyyyyyyyyyyyyyy") // TODO: Change to your device Id for Set Top Box { irsend.sendRC6(0xC0000D,24); delay(500); } } void webSocketEvent(WStype_t type, uint8_t * payload, size_t length) { switch(type) { case WStype_DISCONNECTED: isConnected = false; Serial.printf("[WSc] Webservice disconnected from sinric.com!\n"); break; case WStype_CONNECTED: { isConnected = true; Serial.printf("[WSc] Service connected to sinric.com at url: %s\n", payload); Serial.printf("Waiting for commands from sinric.com ...\n"); } break; case WStype_TEXT: { Serial.printf("[WSc] get text: %s\n", payload); DynamicJsonBuffer jsonBuffer; JsonObject& json = jsonBuffer.parseObject((char*)payload); String deviceId = json ["deviceId"]; String action = json ["action"]; if(action == "setPowerState") { String value = json ["value"]; if(value == "ON" || value == "OFF" ) { TogglePower(deviceId); } } else if(action == "ChangeChannel") { String ChannelName=json ["value"]["channelMetadata"]["name"]; String ChannelNumber=json ["value"]["channel"]["number"]; if(ChannelName=="national geographic"){ irsend.sendRC6(0xC00007,24); // Send IR code for remote button 7 delay(500); irsend.sendRC6(0xC00000,24); // Send IR code for remote button 0 delay(500); irsend.sendRC6(0xC00008,24); //Send IR code for remote button 8 delay(500); Serial.println("[WSc] channel: " + ChannelName); } if(ChannelName=="star movies"){ irsend.sendRC6(0xC00003,24); // Send IR code for remote button 3 delay(500); irsend.sendRC6(0xC00005,24); // Send IR code for remote button 5 delay(500); irsend.sendRC6(0xC00003,24); // Send IR code for remote button 3 delay(500); Serial.println("[WSc] channel: " + ChannelName); } } else if (action == "SetMute") { bool MuteAction=json ["value"]["mute"]; if(MuteAction==true || MuteAction==false){ ToggleMute(deviceId); } } else if (action == "test") { Serial.println("[WSc] received test command from sinric.com"); } } break; case WStype_BIN: Serial.printf("[WSc] get binary length: %u\n", length); 
break; } } void setup() { Serial.begin(115200); irsend.begin(); WiFiMulti.addAP(MySSID, MyWifiPassword); Serial.println(); Serial.print("Connecting to Wifi: "); Serial.println(MySSID); // Waiting for Wifi connect while(WiFiMulti.run() != WL_CONNECTED) { delay(500); Serial.print("."); } if(WiFiMulti.run() == WL_CONNECTED) { Serial.println(""); Serial.print("WiFi connected. "); Serial.print("IP address: "); Serial.println(WiFi.localIP()); } // server address, port and URL webSocket.begin("iot.sinric.com", 80, "/"); // event handler webSocket.onEvent(webSocketEvent); webSocket.setAuthorization("apikey", MyApiKey); // try again every 5000ms if connection has failed webSocket.setReconnectInterval(5000); // If you see 'class WebSocketsClient' has no member named 'setReconnectInterval' error update arduinoWebSockets } void loop() { webSocket.loop(); if(isConnected) { uint64_t now = millis(); // Send heartbeat in order to avoid disconnections during ISP resetting IPs over night. Thanks @MacSass if((now - heartbeatTimestamp) > HEARTBEAT_INTERVAL) { heartbeatTimestamp = now; webSocket.sendTXT("H"); } } }
Code Explanation:
When we ask Alexa to do something, Alexa processes the voice and passes response messages to our NodeMCU board through sinric.com. The message is a JSON formatted string called payload.
What we say to Alexa : Alexa, turn on TV
What Alexa replies: Ok
Message Received by our NodeMCU board (Payload):
{"deviceId":"5b2cbb1d77f4f95806b2dbd3","action":"setPowerState","value":"ON"}
So we send an IR signal (code) for switching on the device (with above device ID for example out TV) through the connected IR LED. The line of code is:
irsend.sendSAMSUNG(0xE0E040BF,32);
sendSAMSUNG() specific function provided by the IR library for Samsung TV. You can get similar functions for other TV models in the examples of IRremote8266 library.
E0E040BF is the Samsung TV remote IR code for power button
The power button of my Tata Sky remote will send the code C0000C
irsend.sendRC6(0xC0000C,24);
Here sendRC6 is the function for RC-6 IR protocol.
You can go through the sketch code and find out different signals we can send to out TV and STB.
You can even change the channel on the set Top Box by saying
Alexa, change channel to 708 on tatasky // channel number for National Geographic on my STB
Aleaxa will send the following payload to NodeMCU board through sinric.com
{"deviceId":"5b2cbb1d77f4f95806b2dbd3","action":"ChangeChannel","value":{"channel":{"number":"708"},"channelMetadata":{}}}
The code running on our NodeMCU board will read the JSON and send the channel number keys through the IR LED.
Similarly, we can say
Alexa, Change channel to national geographic on tatasky
Comment: Response received though chanel name comes under channelMetadata Payload{"deviceId":"5b2cbb1d77f4f95806b2dbd3","action":"ChangeChannel","value":{"channel":{},"channelMetadata":{"name":"national geographic"}}}
The code running on our NodeMCU board will read the JSON and send the corresponding channel number keys through the IR LED. This way we do not even have to remember the channel numbers for a channel because we have included the numbers in our code.
STEP 3: Test you new voice remote
That is it. You can now include codes for as many remote you have in your house and control them through Alexa device. | https://maker.pro/arduino/projects/voice-controlled-tv | CC-MAIN-2020-05 | refinedweb | 1,931 | 56.96 |
Our conversion from Angular to React
Did you notice we switched all our code from Angular to React last week?
*This the first post of 3 part series on how we converted on site to React.
Angular has gained a large amount of popularity in the JavaScript ecosystem in the last few years. The original version of Netlify.com was actually released as an Angular 1.3 app. Angular has gone through a number of changes in the last year including the release of a new Angular 2.0, which was quite different from the version we were using. We made the decision internaly to try React rather than attempt the migration path towards Angular 2.
Why React?
It’s no secret that React has dominated the JavaScript framework conversation in the past year. React itself is a library that enables developers to work with individual components. React also brings into light ideas that haven’t been popular for awhile, like markup with JSX.
At Netlify we have already noticed a benefit in development time and design collaborations, thanks to React.
// jsx sample render() { return ( <Form> <FormRow> <FormLabel /> <FormInput /> </FormRow> </Form> ); )
We unveiled our newly converted React UI last week and it went mostly unnoticed by our users, which is ideal.
This is the first time I have personally been on a team that has successful converted towards a new framework and here are steps we took to be successful with that.
1. Don’t Attempt Until Ready
If you are considering converting your code from Angular to React, I recommend you to wait until you are ready to slow down development. Remember — you’ll need to convert large amounts of code. This means no further upgrades to features and prioritizing bug fixes until the conversion is complete.
For a few months we rolled out some new features, including Slack Notifications and Deploy Previews, but only on the new React app. We were able to do this because we hid our new React UI behind an subdomain, which allowed us to constantly test out the UI in production without affecting the current user base.
The main thing that helped us during this transition is creating our app with the JAM stack philosophy (no more servers, host all your frontend on a CDN and use APIs for any moving parts). The functionality of our API never changed and allowed us to iterate on the UI while still staying open for business.
2. We Started New
The Netlify API is mature enough that it gave us the ability to start a brand new repo with brand new code.
The idea of updating code inline with your current repo is not ideal, so we avoided that all together. Nothing is scarier than using cutting edge technology while trying figure out bugs in different frameworks and syntaxes.
We were also able to grant access to specific users for testing out the new UI and receive immediate feedback on new changes and features without the worry of sending new bugs to production. This enabled us to really think through some changes and take advantage and change things we really didn’t like in the Angular app (i.e. the new Sign Up flow).
3. Invest in Learning
Set some time to not write code and just learn React. The library is generally small but ecosystem comes with the requirement to learn all, but not limited to Webpack, Mocha, Redux, etc.
Netlify even opted to send some of our developers to ReactJS training. In addtition to training we even hired the author of Pro React during the transition to React. Investing in knowledge helped save us from some new code spaghetti and gave us the confindence to work in a new framework.
So What’s New?
There are some more small changes we will be rolling out in the next few weeks that we are excited to announce, now that we have completed this upgrade.
Netlify is still the same place where you can deploy modern static websites with continuous deployment from a single click. Stay tuned for more posts on more specific new features and how we approached this conversion in code. Though this seems like a near perfect transition, there might be bugs hidden in the walls of our new app. If you see something, say something – If you find a bug please report it in the platform.
Find out how to convert your Angular Controllers to React Components
Add your thoughts in the comments | https://www.netlify.com/blog/2016/07/26/our-conversion-from-angular-to-react/ | CC-MAIN-2020-05 | refinedweb | 751 | 69.11 |
so I recently got my self an esp 8266 board and I was trying to do a project of convering it into a deather attacker. I followed steps from spacehuhn,s post at GitHub. after doing everything when I copied the code into adruino and started to upload it, adruino gave me the following error
Arduino: 1.8.2 (Windows 10), Board: "NodeMCU 0.9 (ESP-12 Module), 80 MHz, Serial, 115200, 4M (1M SPIFFS)"
"C:\Users\Dell\AppData\Local\Temp\Temp1_esp8266_deauther-master.zip\esp8266_deauther-master\esp8266_deauther\esp8266_deauther.ino:60:18: fatal error: data.h: No such file or directory
#include "data.h"
^
compilation terminated.
exit status 1
Error compiling for board NodeMCU 0.9 (ESP-12 Module).
This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences."
Can somebody please tell me what the problem is?
thanks in advance. | http://www.esp8266.com/viewtopic.php?f=8&p=66039&sid=330feed622870f813492a74b8b913c7a | CC-MAIN-2017-22 | refinedweb | 147 | 53.37 |
I'm trying to play some .flac files using PySide's Phonon module (on Mac if it makes a difference) but it's not an available mimetype for playback. Is there a way to enable this or a plugin I need to install?
Phonon does not directly support audio formats but uses the underlying OS capabilities. The answer therefore depends on if there is a service registered for the mime type
audio/flac. For me there is and here is a short example script to find out:
from PySide import QtCore from PySide.phonon import Phonon if __name__ == '__main__': app = QtCore.QCoreApplication([]) app.setApplicationName('test') mime_types = Phonon.BackendCapabilities.availableMimeTypes() print(mime_types) app.quit() | https://codedump.io/share/3aJoXeX5vw0v/1/is-it-possible-to-play-flac-files-in-phonon | CC-MAIN-2017-09 | refinedweb | 113 | 51.65 |
Extensions built with WebExtension APIs are designed to be compatible with Chrome and Opera extensions. As far as possible, extensions written for those browsers should run on Firefox with minimal changes.
However, there are significant differences between Chrome, Firefox, and Edge. In particular:
Support for JavaScript APIs differs across browsers. See Browser support for JavaScript APIs for more details.
Support for
manifest.jsonkeys differs across browsers. See the "Browser compatibility" section in the
manifest.jsonpage for more details.
Javascript APIs:
- In Chrome
- JavaScript APIs are accessed under the
chromenamespace. (cf. Chrome bug 798169)
- In Firefox and Edge
- They are accessed under the
browsernamespace.
Asynchronous APIs:
- In Chrome and Edge
- Asynchronous APIs are implemented using callbacks. (cf. Chrome bug 328932)
- In Firefox
- Asynchronous APIs are implemented using promises.
The rest of this page summarizes these and other incompatibilities.
JavaScript APIs
Callbacks and the chrome.* namespace
- In Chrome
- Extensions access privileged JavaScript APIs using the
chromenamespace.
chrome.browserAction.setIcon({path: "path/to/icon.png"});
- In Firefox (via the WebExtension API)
- The equivalent APIs are accessed using the
browsernamespace.
browser.browserAction.setIcon({path: "path/to/icon.png"});
Many of the APIs are asynchronous.
- In Chrome
- Asynchronous APIs use callbacks to return values, and
runtime.lastErrorto communicate errors.
function logCookie(c) { if (chrome.runtime.lastError) { console.error(chrome.runtime.lastError); } else { console.log(c); } } chrome.cookies.set( {url: ""}, logCookie );
- In Firefox (via the WebExtension API)
- Asynchronous APIs use promises to return values instead.
function logCookie(c) { console.log(c); } function logError(e) { console.error(e); } let setCookie = browser.cookies.set( {url: ""} ); setCookie.then(logCookie, logError);
Firefox supports both the chrome and browser namespaces
As a porting aid, the Firefox implementation of WebExtensions supports
chrome, using callbacks, as well as
browser, using promises. This means that many Chrome extensions will just work in Firefox without any changes.
Note: However, this is not part of the WebExtensions standard. and may not be supported by all compliant browsers.
If you choose to write your extension to use
browser and promises, then Firefox also provides a polyfill that will enable it to run in Chrome:.
Partially supported APIs
The page Browser support for JavaScript APIs includes compatibility tables for all APIs that have any support in Firefox. Where there are caveats around support for a given API item, this is indicated in these tables with an asterisk "*" and in the reference page for the API item, the caveats are explained.
These tables are generated from compatibility data stored as JSON files in GitHub.
The rest of this section describes compatibility issues that are not already captured in the tables.
notifications
For
notifications.create(), with
type "basic":
- In Firefox
iconUrlis optional.
- In Chrome
iconUrlis required.
When the user clicks on a notification:
- In Firefox
- The notification is cleared immediately.
- In Chrome
- This is not the case.
If you call
notifications.create()more than once in rapid succession:
- In Firefox, the notifications may not display at all. Waiting to make subsequent calls until within the
chrome.notifications.create()callback function is not a sufficient delay to prevent this.
proxy
Firefox's Proxy API followed a completely different design from Chrome's Proxy API.
- In Chrome
- An extension can register a PAC file, but can also define explicit proxying rules.
- In Firefox
- The proxy API only supports the PAC file approach (since this is also possible using extended PAC files).Note: Because this API is incompatible with Chrome's
proxyAPI, the Firefox proxy API is only available through the
browsernamespace.
tabs
When using
tabs.executeScript()or
tabs.insertCSS():
- In Firefox
- Relative URLs passed are resolved relative to the current page URL.
- In Chrome
- These URLs are resolved relative to the extension's base URL.
To work cross-browser, you can specify the path as an absolute URL, starting at the extension's root, like this:
/path/to/script.js
When :
- In Firefox
- Querying tabs by URL with
tabs.query()requires the
"tabs"permission.
- In Chrome
- querying tabs by URL with
tabs.query()It's possible without the
"tabs"permission, but will limit results to tabs whose URLs match host permissions.
When calling
tabs.remove():
- In Firefox
- The
tabs.remove()promise is fulfilled after the
beforeunloadevent.
- In Chrome
- The callback does not wait for
beforeunload.
webRequest
- In Firefox
- Requests can be redirected only if their original URL uses the
http:or
https:scheme.
- The
activeTabpermission does not allow intercepting network requests in the current tab. (See bug 1617479)
- Events are not fired for system requests (for example, extension upgrades or searchbar suggestions).
- From Firefox 57 onwards: Firefox makes an exception for extensions that need to intercept
webRequest.onAuthRequiredfor proxy authorization. See the documentation for
webRequest.onAuthRequired.
- If an extension wants to redirect a public (e.g., HTTPS) URL to an extension page, the extension's
manifest.jsonfile must contain a
web_accessible_resourceskey with the URL of the extension page.
Note: Any website may then link or redirect to that URL, and extensions should treat any input (POST data, for example) as if it came from an untrusted source, just as a normal web page should.
When using
browser.webRequest.*:
- In Firefox (starting from Firefox 52)
- Some of the
browser.webRequest.*APIs allow returning Promises that resolves
webRequest.BlockingResponseasynchronously.
- In Chrome
- Only
webRequest.onAuthRequiredsupports asynchronous
webRequest.BlockingResponsevia supplying
'asyncBlocking'.
windows
- In Firefox
onFocusChangedwill trigger multiple times for a given focus change.
Unsupported APIs
declarativeContent
- In Firefox
- Chrome's
declarativeContentAPI has not yet been implemented in Firefox.
- In addition, Firefox will not be supporting the
declarativeContent.RequestContentScriptAPI (which is rarely used, and is unavailable in stable releases of Chrome).
Miscellaneous incompatibilities
URLs in CSS
- In Firefox
- URLs in injected CSS files are resolved relative to the CSS file itself.
- In Chrome
- URLs in injected CSS files are resolved relative to the page they are injected into.
Additional incompatibilities
web_accessible_resources
- In Chrome
- When a resource is listed in
web_accessible_resources, it is accessible as
chrome-extension://«your-extension-id»/«path/to/resource». The extension ID is fixed for a given extension.
- In Firefox
- Resources are assigned a random UUID that changes for every instance of Firefox:
moz-extension://«random-UUID»/«path/to/resource». This randomness can prevent you from doing a few things, such as add your specific extension's URL to another domain's CSP policy.
Manifest "key" property
- In Chrome
- When working with an unpacked extension, the manifest may include a
"key"property to pin the extension ID across different machines. This is mainly useful when working with
web_accessible_resources.
- In Firefox
- Since Firefox uses random UUIDs for
web_accessible_resources, this property is unsupported.
Content script requests happen in the context of extension, not content page
- In Chrome
- When a content script makes a request (for example, using
fetch()) to a relative URL (like
/api), it will be sent to.
- In Firefox
- When a content script makes a request, you must provide absolute URLs.
Sharing variables between content scripts
- In Firefox
- You cannot share variables between content scripts by assigning them to
this.{variableName}in one script and then attempting to access them using
window.{variableName}in another. This is a limitation created by the sandbox environment in Firefox. This limitation may be removed, see bug 1208775.
Content script lifecycle during navigation
- In Chrome
- Content scripts are destroyed when the user navigates away from a web page. If the user then returns to the page through history, by clicking the back button, the content script is injected into the web page again.
- In Firefox
Content scripts remain injected in a web page after the user has navigated away, however, window object properties are destroyed. For example, if a content script sets
window.prop1 = "prop"and the user then navigates away and returns to the page
window.prop1is undefined. This issue is tracked in bug 1525400 .
To mimic the behavior of Chrome, listen for the pageshow and pagehide events. Then simulate the injection or destruction of the content script.
"per-tab" zoom behavior
See
tabs.ZoomSettingsScope
- In Chrome
- Zoom changes are reset on navigation; navigating a tab will always load pages with their per-origin zoom factors.
- In Firefox
The zoom level persists across page loads and navigation within the tab. in Chrome-based browsers
manifest.json keys
The main
manifest.json page includes a table describing browser support for
manifest.json keys. Where there are caveats around support for a given key, this is indicated in the table with an asterisk "*" and in the reference page for the key, the caveats are explained.
These tables are generated from compatibility data stored as JSON files in GitHub.
Native messaging. | https://developer.mozilla.org/fa/docs/Mozilla/Add-ons/WebExtensions/Chrome_incompatibilities | CC-MAIN-2020-34 | refinedweb | 1,421 | 51.14 |
Davanum Srinivas a écrit :
>
> Donald,
>
> As of right now, the code is sub-sub-sub-optimal as it loads each of the specified Logicsheets
in
> the XSP and also recursively loads all Logicsheet's referenced in those first set of
LogicSheet's.
> But now we know it can be done....Any thoughts on optimization? all the code is in addLogicSheet
> function of AbstractMarkupLanguage.java
>
> Thanks,
> dims
>
> --- Donald Ball <balld@webslingerZ.com> wrote:
> > On Thu, 12 Apr 2001, Davanum Srinivas wrote:
> >
> > > > * if you fail to declare a logicsheet namespace that's not used in your
> > > > source xsp page, but only in a logicsheet it calls, that logicsheet is
not
> > > > applied. i think this is a well known bug, is anyone working on it?
> > >
> > > Just fixed this. Please give it a shot.
> >
> > actually, i just found a bug in this... if i use elements in the esql
> > logicsheet both in my xsp page and in one of the logicsheets it calls, the
> > esql logicsheet is applied twice, causing compilation problems due to
> > duplicate method, variable, and inner class definitions. again, one
> > solution would be to pass a parameter to the logicsheet that indicates if
> > it's the first time it's been applied or not... can't decide if that's
> > optimal or not tho.
> >
> > - donald
> >
> >
> =====
> Davanum Srinivas, JNI-FAQ Manager
>
>
Dims, I looked at your changes at XSPMarkupLanguage et al, but didn't
understand how they solve Donald's problem. What I understand is that
there's a need for the logicsheet transformer (i.e. the XSL) to know
that it already has been applied in the generation process in order
avoid redeclaring some class-level elements (often implemented within a
<template match="xsp:page">).
If this is the problem, AbstractMarkupLanguage could give the logicsheet
a boolean parameter indicating if it's applied for the first time or
not. A logicsheet could then use it as follows :
<xsl:param
<xsl:template
<xsp:page>
<xsl:apply-templates
<xsl:if test="$
<xsp:structure>
<!-- inner classes, methods and class members declarations -->
</xsp:structure>
</xsl:if>
...
</xsp:page>
</xsl:template>
What do you think of it ?
--
Sylvain Wallez
Anyware Technologies -
---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200104.mbox/%3C3AD6BA24.EB0A78BB@anyware-tech.com%3E | CC-MAIN-2016-40 | refinedweb | 375 | 56.25 |
Provided by: libscope-upper-perl_0.32-1_amd64
NAME
Scope::Upper - Act on upper scopes.
VERSION
Version 0.32
SYNOPSIS
"reap", "localize", "localize_elem", "localize_delete" and "WORDS" : package Scope; use Scope::Upper qw< reap localize localize_elem localize_delete :words >; sub new { my ($class, $name) = @_; localize '$tag' => bless({ name => $name }, $class) => UP; reap { print Scope->tag->name, ": end\n" } UP; } # Get the tag stored in the caller namespace sub tag { my $l = 0; my $pkg = __PACKAGE__; $pkg = caller $l++ while $pkg eq __PACKAGE__; no strict 'refs'; ${$pkg . '::tag'}; } sub name { shift->{name} } # Locally capture warnings and reprint them with the name prefixed sub catch { localize_elem '%SIG', '__WARN__' => sub { print Scope->tag->name, ': ', @_; } => UP; } # Locally clear @INC sub private { for (reverse 0 .. $#INC) { # First UP is the for loop, second is the sub boundary localize_delete '@INC', $_ => UP UP; } } ... package UserLand; { Scope->new("top"); # initializes $UserLand::tag { Scope->catch; my $one = 1 + undef; # prints "top: Use of uninitialized value..." { Scope->private; eval { require Cwd }; print $@; # prints "Can't locate Cwd.pm in @INC } # (@INC contains:) at..." require Cwd; # loads Cwd.pm } } # prints "top: done" "unwind" and "want_at" : package Try; use Scope::Upper qw<unwind want_at :words>; sub try (&) { my @result = shift->(); my $cx = SUB UP; # Point to the sub above this one unwind +(want_at($cx) ? @result : scalar @result) => $cx; } ... sub zap { try { my @things = qw<a b c>; return @things; # returns to try() and then outside zap() # not reached }; # not reached } my @stuff = zap(); # @stuff contains qw<a b c> my $stuff = zap(); # $stuff contains 3 "uplevel" : package Uplevel; use Scope::Upper qw<uplevel CALLER>; sub target { faker(@_); } sub faker { uplevel { my $sub = (caller 0)[3]; print "$_[0] from $sub()"; } @_ => CALLER(1); } target('hello'); # "hello from Uplevel::target()" "uid" and "validate_uid" : use Scope::Upper qw<uid validate_uid>; my $uid; { $uid = uid(); { if ($uid eq uid(UP)) { # yes ... } if (validate_uid($uid)) { # yes ... } } } if (validate_uid($uid)) { # no ... }
DESCRIPTION
This module lets you defer actions at run-time that will take place when the control flow returns into an upper scope. Currently, you can: · hook an upper scope end with "reap" ; · localize variables, array/hash values or deletions of elements in higher contexts with respectively "localize", "localize_elem" and "localize_delete" ; · return values immediately to an upper level with "unwind", "yield" and "leave" ; · gather information about an upper context with "want_at" and "context_info" ; · execute a subroutine in the setting of an upper subroutine stack frame with "uplevel" ; · uniquely identify contexts with "uid" and "validate_uid".
FUNCTIONS
In all those functions, $context refers to the target scope. You have to use one or a combination of "WORDS" to build the $context passed to these functions. This is needed in order to ensure that the module still works when your program is ran in the debugger. The only thing you can assume is that it is an absolute indicator of the frame, which means that you can safely store it at some point and use it when needed, and it will still denote the original scope. "reap" reap { ... }; reap { ... } $context; &reap($callback, $context); Adds a destructor that calls $callback (in void context) when the upper scope represented by $context ends. "localize" localize $what, $value; localize $what, $value, $context; Introduces a "local" delayed to the time of first return into the upper scope denoted by $context. $what can be : · A glob, in which case $value can either be a glob or a reference. "localize" follows then the same syntax as "local *x = $value". For example, if $value is a scalar reference, then the "SCALAR" slot of the glob will be set to $$value - just like "local *x = \1" sets $x to 1. · A string beginning with a sigil, representing the symbol to localize and to assign to. If the sigil is '$', "localize" follows the same syntax as "local $x = $value", i.e. $value isn't dereferenced. For example, localize '$x', \'foo' => HERE; will set $x to a reference to the string 'foo'. Other sigils ('@', '%', '&' and '*') require $value to be a reference of the corresponding type. When the symbol is given by a string, it is resolved when the actual localization takes place and not when "localize" is called. Thus, if the symbol name is not qualified, it will refer to the variable in the package where the localization actually takes place and not in the one where the "localize" call was compiled. For example, { package Scope; sub new { localize '$tag', $_[0] => UP } } { package Tool; { Scope->new; ... } } will localize $Tool::tag and not $Scope::tag. If you want the other behaviour, you just have to specify $what as a glob or a qualified name. Note that if $what is a string denoting a variable that wasn't declared beforehand, the relevant slot will be vivified as needed and won't be deleted from the glob when the localization ends. This situation never arises with "local" because it only compiles when the localized variable is already declared. Although I believe it shouldn't be a problem as glob slots definedness is pretty much an implementation detail, this behaviour may change in the future if proved harmful. "localize_elem" localize_elem $what, $key, $value; localize_elem $what, $key, $value, $context; Introduces a "local $what[$key] = $value" or "local $what{$key} = $value" delayed to the time of first return into the upper scope denoted by $context. Unlike "localize", $what must be a string and the type of localization is inferred from its sigil. The two only valid types are array and hash ; for anything besides those, "localize_elem" will throw an exception. $key is either an array index or a hash key, depending of which kind of variable you localize. If $what is a string pointing to an undeclared variable, the variable will be vivified as soon as the localization occurs and emptied when it ends, although it will still exist in its glob. "localize_delete" localize_delete $what, $key; localize_delete $what, $key, $context; Introduces the deletion of a variable or an array/hash element delayed to the time of first return into the upper scope denoted by $context. 
$what can be: · A glob, in which case $key is ignored and the call is equivalent to "local *x". · A string beginning with '@' or '%', for which the call is equivalent to respectively "local $a[$key]; delete $a[$key]" and "local $h{$key}; delete $h{$key}". · A string beginning with '&', which more or less does "undef &func" in the upper scope. It's actually more powerful, as &func won't even "exists" anymore. $key is ignored. "unwind" unwind; unwind @values, $context; Returns @values from the subroutine, eval or format context pointed by or just above $context, and immediately restarts the program flow at this point - thus effectively returning @values to an upper scope. If @values is empty, then the $context parameter is optional and defaults to the current context (making the call equivalent to a bare "return;") ; otherwise it is mandatory. The upper context isn't coerced onto @values, which is hence always evaluated in list context. This means that my $num = sub { my @a = ('a' .. 'z'); unwind @a => HERE; # not reached }->(); will set $num to 'z'. You can use "want_at" to handle these cases. "yield" yield; yield @values, $context; Returns @values from the context pointed by or just above $context, and immediately restarts the program flow at this point. If @values is empty, then the $context parameter is optional and defaults to the current context ; otherwise it is mandatory. "yield" differs from "unwind" in that it can target any upper scope (besides a "s///e" substitution context) and not necessarily a sub, an eval or a format. Hence you can use it to return values from a "do" or a "map" block : my $now = do { local $@; eval { require Time::HiRes } or yield time() => HERE; Time::HiRes::time(); }; my @uniq = map { yield if $seen{$_}++; # returns the empty list from the block ... } @things; Like for "unwind", the upper context isn't coerced onto @values. You can use the fifth value returned by "context_info" to handle context coercion. "leave" leave; leave @values; Immediately returns @values from the current block, whatever it may be (besides a "s///e" substitution context). "leave" is actually a synonym for "yield HERE", while "leave @values" is a synonym for "yield @values, HERE". Like for "yield", you can use the fifth value returned by "context_info" to handle context coercion. "want_at" my $want = want_at; my $want = want_at $context; Like "wantarray" in perlfunc, but for the subroutine, eval or format context located at or just above $context. It can be used to revise the example showed in "unwind" : my $num = sub { my @a = ('a' .. 'z'); unwind +(want_at(HERE) ? @a : scalar @a) => HERE; # not reached }->(); will rightfully set $num to 26. "context_info" my ($package, $filename, $line, $subroutine, $hasargs, $wantarray, $evaltext, $is_require, $hints, $bitmask, $hinthash) = context_info $context; Gives information about the context denoted by $context, akin to what "caller" in perlfunc provides but not limited only to subroutine, eval and format contexts. When $context is omitted, it defaults to the current context. 
The returned values are, in order : · (index 0) : the namespace in use when the context was created ; · (index 1) : the name of the file at the point where the context was created ; · (index 2) : the line number at the point where the context was created ; · (index 3) : the name of the subroutine called for this context, or "undef" if this is not a subroutine context ; · (index 4) : a boolean indicating whether a new instance of @_ was set up for this context, or "undef" if this is not a subroutine context ; · (index 5) : the context (in the sense of "wantarray" in perlfunc) in which the context (in our sense) is executed ; · (index 6) : the contents of the string being compiled for this context, or "undef" if this is not an eval context ; · (index 7) : a boolean indicating whether this eval context was created by "require", or "undef" if this is not an eval context ; · (index 8) : the value of the lexical hints in use when the context was created ; · (index 9) : a bit string representing the warnings in use when the context was created ; · (index 10) : a reference to the lexical hints hash in use when the context was created (only on perl 5.10 or greater). "uplevel" my @ret = uplevel { ...; return @ret }; my @ret = uplevel { my @args = @_; ...; return @ret } @args, $context; my @ret = &uplevel($callback, @args, $context); Executes the code reference $callback with arguments @args as if it were located at the subroutine stack frame pointed by $context, effectively fooling "caller" and "die" into believing that the call actually happened higher in the stack. The code is executed in the context of the "uplevel" call, and what it returns is returned as-is by "uplevel". sub target { faker(@_); } sub faker { uplevel { map { 1 / $_ } @_; } @_ => CALLER(1); } my @inverses = target(1, 2, 4); # @inverses contains (0, 0.5, 0.25) my $count = target(1, 2, 4); # $count is 3 Note that if @args is empty, then the $context parameter is optional and defaults to the current context ; otherwise it is mandatory. Sub::Uplevel also implements a pure-Perl version of "uplevel". Both are identical, with the following caveats : · The Sub::Uplevel implementation of "uplevel" may execute a code reference in the context of any upper stack frame. The Scope::Upper version can only uplevel to a subroutine stack frame, and will croak if you try to target an "eval" or a format. · Exceptions thrown from the code called by this version of "uplevel" will not be caught by "eval" blocks between the target frame and the uplevel call, while they will for Sub::Uplevel's version. This means that : eval { sub { local $@; eval { sub { uplevel { die 'wut' } CALLER(2); # for Scope::Upper # uplevel(3, sub { die 'wut' }) # for Sub::Uplevel }->(); }; print "inner block: $@"; $@ and exit; }->(); }; print "outer block: $@"; will print "inner block: wut..." with Sub::Uplevel and "outer block: wut..." with Scope::Upper. · Sub::Uplevel globally overrides the Perl keyword "caller", while Scope::Upper does not. A simple wrapper lets you mimic the interface of "uplevel" in Sub::Uplevel : use Scope::Upper; sub uplevel { my $frame = shift; my $code = shift; my $cxt = Scope::Upper::CALLER($frame); &Scope::Upper::uplevel($code => @_ => $cxt); } Albeit the three exceptions listed above, it passes all the tests of Sub::Uplevel. "uid" my $uid = uid; my $uid = uid $context; Returns an unique identifier (UID) for the context (or dynamic scope) pointed by $context, or for the current context if $context is omitted. 
This UID will only be valid for the life time of the context it represents, and another UID will be generated next time the same scope is executed. my $uid; { $uid = uid; if ($uid eq uid()) { # yes, this is the same context ... } { if ($uid eq uid()) { # no, we are one scope below ... } if ($uid eq uid(UP)) { # yes, UP points to the same scope as $uid ... } } } # $uid is now invalid { if ($uid eq uid()) { # no, this is another block ... } } For example, each loop iteration gets its own UID : my %uids; for (1 .. 5) { my $uid = uid; $uids{$uid} = $_; } # %uids has 5 entries The UIDs are not guaranteed to be numbers, so you must use the "eq" operator to compare them. To check whether a given UID is valid, you can use the "validate_uid" function. "validate_uid" my $is_valid = validate_uid $uid; Returns true if and only if $uid is the UID of a currently valid context (that is, it designates a scope that is higher than the current one in the call stack). my $uid; { $uid = uid(); if (validate_uid($uid)) { # yes ... } { if (validate_uid($uid)) { # yes ... } } } if (validate_uid($uid)) { # no ... }
CONSTANTS
"SU_THREADSAFE" True iff the module could have been built when thread-safety features.
WORDS
Constants "TOP" my $top_context = TOP; Returns the context that currently represents the highest scope. "HERE" my $current_context = HERE; The context of the current scope. Getting a context from a context For any of those functions, $from is expected to be a context. When omitted, it defaults to the current context. "UP" my $upper_context = UP; my $upper_context = UP $from; The context of the scope just above $from. If $from points to the top-level scope in the current stack, then a warning is emitted and $from is returned (see "DIAGNOSTICS" for details). "SUB" my $sub_context = SUB; my $sub_context = SUB $from; The context of the closest subroutine above $from. If $from already designates a subroutine context, then it is returned as-is ; hence "SUB SUB == SUB". If no subroutine context is present in the call stack, then a warning is emitted and the current context is returned (see "DIAGNOSTICS" for details). "EVAL" my $eval_context = EVAL; my $eval_context = EVAL $from; The context of the closest eval above $from. If $from already designates an eval context, then it is returned as-is ; hence "EVAL EVAL == EVAL". If no eval context is present in the call stack, then a warning is emitted and the current context is returned (see "DIAGNOSTICS" for details). Getting a context from a level Here, $level should denote a number of scopes above the current one. When omitted, it defaults to 0 and those functions return the same context as "HERE". "SCOPE" my $context = SCOPE; my $context = SCOPE $level; The $level-th upper context, regardless of its type. If $level points above the top-level scope in the current stack, then a warning is emitted and the top-level context is returned (see "DIAGNOSTICS" for details). "CALLER" my $context = CALLER; my $context = CALLER $level; The context of the $level-th upper subroutine/eval/format. It kind of corresponds to the context represented by "caller $level", but while e.g. "caller 0" refers to the caller context, "CALLER 0" will refer to the top scope in the current context. If $level points above the top-level scope in the current stack, then a warning is emitted and the top- level context is returned (see "DIAGNOSTICS" for details). Examples Where "reap" fires depending on the $cxt : sub { eval { sub { { reap \&cleanup => $cxt; ... } # $cxt = SCOPE(0) = HERE ... }->(); # $cxt = SCOPE(1) = UP = SUB = CALLER(0) ... }; # $cxt = SCOPE(2) = UP UP = UP SUB = EVAL = CALLER(1) ... }->(); # $cxt = SCOPE(3) = SUB UP SUB = SUB EVAL = CALLER(2) ... Where "localize", "localize_elem" and "localize_delete" act depending on the $cxt : sub { eval { sub { { localize '$x' => 1 => $cxt; # $cxt = SCOPE(0) = HERE ... } # $cxt = SCOPE(1) = UP = SUB = CALLER(0) ... }->(); # $cxt = SCOPE(2) = UP UP = UP SUB = EVAL = CALLER(1) ... }; # $cxt = SCOPE(3) = SUB UP SUB = SUB EVAL = CALLER(2) ... }->(); # $cxt = SCOPE(4), UP SUB UP SUB = UP SUB EVAL = UP CALLER(2) = TOP ... Where "unwind", "yield", "want_at", "context_info" and "uplevel" point to depending on the $cxt: sub { eval { sub { { unwind @things => $cxt; # or yield @things => $cxt # or uplevel { ... } $cxt ... } ... }->(); # $cxt = SCOPE(0) = SCOPE(1) = HERE = UP = SUB = CALLER(0) ... }; # $cxt = SCOPE(2) = UP UP = UP SUB = EVAL = CALLER(1) (*) ... }->(); # $cxt = SCOPE(3) = SUB UP SUB = SUB EVAL = CALLER(2) ... # (*) Note that uplevel() will croak if you pass that scope frame, # because it cannot target eval scopes.
DIAGNOSTICS
"Cannot target a scope outside of the current stack" This warning is emitted when "UP", "SCOPE" or "CALLER" end up pointing to a context that is above the top-level context of the current stack. It indicates that you tried to go higher than the main scope, or to point across a "DESTROY" method, a signal handler, an overloaded or tied method call, a "require" statement or a "sort" callback. In this case, the resulting context is the highest reachable one. "No targetable %s scope in the current stack" This warning is emitted when you ask for an "EVAL" or "SUB" context and no such scope can be found in the call stack. The resulting context is the current one.
EXPORT
The functions "reap", "localize", "localize_elem", "localize_delete", "unwind", "yield", "leave", "want_at", "context_info" and "uplevel" are only exported on request, either individually or by the tags ':funcs' and ':all'. The constant "SU_THREADSAFE" is also only exported on request, individually or by the tags ':consts' and ':all'. Same goes for the words "TOP", "HERE", "UP", "SUB", "EVAL", "SCOPE" and "CALLER" that are only exported on request, individually or by the tags ':words' and ':all'.
CAVEATS
It is not possible to act upon a scope that belongs to another perl 'stack', i.e. to target a scope across a "DESTROY" method, a signal handler, an overloaded or tied method call, a "require" statement or a "sort" callback. Be careful that local variables are restored in the reverse order in which they were localized. Consider those examples: local $x = 0; { reap sub { print $x } => HERE; local $x = 1; ... } # prints '0' ... { local $x = 1; reap sub { $x = 2 } => HERE; ... } # $x is 0 The first case is "solved" by moving the "local" before the "reap", and the second by using "localize" instead of "reap". The effects of "reap", "localize" and "localize_elem" can't cross "BEGIN" blocks, hence calling those functions in "import" is deemed to be useless. This is an hopeless case because "BEGIN" blocks are executed once while localizing constructs should do their job at each run. However, it's possible to hook the end of the current scope compilation with B::Hooks::EndOfScope. Some rare oddities may still happen when running inside the debugger. It may help to use a perl higher than 5.8.9 or 5.10.0, as they contain some context-related fixes. Calling "goto" to replace an "uplevel"'d code frame does not work : · for a "perl" older than the 5.8 series ; · for a "DEBUGGING" "perl" run with debugging flags set (as in "perl -D ...") ; · when the runloop callback is replaced by another module. In those three cases, "uplevel" will look for a "goto &sub" statement in its callback and, if there is one, throw an exception before executing the code. Moreover, in order to handle "goto" statements properly, "uplevel" currently has to suffer a run-time overhead proportional to the size of the callback in every case (with a small ratio), and proportional to the size of all the code executed as the result of the "uplevel" call (including subroutine calls inside the callback) when a "goto" statement is found in the "uplevel" callback. Despite this shortcoming, this XS version of "uplevel" should still run way faster than the pure-Perl version from Sub::Uplevel. Starting from "perl" 5.19.4, it is unfortunately no longer possible to reliably throw exceptions from "uplevel"'d code while the debugger is in use. This may be solved in a future version depending on how the core evolves.
DEPENDENCIES
perl 5.6.1. A C compiler. This module may happen to build with a C++ compiler as well, but don't rely on it, as no guarantee is made in this regard. XSLoader (core since perl 5.6.0).
SEE ALSO
"local" in perlfunc, "Temporary Values via local()" in perlsub. Alias, Hook::Scope, Scope::Guard, Guard. Sub::Uplevel. Continuation::Escape is a thin wrapper around Scope::Upper that gives you a continuation passing style interface to "unwind". It's easier to use, but it requires you to have control over the scope where you want to return. Scope::Escape.
AUTHOR
Vincent Pit "<vpit at cpan.org>". You can contact me by mail or on "irc.perl.org" (vincent).
BUGS
Please report any bugs or feature requests to "bug-scope-upper at rt.cpan.org", or through the web interface at <>. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
SUPPORT
You can find documentation for this module with the perldoc command. perldoc Scope::Upper
ACKNOWLEDGEMENTS
Inspired by Ricardo Signes. The reimplementation of a large part of this module for perl 5.24 was provided by David Mitchell. His work was sponsored by the Perl 5 Core Maintenance Grant from The Perl Foundation. Thanks to Shawn M. Moore for motivation.
Copyright 2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018,2019 Vincent Pit, all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://manpages.ubuntu.com/manpages/eoan/man3/Scope::Upper.3pm.html | CC-MAIN-2020-29 | refinedweb | 3,669 | 61.16 |
Tooltips
The Tooltip component is a styled container for content that should be displayed when triggered by an OverlayTrigger or TooltipTrigger. It does not exhibit any dynamic behavior on its own.
TooltipTriggers are simpler to use, and their associated Tooltips are shown and hidden using CSS
visibility rules.
In contrast to OverlayTriggers, the markup always exists in the DOM.
TooltipTriggers are an easy way to create CSS-driven tooltips with the tooltip content created inline with the triggering element. The content of the tooltip is wrapped in a Tooltip component for ease of styling. Please note that the TooltipTrigger will add a lot of markup to the DOM if you are using it in a highly repeated layout.
ReferenceError: React is not defined
Since the tooltip property is of type Node, you may add markup to the tooltip, such as links.
ReferenceError: React is not defined
The trigger is manual, so the visibility of the tooltip is controlled by the display prop.
ReferenceError: React is not defined
Props
Tooltip props
TooltipTrigger props
Imports
Import React components (including CSS):
import {Tooltip, TooltipTrigger} from 'pivotal-ui/react/tooltip';
Import CSS only:
import 'pivotal-ui/css/tooltips'; | https://styleguide.pivotal.io/components/tooltips/ | CC-MAIN-2021-04 | refinedweb | 194 | 52.49 |
Some.
To avoid the fix-it-yourself scenario, you can thoroughly test any scripting interpreter you plan to support with your application. For each interpreter, ensure that the interpreter gracefully handles the most common usage scenarios, that big memory chunks don't leak when you hammer on the interpreter with long and demanding scripts, and that nothing unexpected happens when you put your program and scripting interpreters in the hands of demanding beta-testers. Yes, such up-front testing costs time and resources; nevertheless, testing is time well spent.
The solution: Keep it simple
If you must support scripting in your Java application, pick a scripting interpreter that best suits your application needs and customer base. Thusly, you simplify the interpreter-integration code, reduce customer support costs, and improve your application's consistency. The hard question is: if you must standardize on just one scripting language, which one do you choose?
I compared several scripting interpreters, starting with a list of languages including Tcl, Python, Perl, JavaScript, and BeanShell. Then, without doing a detailed analysis, I bumped Perl from consideration. Why? Because there isn't a Perl interpreter written in Java. If the scripting interpreter you choose is implemented in native code, like Perl, then the interaction between your application and the script code is less direct, and you must ship at least one native binary with your Java program for each operating system you care about. Since many developers choose Java because of the language's portability, I stay true to that advantage by sticking with a scripting interpreter that does not create a dependence on native binaries. Java is cross-platform, and I want my scripting interpreter to be also. In contrast, Java-based interpreters do exist for Tcl, Python, JavaScript, and BeanShell, so they can run in the same process and JVM as the rest of your Java application.
Based on those criteria, the scripting interpreter comparison list comprises:
- Jacl: The Tcl Java implementation
- Jython: The Python Java implementation
- Rhino: The JavaScript Java implementation
- BeanShell: A Java source interpreter written in Java
Now that we've filtered the scripting interpreter language list down to Tcl, Python, JavaScript, and BeanShell, that brings us to the first comparison criteria.
The first benchmark: Feasibility
For the first benchmark, feasibility, I examined the four interpreters to see if anything made them impossible to use. I wrote simple test programs in each language, ran my test cases against them, and found that each performed well. All worked reliably or proved easy to integrate with. While each interpreter seems a worthy candidate, what would make a developer choose one over another?
- Jacl: If you desire Tk constructs in your scripts to create user interface objects, look at the Swank project for Java classes that wrap Java's Swing widgets into Tk. The distribution does not include a debugger for Jacl scripts.
- Jython: Supports scripts written in the Python syntax. Instead of using curly braces or begin-end markers to indicate flow of control, as many languages do, Python uses indentation levels to show which blocks of code belong together. Is that a problem? It depends on you and your customers and whether you mind. The distribution does not include a debugger for Jython scripts.
- Rhino: Many programmers associate JavaScript with Webpage programming, but this JavaScript version doesn't need to run inside a Web browser. I found no problems while working with it. The distribution comes with a simple but useful script debugger.
- BeanShell: Java programmers will immediately feel at home with this source interpreter's behavior. BeanShell's documentation is nicely done, but don't look for a book on BeanShell programming at your bookstore -- there aren't any. And BeanShell's development team is very small, too. However, that's only a problem if the principals move on to other interests and others don't step in to fill their shoes. The distribution does not include a debugger for BeanShell scripts.
The second benchmark: Performance
For the second benchmark, performance, I examined how quickly the scripting interpreters executed simple programs. I didn't ask the interpreters to sort huge arrays or perform complex math. Instead, I stuck to basic, general tasks such as looping, comparing integers against other integers, and allocating and initializing large one- and two-dimensional arrays. It doesn't get much simpler than that, and these tasks are common enough that most commercial applications will perform them at one time or another. I also checked to see how much memory each interpreter required for instantiation and to execute a tiny script.
For consistency, I coded each test as similarly as possible in each scripting language. I ran the tests on a Toshiba Tecra 8100 laptop with a 700-MHz Pentium III processor and 256 MB of RAM. When invoking the JVM, I used the default heap size.
In the interest of offering perspective for how fast or slow these numbers are, I also coded the test cases in Java and ran them using Java 1.3.1. I also reran the Tcl scripts I wrote for the Jacl scripting interpreter inside a native Tcl interpreter. Consequently, in the tables below, you can see how the interpreters stack up against native interpreters.
What the numbers mean
Jython proves the fastest on the benchmarks by a considerable margin, with Rhino a reasonably close second. BeanShell is slower, with Jacl bringing up the rear.
Whether these performance numbers matter to you depends on the tasks you want to do with your scripting language. If you have many hundreds of thousands of iterations to perform in your scripting functions, then Jacl or BeanShell might prove intolerable. If your scripts run few repetitive functions, then the relative differences in speeds between these interpreters seem less important.
It's worth mentioning that Jython doesn't seem to have built-in direct support for declaring two-dimensional arrays, but this can be worked around by using an array-of-arrays structure.
Although it was not a performance benchmark, it did take me more time to write the scripts in Jython than for the others. No doubt my unfamiliarity with Python caused some of the trouble. If you are a proficient Java programmer but are unfamiliar with Python or Tcl, you may find it easier to get going writing scripts with JavaScript or BeanShell than you will with Jython or Jacl, since there is less new ground to cover.
The third benchmark: Integration difficulty
The integration benchmark covers two tasks. The first shows how much code instantiates the scripting language interpreter. The second task writes a script that instantiates a Java JFrame, populates it with a JTree, and sizes and displays the JFrame. Although simple, these tasks prove valuable because they measure the effort to start using the interpreter, and also how a script written for the interpreter looks when it calls Java class code.
Jacl
To integrate Jacl into your Java application, you add the Jacl jar file to your classpath at invocation, then instantiate the Jacl interpreter prior to executing a script. Here's the code to create a Jacl interpreter:
import tcl.lang.*; public class SimpleEmbedded { public static void main(String args[]) { try { Interp interp = new Interp(); } catch (Exception e) { } }
The Jacl script to create a JTree, put it in a JFrame, and size and show the JFrame, looks like this:
package require java set env(TCL_CLASSPATH) set mid [java::new javax.swing.JTree] set f [java::new javax.swing.JFrame] $f setSize 200 200 set layout [java::new java.awt.BorderLayout] $f setLayout $layout $f add $mid $f show
Jython
To integrate Jython with your Java application, add the Jython jar file to your classpath at invocation, then instantiate the interpreter prior to executing a script. The code that gets you this far is straightforward:
import org.python.util.PythonInterpreter; import org.python.core.*; public class SimpleEmbedded { public static void main(String []args) throws PyException { PythonInterpreter interp = new PythonInterpreter(); } }
The Jython script to create a JTree, put it in a JFrame, and show the JFrame is shown below. I avoided sizing the frame this time:
from pawt import swing import java, sys frame = swing.JFrame('Jython example', visible=1) tree = swing.JTree() frame.contentPane.add(tree) frame.pack()
Rhino
As with the other interpreters, you add the Rhino jar file to your classpath at invocation, then instantiate the interpreter prior to executing a script:
import org.mozilla.javascript.*; import org.mozilla.javascript.tools.ToolErrorReporter; public class SimpleEmbedded { public static void main(String args[]) { Context cx = Context.enter(); } }
The Rhino script to create a JTree, put it in a JFrame, and size and show the JFrame proves simple:
importPackage(java.awt); importPackage(Packages.javax.swing); frame = new Frame("JavaScript"); frame.setSize(new Dimension(200,200)); frame.setLayout(new BorderLayout()); t = new JTree(); frame.add(t, BorderLayout.CENTER); frame.pack(); frame.show(); | https://www.javaworld.com/article/2074156/java-app-dev/java-scripting-languages--which-is-right-for-you-.html | CC-MAIN-2018-47 | refinedweb | 1,478 | 52.39 |
Can I extract the underlying decision-rules (or 'decision paths') from a trained tree in a decision tree - as a textual list ?
something like:
"if A>0.4 then if B<0.2 then if C>0.8 then class='X'
I modified the code submitted by Zelazny7 to print some pseudocode:
def get_code(tree, feature_names): left = tree.tree_.children_left right = tree.tree_.children_right threshold = tree.tree_.threshold features = [feature_names[i] for i in tree.tree_.feature] value = tree.tree_.value def recurse(left, right, threshold, features, node): if (threshold[node] != -2): print "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {" if left[node] != -1: recurse (left, right, threshold, features,left[node]) print "} else {" if right[node] != -1: recurse (left, right, threshold, features,right[node]) print "}" else: print "return " + str(value[node]) recurse(left, right, threshold, features, 0)
if you call
get_code(dt, df.columns) on the same example you will obtain:
if ( col1 <= 0.5 ) { return [[ 1. 0.]] } else { if ( col2 <= 4.5 ) { return [[ 0. 1.]] } else { if ( col1 <= 2.5 ) { return [[ 1. 0.]] } else { return [[ 0. 1.]] } } } | https://codedump.io/share/KGoaqMJr7C5W/1/how-to-extract-the-decision-rules-from-scikit-learn-decision-tree | CC-MAIN-2016-44 | refinedweb | 177 | 64.47 |
Created on 2009-05-22 08:18 by billm, last changed 2018-07-25 18:16 by taleinat.
The code for resource_setrlimit in Modules/resource.c does not handle
reference counting properly. The following Python code segfaults for me
on Ubuntu 8.10 in Python 2.5.2 and also a custom-built 2.6.1.
--
import resource
l = [0, 0]
class MyNum:
    def __int__(self):
        l[1] = 20
        return 10
    def __del__(self):
        print 'byebye', self
l[0] = MyNum()
l[1] = MyNum()
resource.setrlimit(resource.RLIMIT_CPU, l)
--
The problem is that setrlimit gets its arguments by calling:
PyArg_ParseTuple(args, "i(OO):setrlimit",
&resource, &curobj, &maxobj)
The references curobj and maxobj are borrowed. The second argument can
be passed as a mutable list rather than a tuple, so it's possible to
update the list in the middle of setrlimit, causing maxobj to be
destroyed before setrlimit is done with it.
I've attached a patch that INCREFs both variables immediately after
parsing them to avoid this problem.
In my opinion it seems dangerous to allow format strings with the 'O'
specifier appearing in parentheses. You normally expect that objects
returned from PyArg_ParseTuple are pretty safe, but the fact that the
inner sequence may be mutable violates this assumption. Might it make
sense to ban this use case? I only found one other instance of it in the
Python source tree, inside ctypes. This one may also be a crashing
bug--I didn't look at it carefully enough.
That is a good point. IMHO we'll be fine with a warning in the docs,
and fixing our own two instances. Martin, what do you think?
IMO, any refcounting bug is a potential security risk. So I think we
should deprecate this with a warning, and eventually remove it, as billm
proposes.
It's probably debatable whether to backport the warning to 2.6 or
earlier; I think we shouldn't, as many applications are probably valid.
Actually, this can't be fixed without modifying the C API functions PyArg_ParseTuple and PyArg_ParseTupleAndKeywords, because it's possible for an object to be deallocated before PyArg_ParseTuple returns, so a Py_INCREF immediately after parsing would already be too late.
Here are my test cases:
test-resource.py - crashes in Modules/resource.c; python-bug-01.patch won't work against it.
test-ctypes.py - crashes in Modules/_ctypes/_ctypes.c.
test-functools.py - crashes in Modules/_functoolsmodule.c (py3k only).
Let me summarize the issue: the PyArg_ParseTuple format code 'O' returns a borrowed reference. However, when the 'O' code appears inside parentheses, there may not be an object to hold the reference to borrow from. This is what happens in the test-functools.py crasher: partial.__setstate__() takes a 4-tuple argument that is unpacked using a "(OOOO)" format. The test case passes, instead of a tuple, an instance that supports the sequence methods but does not hold a reference to the "items" that its []-operator returns. This is not a problem at the top level because the args argument to PyArg_ParseTuple is always a real tuple.
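
A rough sketch of that crasher pattern, for illustration only (the actual test-functools.py attachment is not reproduced here, and the class and values below are made up; on an interpreter carrying the fix this raises an exception instead of crashing):

from functools import partial

class FakeState:
    # Pretends to be the 4-item state (func, args, keywords, dict), but every
    # item it hands out is created on the fly, so nothing else keeps it alive
    # once the sequence protocol call returns.
    def __len__(self):
        return 4
    def __getitem__(self, index):
        if index == 0:
            return lambda *args: None   # func - a fresh function object
        if index == 1:
            return ("spam" * 10,)       # args - a fresh tuple
        if index == 2:
            return {}                   # keywords - a fresh dict
        if index == 3:
            return {}                   # instance dict - a fresh dict
        raise IndexError(index)

p = partial(len)
p.__setstate__(FakeState())   # the state is unpacked with "(OOOO)" in C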
I think that rather than deprecating the use of the 'O' format inside parentheses, "(..O..)" unpacking should refuse to unpack arguments other than tuples or maybe lists.
The attached patch passes the regrtest and makes test-functools.py raise an exception rather than crash. The proposed change will make functions like partial.__setstate__ require a tuple argument even though they currently accept any container. This is not an issue with __setstate__ because it should only be called with arguments produced by __reduce__ and, in the case of partial, __reduce__ produces the state as a tuple. Other functions may need to be modified if they need to continue to accept arbitrary sequences.
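
If some third-party caller really does pass a non-tuple sequence to such a function, the obvious caller-side workaround (just a suggestion here, not part of the patch) is to build a real tuple first:

state = build_state()            # hypothetical: any sequence-like object
p.__setstate__(tuple(state))     # pass a real tuple instead of the sequence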
Here is a patch which gets rid of all three PyArg_ParseTuple usages that parse nested sequences. Thanks to Evgeny for the reproducers.
Serhiy's patch looks good to me.
New changeset a4c85f9b8f58 by Serhiy Storchaka in branch '2.7':
Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple
New changeset 4bac47eb444c by Serhiy Storchaka in branch '3.2':
Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple
New changeset e0ee10f27e5f by Serhiy Storchaka in branch '3.3':
Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple
New changeset 3e3a7d825736 by Serhiy Storchaka in branch 'default':
Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple
I do not have the ability or the desire to blindly repair a test on a platform I have no access to, so I have just temporarily disabled the new test in Lib/ctypes/test/test_returnfuncptrs.py on Windows. If someone has a desire to fix it, feel free to do so.
I am not closing this issue because the committed patch only fixes existing crashes in Python itself. There are probably plenty of such bugs in third-party code. We have to deprecate this unsafe feature or reject any sequences except tuples, as Alexander proposed.
The FreeBSD 6.4 bot is failing, too. Note that the other functions
in test_returnfuncptrs.py do this in order to get strchr():
dll = CDLL(_ctypes_test.__file__)
get_strchr = dll.get_strchr
get_strchr.restype = CFUNCTYPE(c_char_p, c_char_p, c_char)
strchr = get_strchr()
There are 6 different ways to get a function (see comment around PyCFuncPtr_new() in Modules/_ctypes/_ctypes.c). The other tests just use other ways.
I have now read the ctypes code more carefully and found my mistake: the test needs to import "my_strchr", not "strchr".
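For reference, a short sketch of the corrected lookup. This is my own reconstruction, mirroring the prototype already used in test_returnfuncptrs.py; "my_strchr" is assumed to be the wrapper exported by _ctypes_test.

from ctypes import CDLL, CFUNCTYPE, c_char_p, c_char
import _ctypes_test

dll = CDLL(_ctypes_test.__file__)
# The (symbol name, dll) tuple is one of the six ways PyCFuncPtr_new()
# accepts; the exported name is "my_strchr", not the libc symbol "strchr".
strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll))
print(strchr(b"abcdef", b"b"))   # b'bcdef'
print(strchr(b"abcdef", b"x"))   # None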
The FreeBSD 6.4 and Windows test failures were fixed in changesets 8fb98fb758e8 and ec70abe8c886.
Oh, I shouldn't close this until the dangerous feature has actually been deprecated.
Accepting an arbitrary sequence when "(...)" is used in the format string was introduced in changeset 0ef1071cb7fe.
> New changeset a4c85f9b8f58 by Serhiy Storchaka in branch '2.7':
Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple
This test has a problem: although it does not test the ability to raise the CPU hard limit, it fails when the hard limit is restricted. Perhaps any exception there should simply be ignored? Could you please help me rewrite it correctly, so that I can run it successfully on gyle (ALT's builder host):
    # Issue 6083: Reference counting bug
    def test_setrusage_refcount(self):
        try:
            limits = resource.getrlimit(resource.RLIMIT_CPU)
        except AttributeError:
            self.skipTest('RLIMIT_CPU not available')
        class BadSequence:
            def __len__(self):
                return 2
            def __getitem__(self, key):
                if key in (0, 1):
                    return len(tuple(range(1000000)))
                raise IndexError
        resource.setrlimit(resource.RLIMIT_CPU, BadSequence())
The failure:
[builder@team ~]$ python /usr/lib64/python2.7/test/test_resource.py
test_args (__main__.ResourceTest) ... ok
test_fsize_enforced (__main__.ResourceTest) ... ok
test_fsize_ismax (__main__.ResourceTest) ... ok
test_fsize_toobig (__main__.ResourceTest) ... ok
test_getrusage (__main__.ResourceTest) ... ok
test_setrusage_refcount (__main__.ResourceTest) ... ERROR
======================================================================
ERROR: test_setrusage_refcount (__main__.ResourceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.7/test/test_resource.py", line 117, in test_setrusage_refcount
    resource.setrlimit(resource.RLIMIT_CPU, BadSequence())
ValueError: not allowed to raise maximum limit
----------------------------------------------------------------------
Ran 6 tests in 0.085s
FAILED (errors=1)
Traceback (most recent call last):
  File "/usr/lib64/python2.7/test/test_resource.py", line 123, in <module>
    test_main()
  File "/usr/lib64/python2.7/test/test_resource.py", line 120, in test_main
    test_support.run_unittest(ResourceTest)
  File "/usr/lib64/python2.7/test/support/__init__.py", line 1577, in run_unittest
    _run_suite(suite)
  File "/usr/lib64/python2.7/test/support/__init__.py", line 1542, in _run_suite
    raise TestFailed(err)
test.support.TestFailed: Traceback (most recent call last):
  File "/usr/lib64/python2.7/test/test_resource.py", line 117, in test_setrusage_refcount
    resource.setrlimit(resource.RLIMIT_CPU, BadSequence())
ValueError: not allowed to raise maximum limit
[builder@team ~]$
What does resource.getrlimit(resource.RLIMIT_CPU) return?
>>> import resource
>>> resource.getrlimit(resource.RLIMIT_CPU)
(7200, 7260)
The simplest way is to try passing the limit as a tuple
resource.setrlimit(resource.RLIMIT_CPU, (1000000, 1000000))
and skip the test if it fails.
Thanks! I had also thought of that simple approach. What about this:
diff --git a/Python/Lib/test/test_resource.py b/Python/Lib/test/test_resource.py
index de29d3b..bec4440 100644
--- a/Python/Lib/test/test_resource.py
+++ b/Python/Lib/test/test_resource.py
@@ -102,16 +102,21 @@ class ResourceTest(unittest.TestCase):
     # Issue 6083: Reference counting bug
     def test_setrusage_refcount(self):
+        howmany = 1000000
         try:
             limits = resource.getrlimit(resource.RLIMIT_CPU)
         except AttributeError:
             self.skipTest('RLIMIT_CPU not available')
+        try:
+            resource.setrlimit(resource.RLIMIT_CPU, (howmany, howmany))
+        except _:
+            self.skipTest('Setting RLIMIT_CPU not possible')
         class BadSequence:
             def __len__(self):
                 return 2
             def __getitem__(self, key):
                 if key in (0, 1):
-                    return len(tuple(range(1000000)))
+                    return len(tuple(range(howmany)))
                 raise IndexError
         resource.setrlimit(resource.RLIMIT_CPU, BadSequence())
What should I write instead of _?
And will the next call be effective (do anything), if we have already set the limit with the testing call?
LGTM.
> What should I write instead of _?
(ValueError, OSError)
> And will the next call be effective (do anything), if we have already set the limit with the testing call?
This doesn't matter. We are only testing that the interpreter doesn't crash while parsing the arguments.
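Putting the two answers together, the test would presumably end up looking roughly like this. This is only a sketch of the patched test in a self-contained form; whether 2.7 also needs to catch resource.error is left to whoever commits it.

import resource
import unittest

class ResourceTest(unittest.TestCase):
    # Issue 6083: Reference counting bug
    def test_setrusage_refcount(self):
        howmany = 1000000
        try:
            limits = resource.getrlimit(resource.RLIMIT_CPU)
        except AttributeError:
            self.skipTest('RLIMIT_CPU not available')
        try:
            # Probe whether the limit may be set at all (e.g. a restricted
            # hard limit on a builder host); skip instead of failing if not.
            resource.setrlimit(resource.RLIMIT_CPU, (howmany, howmany))
        except (ValueError, OSError):
            self.skipTest('Setting RLIMIT_CPU not possible')

        class BadSequence:
            def __len__(self):
                return 2
            def __getitem__(self, key):
                if key in (0, 1):
                    # Large temporary allocation: makes reuse of freed memory
                    # likely if setrlimit() keeps a stale borrowed reference.
                    return len(tuple(range(howmany)))
                raise IndexError

        # Must not crash while the bad sequence is being parsed.
        resource.setrlimit(resource.RLIMIT_CPU, BadSequence())

if __name__ == '__main__':
    unittest.main()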
Ivan, can you supply a PR or would you like someone else to do so?
From: cdp2582@hertz.njit.edu (Chris Peckham) Newsgroups: comp.protocols.tcp-ip.domains, comp.protocols.dns.bind Subject: comp.protocols.tcp-ip.domains Frequently Asked Questions (FAQ) (Part 1 of 2) Sender: cdp@chipmunk.iconnet.net Message-ID: <cptd-faq-1-918764317@njit.edu> Reply-To: cdp@intac.com (comp.protocols.tcp-ip.domains FAQ comments) Keywords: BIND,DOMAIN,DNS X-Posting-Frequency: posted during the first week of each month Date: Thu, 11 Feb 1999 20:18:01 GMT Posted-By: auto-faq 3.3 beta (Perl 5.004) Archive-name: internet/tcp-ip/domains-faq/part1 Note that this posting has been split into two parts because of its size. $Id: cptd-faq.bfnn,v 1.26 1999/02/11 20:01:58 cdp Exp cdp $ A new version of this document appears monthly. If this copy is more than a month old it may be out of date. This FAQ is edited and maintained by Chris Peckham, <cdp@intac.com>. The most recently posted version may be found for anonymous ftp from rtfm.mit.edu : /pub/usenet/news.answers/internet/tcp-ip/domains-faq It is also available in HTML from. If you can contribute any answers for items in the TODO section, please do so by sending e-mail to <cdp@intac.com> ! If you know of any items that are not included and you feel that they should be, send the relevant information to <cdp@intac.com>. =============================================================================== Index Section 1. TO DO / UPDATES Q1.1 Contributions needed Q1.2 UPDATES / Changes since last posting Modifying the Behavior of DNS with ndots Q5.24 Different DNS answers for same RR Section 7. ACKNOWLEDGEMENTS Q7.1 How is this FAQ generated ? Q7.2 What formats are available ? Q7.3 Contributors =============================================================================== Section 1. TO DO / UPDATES Q1.1 Contributions needed Q1.2 UPDATES / Changes since last posting ----------------------------------------------------------------------------- Question 1.1. Contributions needed Date: Mon Jan 18 22:57:01 EST 1999 * Additional information on the new TLDs * Expand on Q: How to serve multiple domains from one server * Q: DNS ports - need to expand/correct some issues ----------------------------------------------------------------------------- Question 1.2. UPDATES / Changes since last posting Date: Thu Feb 11 14:36:02 EST 1999 * DNS in firewalled and private networks - Updated with comment about hint file * host - Updated NT info * How do I register a domain ? - JP NIC * BIND and Y2K =============================================================================== ----------------------------------------------------------------------------- Question 2.1. What is this newsgroup ? Date: Thu Dec 1 11:08:28 EST 1994 comp.protocols.tcp-ip.domains is the usenet newsgroup for discussion on issues relating to the Domain Name System (DNS). This newsgroup is not for issues directly relating to IP routing and addressing. Issues of that nature should be directed towards comp.protocols.tcp-ip. ----------------------------------------------------------------------------- Question 2.2. 
More information Date: Fri Dec 6 00:41:03 EST 1996 You can find more information concerning DNS in the following places: * The BOG (BIND Operations Guide) - in the BIND distribution * The FAQ included with BIND 4.9.5 in doc/misc/FAQ * DNS and BIND by Albitz and Liu (an O'Reilly & Associates Nutshell handbook) * A number of RFCs (920, 974, 1032, 1034, 1101, 1123, 1178, 1183, 1348, 1535, 1536, 1537, 1591, 1706, 1712, 1713, 1912, 1918) * The DNS Resources Directory (DNSRD) * If you are having troubles relating to sendmail and DNS, you may wish to refer to the USEnet newsgroup comp.mail.sendmail and/or the FAQ for that newsgroup which may be found for anonymous ftp at rtfm.mit.edu : /pub/usenet/news.answers/mail/sendmail-faq * Information concerning some frequently asked questions relating to the Internet (i.e., what is the InterNIC, what is an RFC, what is the IETF, etc) may be found for anonymous ftp from ds.internic.net : /fyi/fyi4.txt A version may also be obtained with the URL gopher://ds.internic.net/00/fyi/fyi4.txt. * Information on performing an initial installation of BIND may be found using the DNS Resources Directory at * Three other USEnet newsgroups: * comp.protocols.dns.bind * comp.protocols.dns.ops * comp.protocols.dns.std ----------------------------------------------------------------------------- Question 2.3. What is BIND ? Date: Tue Sep 10 23:15:58 EDT 1996 From the BOG Introduction - The Berkeley Internet Name Domain (BIND) implements an Internet name server for the BSD operating system. The BIND consists of a server (or ``daemon'') and a resolver library. A name server is a network service that enables clients to name resources or objects and share this information with other objects in the network. This in effect is a distributed data base system for objects in a computer network.. ----------------------------------------------------------------------------- Question 2.4. What is the difference between BIND and DNS ? Date: Tue Sep 10 23:15:58 EDT 1996 (text provided by Andras Salamon) DNS is the Domain Name System, a set of protocols for a distributed database that was originally designed to replace /etc/hosts files. DNS is most commonly used by applications to translate domain names of hosts to IP addresses. that contain parts of the distributed database that is accessed by using the DNS protocols. In common usage, `the DNS' usually refers just to the data in the database. BIND (Berkeley Internet Name Domain) is an implementation of DNS, both server and client. Development of BIND is funded by the Internet Software Consortium and is coordinated by Paul Vixie. BIND has been ported to Windows NT and VMS, but is most often found on Unix. BIND source code is freely available and very complex; most of the development on the DNS protocols is based on this code; and most Unix vendors ship BIND-derived DNS implementations. As a result, the BIND name server is the most widely used name server on the Internet. In common usage, `BIND' usually refers to the name server that is part of the BIND distribution, and sometimes to name servers in general (whether BIND-derived or not). ----------------------------------------------------------------------------- Question 2.5. Where is the latest version of BIND located ? Date: Mon Sep 14 22:46:00 EDT 1998 This information may be found at. Presently, there are two 'production level' versions of BIND. They are versions 4 and 8. 
Version 4 is the last "traditional" BIND -- the one everybody on the Internet runs, except a few hundred sites running... Version 8 has been called "BIND-ng" (Next Generation). Many new features are found in version 8. BIND-8.1 has the following features: *. Bind version 8.1.2 may be found at the following location: * Source : /isc/bind/src/8.1.2/bind-8.1.2-src.tar.gz * Documentation : /isc/bind/src/8.1.2/bind-8.1.2-doc.tar.gz * Contributed packages : /isc/bind/src/8.1.2/bind-8.1.2-contrib.tar.gz At this time, BIND version 4.9.7 may be found for anonymous ftp from : /isc/bind/src/4.9.7/bind-4.9.7-REL.tar.gz Other sites that officially mirror the BIND distribution are * bind.fit.qut.edu.au : /pub/bind * : /pub/unix/tcpip/dns/bind * : /pub/mirrors/unix/bind * : /pub/mirrors/unix/bind * : /pub/Unix/dns/bind * : /pub/unix/dns/bind/beta You may need GNU zip, Larry Wall's patch program (if there are any patch files), and a C compiler to get BIND running from the above mentioned source. GNU zip is available for anonymous ftp from prep.ai.mit.edu : /pub/gnu/gzip-1.2.4.tar patch is available for anonymous ftp from prep.ai.mit.edu : /pub/gnu/patch-2.1.tar.gz A version of BIND for Windows NT is available for anonymous ftp from : /isc/bind/contrib/ntbind/ntdns497relbin.zip and : /isc/bind/contrib/ntbind/ntbind497rel.zip If you contact access@drcoffsite.com, he will send you information regarding a Windows NT/WIN95 bind port of 4.9.6 release. A Freeware version of Bind for NT is available at. ----------------------------------------------------------------------------- Question 2.6. How can I find the path taken between two systems/domains ? Date: Wed Jan 14 12:07:03 EST 1998 On a Unix system, use traceroute. If it is not available to you, you may obtain the source source for 'traceroute', compile it and install it on your system. One version of this program with additional functionality may be found for anonymous ftp from : /pub/network/traceroute.tar.Z Another version may be found for anonymous ftp from : /pub/net_tools/traceroute.tar NT/Windows 95 users may use the command TRACERT.EXE, which is installed with the TCP/IP protocol support. There is a Winsock utility called WS_PING by John Junod that provides ping, traceroute, and nslookup functionality. There are several shareware TCP/IP utilities that provide ping, traceroute, and DNS lookup functionality for a Macintosh: Mac TCP Watcher and IP Net Monitor are two of them. ----------------------------------------------------------------------------- Question 2.7. How do you find the hostname given the TCP-IP address ? Mon Jun 15 21:32:57 EDT 1998 For an address a.b.c.d you can always do: % nslookup > set q=ptr > d.c.b.a.in-addr.arpa. Most newer version of nslookup (since 4.8.3) will recognize an address, so you can just say: % nslookup a.b.c.d DiG will work like this also: % dig -x a.b.c.d dig is included in the bind distribution. host from the bind distribution may also be used. On a Macintosh, some shareware utilities may be used. IP Net Monitor has a very nice NS Lookup feature, producing DiG-like output; Mac TCP Watcher just has a simple name-to-address and address-to-name translator. ----------------------------------------------------------------------------- Question 2.8. How do I register a domain ? Date: Thu Feb 11 14:51:50 EST 1999 Procedures for registering a domain name depend on the top level domain (TLD) to which the desired domain name will belong, i.e. the rightmost suffix of the desired domain name. 
See the answer to "Top level domains" question in the DEFINITIONS SECTION of this FAQ. Although domain registration may be performed by a direct contact with the appropriate domain registration authorities (domain name registrars), the easiest way to do it is to talk to your Internet Service Providers. They can submit a domain registration request on your behalf, as well as to set up secondary DNS for your domain (or both DNS servers, if you need a domain name for Web hosting and/or mail delivery purposes only). In the case where the registration is done by the organization itself, it still makes the whole process much easier if the ISP is approached for secondary (see RFC 2182) servers _before_ the InterNIC is approached for registration. In any case, you will need at least two domain name servers when you register your domain. Many ISP's are willing to provide primary and/or secondary name service for their customers. If you want to register a domain name ending with .COM, .NET, .ORG, you'll want to take a look to the InterNIC: * -> Registration Services * internic.net : /templates/domain-template.txt * gopher://rs.internic.net/ Please note that the InterNIC charges a fee for domain names in the "COM", "ORG", and "NET". More information may be found from the Internic at. Note that InterNIC doesn't allocate and assign IP numbers any more. Please refer to the answer to "How do I get my address assigned from the NIC?" in this section. Registration of domain names ending with country code suffixes (ISO 3166 - .FR, .CH, .SE etc.) is being done by the national domain name registrars (NICs). If you want to obtain such a domain, please refer to the following links: Additional domain/whois information may be found: * * * * * * * whois.apnic.net * whois.nic.ad.jp (with /e at the end of query for English) * sipb.mit.edu : /pub/whois/whois-servers.list * Many times, registration of a domain name can be initiated by sending e-mail to the zone contact. You can obtain the contact in the SOA record for the country, or in a whois server: $ nslookup -type=SOA fr. origin = ns1.nic.fr mail addr = nic.nic.fr ... The mail address to contact in this case is 'nic@nic.fr' (you must substitute an '@' for the first dot in the mail addr field). An alternate method to obtain the e-mail address of the national NIC is the 'whois' server at InterNIC. You may be requested to make your request to another email address or using a certain information template/application. You may be requested to make your request to another email address or using a certain information template/application. Please remember that every TLD registrar has its own registration policies and procedures. ----------------------------------------------------------------------------- Question 2.9. How can I change the IP address of our server ? Date: Wed Jan 14 12:09:09 EST 1998 (From Mark Andrews) Before the move. * Ensure you are running a modern nameserver. BIND 4.9.6-P1 or 8.1.1 are good choices. * Inform all your secondaries that you are going to change. Have them install both the current and new addresses in their named.boot's. * Drop the ttl of the A's associated with the nameserver to something small (5 min is usually good). * Drop the refresh and retry times of the zone containing the forward records for the server. * Configure the new reverse zone before the move and make sure it is operational. * On the day of the move add the new A record(s) for the server. Don't forget to have these added to parent domains. 
You will look like you are multihomed with one interface dead. Move the machine after gracefully terminating any other services it is offering. Then, * Fixup the A's, ttl, refresh and retry counters. (If you are running an all server EDIT out all references to the old addresses in the cache files). * Inform all the secondaries the move is complete. * Inform the parents of all zones you are primary of the new NS/A pairs for the relevant zones. If you're changing the address of a server registered with the InterNIC, you also need to submit a Modify Host form to the InterNIC, so they will update the glue records on the root servers. It can take the InterNIC a few days to process this form, and the old glue records have 2-day TTL's, so this transition may be problematic. * Inform all the administrators of zones you are secondarying that the machine has moved. * For good measure update the serial no for all zones you are primary for. This will flush out old A's. ----------------------------------------------------------------------------- Question 2.10. Issues when changing your domain name Date: Sun Nov 27 23:32:41 EST 1994 If you are changing your domain name from abc.foobar.com to foobar.net, the forward zones are easy and there are a number of ways to do it. One way is the following: Have a single db file for the 2 domains, and have a single machine be the primary server for both abc.foobar.com and foobar.net. To resolve the host foo in both domains, use a single zone file which merely uses this for the host: foo IN A 1.2.3.4 Use a "@" wherever the domain would be used ie for the SOA: @ IN SOA (... Then use this pair of lines in your named.boot: primary abc.foobar.com db.foobar primary foobar.net db.foobar The reverse zones should either contain PTRs to both names, or to whichever name you believe to be canonical currently. ----------------------------------------------------------------------------- Question 2.11. How memory and CPU does DNS use ? Date: Fri Dec 6 01:07:56 EST 1996 It can use quite a bit ! The main thing that BIND needs is memory. It uses very little CPU or network bandwidth. The main considerations to keep in mind when planning are: * How many zones do you have and how large are they ? * How many clients do you expect to serve and how active are they ? As an example, here is a snapshot of memory usage from CSIRO Division of Mathematics and Statistics, Australia Named takes several days to stabilize its memory usage. Our main server stabalises at ~10Mb. It takes about 3 days to reach this size from 6 M at startup. This is under Sun OS 4.1.3U1. As another example, here is the configuration of ns.uu.net (from late 1994): ns.uu.net only does nameservice. It is running a version of BIND 4.9.3 on a Sun Classic with 96 MB of RAM, 220 MB of swap (remember that Sun OS will reserve swap for each fork, even if it is not needed) running Sun OS 4.1.3_U1. Joseph Malcolm, of Alternet, states that named generally hovers at 5-10% of the CPU, except after a reload, when it eats it all. ----------------------------------------------------------------------------- Question 2.12. 
Other things to consider when planning your servers Date: Mon Jan 2 14:24:51 EST 1995 When making the plans to set up your servers, you may want to also consider the following issues: A) Server O/S limitations/capacities (which tend to be widely divergent from vendor to vendor) B) Client resolver behavior (even more widely divergent) C) Expected query response time D) Redundancy E) Desired speed of change propagation F) Network bandwidth availability G) Number of zones/subdomain-levels desired H) Richness of data stored (redundant MX records? HINFO records?) I) Ease of administration desired J) Network topology (impacts reverse-zone volume) Assuming a best-possible case for the factors above, particularly (A), (B), (C), (F), (G) & (H), it would be possible to run a 1000-node domain using a single lowly 25 or 40 MHz 386 PC with a fairly modest amount of RAM by today's standards, e.g. 4 or 8 Meg. However, this configuration would be slow, unreliable, and would provide no functionality beyond your basic address-to-name and name-to-address mappings. Beyond that baseline case, depending on what factors listed above, you may want look at other strategies, such splitting up the DNS traffic among several machines strategically located, possibly larger ones, and/or subdividing your domain itself. There are many options, tradeoffs, and DNS architectural paradigms from which to choose. ----------------------------------------------------------------------------- Question 2.13. Reverse domains (IN-ADDR.ARPA) and their delegation Date: Mon Jun 15 23:28:47 EDT 1998 (The following section was contributed by Berislav Todorovic.) Reverse domains (subdomains of the IN-ADDR.ARPA domain) are being used by the domain name service to perform reverse name mapping - from IP addresses to host names. Reverse domains are more closely related to IP address space usage than to the "forward" domain names used. For example, a host using IP address 10.91.8.6 will have its "reverse" name: 6.8.91.10.IN-ADDR.ARPA, which must be entered in the DNS, by a PTR record: 6.8.91.10.in-addr.arpa. IN PTR myserver.mydomain.com. In spite of the fact that IP address space is not longer divided into classes (A, B, C, D, E - see the answer to "What is CIDR?" in the DEFINITIONS section), the reverse host/domain names are organized on IP address byte boundaries. Thus, the reverse host name 6.8.91.10.IN-ADDR.ARPA may belong to one of the following reverse domains, depending on the address space allocated/assigned to you and your DNS configuration: (1) 8.91.10.in-addr.arpa -> assigned one or more "C class" networks (IP >= /24) (2) 91.10.in-addr.arpa -> assigned a whole "B class" 10.91/16 (IP = /16) (3) ISP dependent -> assigned < "C class" - e.g. 10.91.8/26 (IP < /24) No matter what is your case (1, 2 or 3) - the reverse domain name must be properly delegated - registered in the IN-ADDR.ARPA zone. Otherwise, translation IP -> host name will fail, which may cause troubles when using some Internet services and accessing some public sites. To register your reverse domain, talk to your Internet service provider, to ensure proper DNS configuration, according to your network topology and address space assigned. They will point you to a further instance, if necessary. 
Generally speaking, while forward domain name registration is a matter of domain name registrars (InterNIC, national NICs), reverse domain name delegation is being done by the authorities, assigning IP address space - Internet service providers and regional Internet registries (see the answer to "How do I get my address assigned from the NIC?" in this section). Important notes: (1) If you're assigned a block or one or more "Class C" networks, you'll have to maintain a separate reverse domain zone file for each "Class C" from the block. For example, if you're assigned 10.91.8/22, you'll have to configure a separate zone file for 4 domains: 8.91.10.in-addr.arpa 9.91.10.in-addr.arpa 10.91.10.in-addr.arpa 11.91.10.in-addr.arpa and to delegate them further in the DNS (according to the advice from your ISP). (2) If you're assigned a whole "B class" (say, 10.91/16), you're in charge for the whole 91.10.IN-ADDR.ARPA zone. See the answer to "How do I subnet a Class B Address?" in the CONFIGURATION section. (3) If you're assigned only a portion of a "C class" (say, 10.91.8.0/26) see the answer to "Subnetted domain name service" question in the CONFIGURATION section. For more information on reverse domain delegations see: * * * : /apnic/docs/in-addr-request ----------------------------------------------------------------------------- Question 2.14. How do I get my address assigned from the NIC ? Date: Mon Jun 15 22:48:24 EDT 1998 IP address space assignment to end users is no longer being performed by regional Internet registries (InterNIC, ARIN, RIPE NCC, APNIC). If you need IP address space, you should make a request to your Internet service provider. If you already have address space and need more IP numbers, make a request to your ISP again and you may be given more numbers (different ISPs have different allocation requirements and procedures). If you are a smaller ISP - talk to your upstream ISP to obtain necessary numbers for your customers. If you change the ISP in the future, you MAY have to renumber your network. See RFC 2050 and RFC 2071 for more information on this issue. Currently, address space is being distributed in a hierarchical manner: ISPs assign addresses to their end customers. The regional Internet registries allocate blocks of addresses (usually sized between /19 (32 "C class") and /16 (a "B class")) to the ISPs. Finally - IANA (Internet Assigned Number Authority) allocates necessary address space (/8 ("A class") sized blocks) to the regional registries, as the need for address space arises. This hierarchical process ensures more efficient routing on the backbones (less traffic caused by routing information updates, better memory utilization in backbone routers etc.) as well as more rational address usage. If you are an ISP, planning to connect yourself to more than one ISP (i.e. becoming multi-homed) and/or expecting to have a lot of customers, you'll have to obtain ISP independent address space from a regional Internet registry. Depending on your geographical locations, you can obtain such address blocks (/19 and larger blocks) from: * RIPE NCC () -> Europe, North Africa and Middle East * ARIN () -> North and South America, Central Africa * APNIC () -> Asian and Pacific region While the regional registries do not sell address space, they do charge for their services (allocation of address space, reverse domain delegations etc.) ----------------------------------------------------------------------------- Question 2.15. Is there a block of private IP addresses I can use? 
Date: Sun May 5 23:02:49 EDT 1996 Yes there is. Please refer to RFC 1918: 1918 Address Allocation for Private Internets. Y. Rekhter, B. Moskowitz, D. Karrenberg, G. de Groot, & E. Lear. February 1996. (Format: TXT=22270 bytes) RFC 1918 documents the allocation of the following addresses for use by ``private internets'': 10.0.0.0 - 10.255.255.255 172.16.0.0 - 172.31.255.255 192.168.0.0 - 192.168.255.255 ----------------------------------------------------------------------------- Question 2.16. Does BIND cache negative answers (failed DNS lookups) ? Date: Mon Jan 2 13:55:50 EST 1995 Yes, BIND 4.9.3 and more recent versions will cache negative answers. ----------------------------------------------------------------------------- Question 2.17. What does an NS record really do ? Date: Wed Jan 14 12:28:46 EST 1998 The NS records in your zone data file pointing to the zone's name servers (as opposed to the servers of delegated subdomains) don't do much. They're essentially unused, though they are returned in the authority section of reply packets from your name servers. However, the NS records in the zone file of the parent domain are used to find the right servers to query for the zone in question. These records are more important than the records in the zone itself. However, if the parent domain server is a secondary or stub server for the child domain, it will "hoist" the NS records from the child into the parent domain. This frequently happens with reverse domains, since the ISP operates primary reverse DNS for its CIDR block and also often runs secondary DNS for many customers' reverse domains. Caching servers will often replace the NS records learned from the parent server with the authoritative list that the child server sends in its authority section. If the authoritative list is missing the secondary servers, those caching servers won't be able to look up in this domain if the primary goes down. After all of this, it is important that your NS records be correct ! ----------------------------------------------------------------------------- Question 2.18. DNS ports Date: Wed Jan 14 12:31:39 EST 1998 The following table shows what TCP/UDP ports bind before 8.x DNS uses to send and receive queries: Prot Src Dst Use udp 53 53 Queries between servers (eg, recursive queries) Replies to above tcp 53 53 Queries with long replies between servers, zone transfers Replies to above udp >1023 53 Client queries (sendmail, nslookup, etc ...) udp 53 >1023 Replies to above tcp >1023 53 Client queries with long replies tcp 53 >1023 Replies to above Note: >1023 is for non-priv ports on Un*x clients. On other client types, the limit may be more or less. BIND 8.x no longer uses port 53 as the source port for recursive queries. By defalt it uses a random port >1023, although you can configure a specific port (53 if you want). Another point to keep in mind when designing filters for DNS is that a DNS server uses port 53 both as the source and destination for its queries. So, a client queries an initial server from an unreserved port number to UDP port 53. If the server needs to query another server to get the required info, it sends a UDP query to that server with both source and destination ports set to 53. The response is then sent with the same src=53 dest=53 to the first server which then responds to the original client from port 53 to the original source port number. 
The point of all this is that putting in filters to only allow UDP between a high port and port 53 will not work correctly, you must also allow the port 53 to port 53 UDP to get through. Also, ALL versions of BIND use TCP for queries in some cases. The original query is tried using UDP. If the response is longer than the allocated buffer, the resolver will retry the query using a TCP connection. If you block access to TCP port 53 as suggested above, you may find that some things don't work. Newer version of BIND allow you to configure a list of IP addresses from which to allow zone transfers. This mechanism can be used to prevent people from outside downloading your entire namespace. ----------------------------------------------------------------------------- Question 2.19. What is the cache file Date: Fri Dec 6 01:15:22 EST 1996 From the "Name Server Operations Guide" 6.3. Cache Initialization 6.3.1. root.cache The name server needs to know the servers that are the authoritative name servers for the root domain of the network. To do this we have to prime the name server's cache with the addresses of these higher authorities. The location of this file is specified in the boot file. ... ----------------------------------------------------------------------------- Question 2.20. Obtaining the latest cache file Date: Fri Dec 6 01:15:22 EST 1996 If you have a version of dig running, you may obtain the information with the command dig @a.root-servers.net. . ns A perl script to handle some possible problems when using this method from behind a firewall and that can also be used to periodically obtain the latest cache file was posted to comp.protocols.tcp-ip.domains during early October, 1996. It was posted with the subject "Keeping db.cache current". It is available at. The latest cache file may also be obtained from the InterNIC via ftp or gopher: ; This file is made available by InterNIC registration services ; under anonymous FTP as ; file /domain/named.root ; on server ; -OR- under Gopher at RS.INTERNIC.NET ; under menu InterNIC Registration Services (NSI) ; submenu InterNIC Registration Archives ; file named.root ----------------------------------------------------------------------------- Question 2.21. Selecting a nameserver/root cache Date: Mon Aug 5 22:54:11 EDT 1996 Exactly how is the a root server selected from the root cache? Does the resolver attempt to pick the closest host or is it random or is it via sortlist-type workings? If the root server selected is not available (for whatever reason), will the the query fail instead of attempting another root server in the list ? Every recursive BIND name server (that is, one which is willing to go out and find something for you if you ask it something it doesn't know) will remember the measured round trip time to each server it sends queries to. If it has a choice of several servers for some domain (like "." for example) it will use the one whose measured RTT is lowest. Since the measured RTT of all NS RRs starts at zero (0), every one gets tried one time. Once all have responded, all RTT's will be nonzero, and the "fastest server" will get all queries henceforth, until it slows down for some reason. To promote dispersion and good record keeping,. ----------------------------------------------------------------------------- Question 2.22. Domain names and legal issues Date: Mon Jun 15 22:15:32 EDT 1998 A domain name may be someone's trademark and the use of a trademark without its owner's permission may be a trademark violation. 
This may lead to a legal dispute. RFC 1591 allows registration authorities to play a neutral role in domain name disputes, stating that: In case of a dispute between domain name registrants as to the rights to a particular name, the registration authority shall have no role or responsibility other than to provide the contact information to both parties. The InterNIC's current domain dispute policy (effective February 25, 1998) is located at: Other domain registrars have similar domain dispute policies. The following information was submitted by Carl Oppedahl <oppedahl@patents.com> : If the jealous party happens to have a trademark registration, it is quite likely that the domain name owner will lose the domain name, even if they aren't infringing the trademark. This presents a substantial risk of loss of a domain name on only 30 days' notice. Anyone who is the manager of an Internet-connected site should be aware of this risk and should plan for it. See "How do I protect myself from loss of my domain name?" at. For an example of an ISP's battle to keep its domain name, see. A compendium of information on the subject may be found at. ----------------------------------------------------------------------------- Question 2.23. Iterative and Recursive lookups Date: Wed Jul 9 22:05:32 EDT 1997 Q: What is the difference between iterative and recursive lookups ? How do you configure them and when would you specify one over the other ? A: (from an answer written by Barry Margolin) In an iterative lookup, the server tells the client "I don't know the answer, try asking <list of other servers>". In a recursive lookup, the server asks one of the other servers on your behalf, and then relays the answer back to you. Recursive servers are usually used by stub resolvers (the name lookup software on end systems). They're configured to ask a specific set of servers, and expect those servers to return an answer rather than a referral. By configuring the servers with recursion, they will cache answers so that if two clients try to look up the same thing it won't have to ask the remote server twice, thus speeding things up. Servers that aren't intended for use by stub resolvers (e.g. the root servers, authoritative servers for domains). Disabling recursion reduces the load on them. In BIND 4.x, you disable recursion with "options no-recursion" in the named.boot file. ----------------------------------------------------------------------------- Question 2.24. Dynamic DNS Mon Jan 18 20:31:58 EST 1999 Q: Bind 8 includes some support for Dynamic DNS as specified in RFC 2136. It does not currently include the authentication mechanism that is described in RFC 2137, meaning that any update requests received from allowed hosts will be honored. Could someone give me a working example of what syntax nsupdate expects ? Is it possible to write an update routine which directs it's update to a particular server, ignoring what the DNS servers are the serving NS's? A: You might check out Michael Fuhr's Net::DNS Perl module, which you can use to put together dynamic update requests. See for additional information. Michael posted a sample script to show how to use Net::DNS: #!/usr/local/bin/perl -w use Net::DNS; $res = new Net::DNS::Resolver; $res->nameservers("some-nameserver.foo.com"); $update = new Net::DNS::Update("foo.com"); $update->push("update", rr_del("old-host.foo.com")); $update->push("update", rr_add("new-host.foo.com A 10.1.2.3")); $ans = $res->send($update); print $ans ? 
$ans->header->rcode : $res->errorstring, "\n"; Additional information for Dynamic DNS updates may be found at. ----------------------------------------------------------------------------- Question 2.25. What version of bind is running on a server ? Date: Mon Mar 9 22:15:11 EST 1998 On 4.9+ servers, you may obtain the version of bind running with the following command: dig @server.to.query txt chaos version.bind. and optionally pipe that into 'grep VERSION'. Please note that this will not work on an older nameserver. ----------------------------------------------------------------------------- Question 2.26. BIND and Y2K Date: Thu Feb 11 14:58:04 EST 1999 Is the "Y2K" problem an issue for bind ? You will find the Internet Software Consortium's comment on the "Y2K" issue at. =============================================================================== ----------------------------------------------------------------------------- Question 3.1. Utilities to administer DNS zone files Date: Tue Jan 7 00:22:31 EST 1997 There are a few utilities available to ease the administration of zone files in the DNS. Two common ones are h2n and makezones. Both are perl scripts. h2n is used to convert host tables into zone data files. It is available for anonymous ftp from : /published/oreilly/nutshell/dnsbind/dns.tar.Z makezones works from a single file that looks like a forward zone file, with some additional syntax for special cases. It is included in the current BIND distribution. The newest version is always available for anonymous ftp from : /pub/software/programs/DNS/makezones bpp is a m4 macro package for pre-processing the master files bind uses to define zones. Information on this package may be found at. More information on various DNS related utilities may be found using the DNS Resources Directory. ----------------------------------------------------------------------------- Question 3.2. DIG - Domain Internet Groper Date: Thu Dec 1 11:09:11 EST 1994 The latest and greatest, official, accept-no-substitutes version of the Domain Internet Groper (DiG) is the one that comes with BIND. Get the latest kit. ----------------------------------------------------------------------------- Question 3.3. DNS packet analyzer Date: Mon Jun 15 21:42:11 EDT 1998 There is a free ethernet analyzer called Ethload available for PC's running DOS. The latest filename is ETHLD200.ZIP. It understands lots of protocols including TCP/UDP. It'll look inside there and display DNS/BOOTP/ICMP packets etc. (Ed. note: something nice for someone to add to tcpdump ;^) ). Depending on the ethernet controller it's given it'll perform slightly differently. It handles NDIS/Novell/Packet drivers. It works best with Novell's promiscuous mode drivers. The current home page for Ethload is. ----------------------------------------------------------------------------- Question 3.4. host Date: Thu Feb 11 14:43:39 EST 1999 A section from the host man page: host looks for information about Internet hosts and domain names. It gets this information from a set of intercon- nected servers that are spread across the world. The infor- mation. 'host' is compatible with both BIND 4.9 and BIND 4.8 'host' may be found in contrib/host in the BIND distribution. The latest version always available for anonymous ftp from : /pub/network/host.tar.Z It may also be found for anonymous ftp from : /networking/ip/dns/host.tar.Z Programs with some of the functionality of host for NT may be found at under "Network Tools, DNS Lookup Utilities". 
----------------------------------------------------------------------------- Question 3.5. How can I use DNS information in my program? Date: Fri Feb 10 15:25:11 EST 1995 It depends on precisely what you want to do: * Consider whether you need to write a program at all. It may well be easier to write a shell program (e.g. using awk or perl) to parse the output of dig, host or nslookup. * If all you need is names and addresses, there will probably be system routines 'gethostbyname' and 'gethostbyaddr' to provide this information. * If you need more details, then there are system routines (res_query and res_search) to assist with making and sending DNS queries. However, these do not include a routine to parse the resulting answer (although routines to assist in this task are provided). There is a separate library available that will take a DNS response and unpick it into its constituent parts, returning a C structure that can be used by the program. The source for this library is available for anonymous ftp at hpux.csc.liv.ac.uk : /hpux/Networking/Admin/resparse-1.2 ----------------------------------------------------------------------------- Question 3.6. A source of information relating to DNS Mon Jan 18 20:35:49 EST 1999 You may find utilities and tools to help you manage your zone files (including WWW front-ends) in the "tools" section of the DNS resources directory: Two that come to mind are MIT's WebDNS and the University of Utah tools. There are also a number of commercial IP management tools available. Data Communications had an article on the subject in Sept/Oct of 1996. The tools mentioned in the article and a few others may be found at the following sites: * IP Address management, * IP-Track, * NetID, * QIP, * UName-It, * dnsboss, =============================================================================== ? ----------------------------------------------------------------------------- Question 4.1. TCP/IP Host Naming Conventions Date: Mon Aug 5 22:49:46 EDT 1996 One guide that may be used when naming hosts is RFC 1178, "Choosing a Name for Your Computer", which is available via anonymous FTP from : /rfc/rfc1178.txt RFCs (Request For Comments) are specifications and guidelines for how many aspects of TCP/IP and the Internet (should) work. Most RFCs are fairly technical documents, and some have semantics that are hotly contested in the newsgroups. But a few, like RFC 1178, are actually good to read for someone who's just starting along a TCP/IP path. ----------------------------------------------------------------------------- Question 4.2. What are slaves and forwarders ? Date: Mon Jan 18 22:14:30 EST 1999 Parts of this section were contributed by Albert E. Whale. "forwarders" is a list of NS records that are _prepended_ to a list of NS records to query if the data is not available locally. This allows a rich cache of records to be built up at a centralized location. This is good for sites that have sporadic or very slow connections to the Internet. (demand dial-up, for example) It's also just a good idea for very large distributed sites to increase the chance that you don't have to go off to the Internet to get an IP address. (sometimes for addresses across the street!) If you have a "forwarders" line, you will only consult the root servers if you get no response from the forwarder. If you get a response, and it says there's no such host, you'll return that answer to the client -- you won't consult the root. 
The "forwarders" statement is found in the /etc/named.boot file which is read each time DNS is started. The command format is as follows: forwarders <IP Address #1> [<IP Address #2>, .... <IP Address #n>] The "forwarders" line specifies the IP Address(es) of DNS servers that accept queries from other servers. The "forwarders" command is used to cause a large site wide cache to be created on a master and reduce traffic over the network to other servers. It can also be used to allow DNS servers to answer Internet name queries which do not have direct access to the Internet. The forwarders command is used in conjunction with the traditional DNS configuration which requires that a NS entry be found in the cache file. The DNS server can support the forwarders command if the server is able to resolve entries that are not part of the local server's cache. "slave" modifies this to say to replace the list of NS records with the forwarders entry, instead of prepending to it. This is for firewalled environments, where the nameserver can't directly get out to the Internet at all. "slave" is meaningless (and invalid, in late-model BINDs) without "forwarders". "forwarders" is an entry in named.boot, and therefore applies only to the nameserver (not to resolvers). The "slave" command is usually found immediately following the forwarders command in the boot file. It is normally used on machines that are running DNS but do not have direct access to the Internet. By using the "forwarders" and "slave" commands the server can contact another DNS server which can answer DNS queries. The "slave" option may also be used behind a firewall where there may not be a network path available to directly contact nameservers listed in the cache. Additional information on slave servers may be found in the BOG (BIND Operations Guide) section 6.1.8 (Slave Servers). ----------------------------------------------------------------------------- Question 4.3. When is a server authoritative? Date: Mon Jan 2 13:15:13 EST 1995 In the case of BIND: * The server contains current data in files for the zone in question (Data must be current for secondaries, as defined in the SOA) * The server is told that it is authoritative for the zone, by a 'primary' or 'secondary' keyword in /etc/named.boot. * The server does an error-free load of the zone. ----------------------------------------------------------------------------- Question 4.4. My server does not consider itself authoritative ! Date: Mon Jan 2 13:15:13 EST 1995 The question was: What if I have set up a DNS where there is an SOA record for the domain, but the server still does not consider itself authoritative. (when using nslookup and set server=the correct machine.) It seems that something is not matching up somewhere. I suspect that this is because the service provider has not given us control over the IP numbers in our own domain, and so while the machine listed has an A record for an address, there is no corresponding PTR record. With the answer: That's possible too, but is unrelated to the first question. You need to be delegated a zone before outside people will start talking to your server. However, a server can still be authoritative for a zone even though it hasn't been delegated authority (it's just that only the people who use that as their server will see the data). A server may consider itself non-authoritative even though it's a primary if there is a syntax error in the zone (see the list in the previous question). 
----------------------------------------------------------------------------- Question 4.5. NS records don't configure servers as authoritative ? Date: Fri Dec 6 16:13:34 EST 1996 Nope, delegation is a separate issue from authoritativeness. You can still be authoritative, but not delegated. (you can also be delegated, but not authoritative -- that's a "lame delegation") ----------------------------------------------------------------------------- Question 4.6. underscore in host-/domainnames Date: Sat Aug 9 20:30:37 EDT 1997 The question is "Are underscores are allowed in host- or domainnames" ? RFC 1033 allows them. RFC 1035 doesn't. RFC 1123 doesn't. dnswalk complains about them. Which RFC is the final authority these days? Actually RFC 1035 deals with names of machines or names of mail domains. i.e "_" is not permitted in a hostname or on the RHS of the "@" in local@domain. Underscore is permitted where ever the domain is NOT one of these types of addresses. In general the DNS mostly contains hostnames and mail domainnames. This will change as new resource record types for authenticating DNS queries start to appear. The latest version of 'host' checks for illegal characters in A/MX record names and the NS/MX target names. After saying all of that, remember that RFC 1123 is a Required Internet Standard (per RFC 1720), and RFC 1033 isn't. Even RFC 1035 isn't a required standard. Therefore, RFC 1123 wins, no contest. From RFC 1123, Section 2 described by Dave Barr in RFC1912:). Finally, one more piece of information (From Paul Vixie):>] There has been a recent update on this subject which may be found in : /internet-drafts/draft-andrews-dns-hostnames-03.txt. An RFC Internet standards track protocol on the subject "Clarifications to the DNS Specification" may be found in RFC 2181. This updates RFC 1034, RFC 1035, and RFC 1123. ----------------------------------------------------------------------------- Question 4.7. How do I turn the "_" check off ? Date: Mon Nov 10 22:54:54 EST 1997 In the 4.9.5-REL and greater, you may turn this feature off with the option "check-names" in the named boot file. This option is documented in the named manual page. The syntax is: check-names primary warn ----------------------------------------------------------------------------- Question 4.8. What is lame delegation ? Date: Tue Mar 11 21:51:21 EST 1997 Two things are required for a lame delegation: * A nameserver X is delegated as authoritative for a zone. * Nameserver X is not performing nameservice for that zone. Try to think of a lame delegation as a long-term condition, brought about by a misconfiguration somewhere. Bryan Beecher's 1992 LISA paper on lame delegations is good to read on this. The problem really lies in misconfigured nameservers, not "lameness" brought about by transient outages. The latter is common on the Internet and hard to avoid, while the former is correctable. In order to be performing nameservice for a zone, it must have (presumed correct) data for that zone, and it must be answering authoritatively to resolver queries for that zone. (The AA bit is set in the flags section) The "classic" lame delegation case is when nameserver X is delegated as authoritative for domain Y, yet when you ask X about Y, it returns non-authoritative data. Here's an example that shows what happens most often (using dig, dnswalk, and doc to find). 
Let's say the domain bogus.com gets registered at the NIC and they have listed 2 primary name servers, both from their *upstream* provider: bogus.com IN NS ns.bogus.com bogus.com IN NS upstream.com bogus.com IN NS upstream1.com So the root servers have this info. But when the admins at bogus.com actually set up their zone files they put something like: bogus.com IN NS upstream.com bogus.com IN NS upstream1.com So your name server may have the nameserver info cached (which it may have gotten from the root). The root says "go ask ns.bogus.com" since they are authoritative This is usually from stuff being registered at the NIC (either nic.ddn.mil or rs.internic.net), and then updated later, but the folks who make the updates later never let the folks at the NIC know about it. ----------------------------------------------------------------------------- Question 4.9. How can I see if the server is "lame" ? Date: Mon Sep 14 22:09:35 EDT 1998 Go to the authoritative servers one level up, and ask them who they think is authoritative, and then go ask each one of those delegees if they think that they themselves are authoritative. If any responds "no", then you know who the lame delegation is, and who is delegating lamely to them. You can then send off a message to the administrators of the level above. The 'lamers' script from Byran Beecher really takes care of all this for you. It parses the lame delegation notices from BIND's syslog and summarizes them for you. It may be found in the contrib section of the latest BIND distribution. The latest version is included in the BIND distribution. If you want to actively check for lame delegations, you can use 'doc' and 'dnswalk'. You can check things manually with 'dig'. The InterNIC recently announced a new lame delegation that will be in effect on 01 October, 1996. Here is a summary: * After receipt/processing of a name registration template, and at random intervals thereafter, the InterNIC will perform a DNS query via UDP Port 53 on domain names for an SOA response for the name being registered. * If the query of the domain name returns a non-authoritative response from all the listed name servers, the query will be repeated four times over the next 30 days at random intervals approximately 7 days apart, with notification to all listed whois and nameserver contacts of the possible pending deletion. If at least one server answers correctly, but one or more are lame, FYI notifications will be sent to all contacts and checking will be discontinued. Additionally, e-mail notices will be provided to the contact for the name servers holding the delegation to alert them to the "lame" condition. Notifications will state explicitly the consequences of not correcting the "lame" condition and will be assigned a descriptive subject as follows: Subject: Lame Delegation Notice: DOMAIN_NAME The notification will include a timestamp for when the query was performed. * If, following 30 days, the name servers still provide no SOA response, the name will be placed in a "hold" status and the DNS information will no longer be propagated. The administrative contact will be notified by postal mail and all whois contacts will be notified by e-mail, with instructions for taking corrective action. * Following 60 days in a "hold" status, the name will be deleted and made available for re-registration. Notification of the final deletion will be sent to the name server and domain name contacts listed in the NIC database. 
----------------------------------------------------------------------------- Question 4.10. What does opt-class field in a zone file do? Date: Thu Dec 1 11:10:39 EST 1994 This field is the address class. From the BOG - ...is the address class; currently, only one class is supported: IN for internet addresses and other internet information. Limited support is included for the HS class, which is for MIT/Athena ``Hesiod'' information. ----------------------------------------------------------------------------- Question 4.11. Top level domains Date: Mon Jun 15 22:25:57 EDT 1998 RFC 1591 defines the term "Top Level Domain" (TLD) as:. The unnamed root-level domain (usually denoted as ".") is currently being maintained by the Internet Assigned Number Authority (IANA). Beside that, IANA is currently in charge for some other vital functions on the Internet today, including global distribution of address space, autonomous system numbers and all other similar numerical constants, necessary for proper TCP/IP protocol stack operation (e.g. port numbers, protocol identifiers and so on). According to the recent proposals of the US Government, better known as "Green Paper": IANA will gradually transfer its current functions to a new non-profit international organization, which won't be influenced exclusively by the US Government. This transfer will occur upon the final version of the "Green Paper" has been issued. Currently, the root zone contains five categories of top level domains: (1) World wide gTLDs - maintained by the InterNIC: - COM - Intended for commercial entities - companies, corporations etc. - NET - Intended for Internet service providers and similar entities. - ORG - Intended for other organizations, which don't fit to the above. (2) Special status gTLDs - EDU - Restricted to 4 year colleges and universities only. - INT - Intended for international treaties and infrastructural databases. (3) US restricted gTLDs - GOV - Intended for US Government offices and agencies. - MIL - Intended for the US military. (4) ISO 3166 country code TLDs (ccTLDs) - FR, CH, SE etc. (5) Reverse TLD - IN-ADDR.ARPA. Generic TLDs COM, NET, ORG and EDU are currently being maintained by the InterNIC. IANA maintains INT and IN-ADDR.ARPA. The US Government and US Army maintain their TLDs independently. The application form for the EDU, COM, NET, ORG, and GOV domains may be found for anonymous ftp from: internic.net : /templates/domain-template.txt The country code domains (ISO 3166 based - example, FR, NL, KR, US) are each organized by an administrator for that country. These administrators may further delegate the management of portions of the naming tree. These administrators are performing a public service on behalf of the Internet community. The ISO-3166 country codes may be found for anonymous ftp from: * : /in-notes/iana/assignments/country-codes * : /iso3166-codes More information about particular country code TLDs may be found at: * * * * * sipb.mit.edu : /pub/whois/whois-servers.list Contrary to the initial plans, stated in the RFC 1591, not to include more TLDs in the near future, some other forums don't share that opinion. The International Ad Hoc Committee (IAHC) ({) was was selected by the IAB, IANA, ITU, INTA, WIPO, and ISOC to study and recommend changes to the existing Domain Name System (DNS). 
The IAHC recommended the following regarding TLD's on February 4, 1997: In order to cope with the great and growing demand for Internet addresses in the generic top level domains, the generic Top Level Domain (gTLD) MoU calls for the establishment of seven new gTLDs in addition to the existing three. These will be .FIRM, .STORE, .WEB, .ARTS, .REC, .NOM and .INFO. In addition, the MoU provides for the setting up of an initial 28 new registrars around the world four from each of seven world regions. More registrars will be added as operational and administrative issues are worked out. Registrars will compete on a global basis, and users will be able shop around for the registrar which offers them the best arrangement and price. Users will also be able to change registrar at any time while retaining the same domain address, thus ensuring global portability. The full text of the recommendation may be found at:. Beside IAHC, several other forums have been created, by people willing to change the current addressing structure in the global network. Some of them may be found at: * * * You may participate in one of the discussions on iTLD proposals at * To sign up: * Old postings: ----------------------------------------------------------------------------- Question 4.12. US Domain Date: Mon Jun 15 22:25:57 EDT 1998 Information on the US domain registration services may be found at. The application form for the US domain may be found: * for anonymous ftp from internic.net : /templates/us-domain-template.txt * A WWW interface to a whois server for the US domain may be found at. This whois server may be used with the command % whois -h nii-server.isi.edu k12.ks.us OR % whois k12.ks.us@nii-server.isi.edu (depending on your version of whois). ----------------------------------------------------------------------------- Question 4.13. Classes of networks Date: Sun Feb 9 22:36:21 EST 1997 The usage of 'classes of networks' (class A, B, C) are historical and have been replaced by CIDR blocks on the Internet. That being said... An Internet Protocol (IP) address is 32 bit in length, divided into two or three parts (the network address, the subnet address (if present), and the host address. The subnet addresses are only present if the network has been divided into subnetworks. The length of the network, subnet, and host field are all variable. There are five different network classes. The leftmost bits indicate the class of the network. # of # of bits in bits in network host Class field field Internet Protocol address in binary Ranges ============================================================================ A 7 24 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH 1-127.x.x.x B 14 16 10NNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH 128-191.x.x.x C 21 8 110NNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH 192-223.x.x.x D NOTE 1 1110xxxx.xxxxxxxx.xxxxxxxx.xxxxxxxx 224-239.x.x.x E NOTE 2 11110xxx.xxxxxxxx.xxxxxxxx.xxxxxxxx 240-247.x.x.x where N represents part of the network address and H represents part of the host address. When the subnet address is defined, the needed bits are assigned from the host address space. NOTE 1: Reserved for multicast groups - RFC 1112 NOTE 2: Reserved for future use 127.0.0.1 is reserved for local loopback. ----------------------------------------------------------------------------- Question 4.14. What is CIDR ? Date: Tue Nov 5 23:47:29 EST 1996 CIDR is "Classless Inter-Domain Routing (CIDR). 
From RFC 1517: ...Classless Inter-Domain Routing (CIDR) attempts to deal with these problems by defining a mechanism to slow the growth of routing tables and reduce the need to allocate new IP network numbers. Much more information may be obtained in RFCs 1467, 1517, 1518, 1520; with primary reference 1519. Also please see the CIDR FAQ at * * * ----------------------------------------------------------------------------- Question 4.15. What is the rule for glue ? Date: Mon Sep 14 22:04:42 EDT 1998 A glue record is an A record for a name that appears on the right-hand side of a NS record. So, if you have this: sub.foobar.com. IN NS dns.sub.foobar.com. dns.sub.foobar.com. IN A 1.2.3.4 then the second record is a glue record (for the NS record above it). You need glue records when -- and only when -- you are delegating authority to a nameserver that "lives" in the domain you are delegating *and* you aren't a secondary server for that domain. In other words, in the example above, you need to add an A record for dns.sub.foobar.com since it "lives" in the domain it serves. This boot strapping information is necessary: How are you supposed to find out the IP address of the nameserver for domain FOO if the nameserver for FOO "lives" in FOO? If you have this NS record: sub.foobar.com. IN NS dns.xyz123.com. you do NOT need a glue record, and, in fact, adding one is a very bad idea. If you add one, and then the folks at xyz123.com change the address, then you will be passing out incorrect data. Also, unless you actually have a machine called something.IN-ADDR.ARPA, you will never have any glue records present in any of your "reverse" files. There is also a sort of implicit glue record that can be useful (or confusing :^) ). If the parent server (abc.foobar.com domain in example above) is a secondary server for the child, then the A record will be fetched from the child server when the zone transfer is done. The glue is still there but it's a little different, it's in the ip address in the named.boot line instead of explicitly in the data. In this case you can leave out the explicit glue A record and leave the manually configured "glue" in just the one place in the named.boot file. RFC 1537 says it quite nicely:. In response to a question on glue records, Mark Andrews stated the following: BIND's current position is somewhere between the overly restrictive position given above and the general allow all glue position that prevailed in 4.8.x. BIND's current break point is below the *parent* zone, i.e. it allows glue records from sibling zones of the zone being delegated. The following applies for glue Below child: always required Below parent: often required Elsewhere: seldom required The main reason for resticting glue is not that it in not required but that it is impossible to track down *bad* glue if you allow glue that falls into "elsewhere". Ask UUNET or any other large provider the problems that BIND 4.8.x general glue rules caused. If you want to examine a true data virus you need only look at the A records for ns.uu.net. The "below parent" and "below child" both allow you to find bad glue records. Below the parent has a bigger search space to that of below the child but is still managable. It is believed that the elsewhere cases are sufficiently rare that they can be ignored in practice and if detected can be worked around by creating be creating A records for the nameservers that fall into one of the other two cases. 
This requires resolvers to correctly lookup missing glue and requery when they have this glue. BIND does *not* do this correctly at present. ----------------------------------------------------------------------------- Question 4.16. What is a stub record/directive ? Date: Mon Nov 10 22:45:33 EST 1997 Q: What is the difference, or advantages, of using a stub record versus using an NS record and a glue record in the zone file? Cricket Liu responds, "Stub" is a directive, not a record (well, it's a directive in BIND 4; in BIND 8, it's an option to the "zone" statement). The stub directive configures your name server to do a zone transfer just as a secondary master name server would, but to use just the NS records. It's a convenient way for a parent name server to keep track of the servers for subzones. and Barry Margolin adds, Using stub records ensures that the NS records in the parent will be consistent with the NS records in the child. If you have to enter NS records manually, you run the possibility that the child will change his servers without telling you. Then you'll give out incorrect delegation information, possibly resulting in the infamous "lame delegation". The remainder of the FAQ is in the next part (Part 2 of 2). | http://www.faqs.org/faqs/internet/tcp-ip/domains-faq/part1/ | crawl-001 | refinedweb | 10,492 | 64.2 |
Ajax and JSF, Joined At Last
Join the DZone community and get the full member experience.Join For Free
In part four of this series on JavaServer Faces (JSF) 2.0 features contributed by Red Hat, or in which Red Hat developers played a significant role, co-author Dan Allen and I are going to focus on the new Ajax functionality in JSF 2.0. We’ll go over the examples and explain the inspirations. As with the rest of this series, we’ll also give you insights and explanations you may not find in other articles or blog entries.
Editor's Note: JSF 2.0 is available in AS6 M1, and will be supported by Red Hat in the JBoss Enterprise Application Platform in the near future.
Where shall we begin with JSF and Ajax? There's certainly a history there that reads like a romance novel. You find JSF and think you're in love. It adds structure and stability to the crazy web development scene. But it's missing something you crave: Ajax. All is not lost, though, because you find a rich component library on the side, like RichFaces, that satisfies your hunger. Sure, the arrangement is cushy, but you feel like you are using it for its Ajax. Then one day, JSF 2 is released, and you are swept off your feet. True happiness at last. Not only does the new JSF have Ajax built in, it wants you to see other component libraries too. You're no longer stuck with a single relationship! A fantasy come true.
Why Ajax now and not before?
Although Ajax is ubiquitous today, and you could not imagine the Web without it, it was all very new and revolutionary when the first two JSF specifications were published. Before the Ajax component libraries started to crop up, using Ajax with JSF probably meant using JSF incorrectly.
Early JSF adopters found themselves in a real fix when Ajax started to hit big. The model just didn't seem to support it. Those developers struggled trying to figure out how to make JSF work with Ajax.
Perhaps you can recall crafting JavaScript functions that would emulate a JSF form submission and then stitch the page updates back into the page manually. On postback, JSF would be utterly confused because the rendered view didn't match the component tree (and new form elements were appearing). It's one of those "I'll show you my scary JSF Ajax code if you show me yours." scenarios. Here are some examples:
invokeAction : function(targetDoc) {
this.mergeValues(targetDoc);
JSFUtils.invokeActionOnComponent(
targetDoc.getElementById("workspaceForm:addLineItem"));
},
<script type="text/javascript">
new Ajax.Autocompleter(
'workspaceForm:worksheet:#{index}:name',
'nameOptions_#{index}',
#{fn:encodeUrl("/ajax/auto_complete_values.jsf")}',
{paramName: 'q', onHide: function(element, update) {
new Effect.DropOut(update, {duration: 0.3}) }})
</script>
<h:inputText
/**
* Gather up form for sending over AJAX.
* 1. create hidden field with name of submitted button (to make JSF happy)
* 2. find all submit buttons and blank out the names (to prevent
* JSF from being confused)
* 3. use Form.serialize() to gather values and submit the Ajax request
*
* For links, might have to add to form to prevent a regular submit:
* onsubmit = function() { return false; }
* then run onclick(), then run Form.serialize()
*/
Clearly there was a need for some cleaner solutions that fit more naturally with JSF, rather than hacks that worked around the model. Those same developers were soon flooded with options for mashing Ajax and JSF together (see). Some great ideas sprung up, like Ajax4jsf/RichFaces, ICEFaces and ADF Faces/Trinidad. While there were a plethora of solutions, it seemed like everyone was going about it a different way, which led to stovepipes: pick one library and you are stuck with it as the view is likely tied to that Ajax mechanism..
The availability of so many JSF/Ajax frameworks is a good indication that JSF provides a strong foundation for Ajax. But libraries like Ajax4jsf had to wrap the request and mold an Ajax shell around it. Thankfully developers from many of the rich component libraries worked together on the JSF 2.0 expert group and the core Ajax functionality. Now developers can access basic Ajax functionality “out of the box” with JSF 2. The same core will allow component libraries to build from the foundation and create component libraries that can interoperate on the same page.
Now that Ajax is in JSF 2, how do I use it?
Enough history and introductions. This section will jump right to the point and show some examples of the new functionality.
Ajax support in JSF 2 is provided in two primary ways. The first is a JavaScript API: jsf.ajax.request(). This API provides a standard bridge for Ajax requests, and allows for a great deal of fine-grained control. The second is a new tag called <f:ajax>. With this tag you don’t need to worry about JavaScript at all Instead, you can use this tag to add Ajax behavior to your application declaratively.
JavaScript API
Imagine if you could download a library like Prototype and it just understood how to communicate with a JSF component tree. We'll, now you have one. With the new JavaScript API in JSF 2, you call the JavaScript methods directly that will trigger Ajax requests and partial lifecycle processing. This is great when you require JavaScript-level control over the Ajax requests or want to execute your own JavaScript around the request.
Here’s an example of using the JavaScript API to add an Ajax request to a standard command button:
<h:form>
<h:outputScript
<h:panelGrid columns=”1”>
<h:inputText <h:outputText
<h:commandButton
</panelGrid>
</h:form>
In this example when the command button is pressed an Ajax request will be sent, triggering partial view processing on the server. The name property of the UserBean will be updated, and only part of the component tree will be rendered in the response. When the request returns, the outtext component will be replaced in the client DOM and the client and server trees will remain in sync.
Let’s break down the important parts from the example above.
The first thing you might notice is the <h:outputScript ...> tag. This is one of a new set of tags in JSF 2 that handle resource loading (JavaScript, CSS, images, etc.). I’m not going to get into this in great detail, other than to say this tag registers the resource jsf.js and tells the JSF implementation to place the script link for it in the html <head> tag. In order to use the JavaScript libraries directly you must include this markup in your page. The jsf.js is a JavaScript file in JSF 2 that contains the standardized JavaScript APIs.
The real meat of this example is the implementation of the command button’s onclick event. This is the actual method call that will trigger Ajax events.
jsf.ajax.request(this, event, execute:'myinput',render:'outtext'});
This method takes 3 parameters; source, event, and options. For a full breakdown of this method please see the online JavaScript API documentation for the jsf.ajax namespace.
• source : The DOM element that triggered the Ajax request, typically this.
• event (optional): The DOM event that triggered this request. This can be used to retrieve additional meta-data about the event, such as whether the shift key was pressed.
• options (optional): This contains a set of name/value pairs from the following table
The two most important options are execute and render. These represent very important concepts in partial view processing support in JSF. Partial view processing is a new mechanism in JSF 2 that allows the JSF lifecycle to be run on one or more component subtrees. For the purposes of Ajax, this processing is split into two steps: execute (decode, validation, update model) and render.
When an Ajax request is sent to the server you rarely want the request to process all of the fields on the page. The execute attribute lets you tell JSF what components should be processed during the request. Only the component(s) identified will go through the validation, conversion, and update model phases. So it’s important to process any components that may impact your request. In our example above we want the myinput component processed on the server.
On the flip-side the render attribute tells JSF what part of the component tree should be rendered and replaced on the client when the response is returned. To the user this is the part of the browser that magically updates without the whole page refreshing. Like execute, you rarely want the entire page to be rendered. In our example, we only want the outtext component to rendered when the request is finished.
While the execute and render targets can be specified using component ids, there are also some new tokens that can be used as shortcuts for these attributes. These can be used for both execute and render instead of a list of component ids.
Let’s put this information to use and modify the previous example. If you are going through the trouble of calling JavaScript for your Ajax requests you might as well get the Ajax to trigger automatically when the user types. Below is the same example as above but without the button:
<h:form>
<h:outputScript
<h:panelGrid columns=”1”>
<h:inputText
<h:outputText id="outtext" value="#{userBean.name}"!/>
</panelGrid>
</h:form>
So now every time the user enters a character into the input text an Ajax request will be made to the server and when the request returns the outtext component will be rendered. Notice that we do not need to set an “execute” attribute. This is because the default value for execute is @this.
This JavaScript API does double duty. Most of the time, the API will be used by component libraries. But users can choose to use it to trigger Ajax requests of their own, as you have seen. This flexibility means you no longer have to step outside the JSF model to do custom Ajax. For example the ICEFaces library could use this to implement their Direct-to-DOM functionality. The flexibility and desire for component libraries to interoperate provided by this API is key.
I know what you are saying, this is great and all, but I really don’t want to mess with JavaScript. You are not the only one that feels that way. That's why JSF 2 offers a declarative solution out of the box in the form of <f:ajax>, which hides the use of this API.
<f:ajax>
As I hinted above, most users of standard JSF 2.0 will likely use the <f:ajax> tag so that they do not have worry about JavaScript and can still have fine-grain control over their page behavior. Anyone familiar with RichFaces and its <a4j:support> tag will see the obvious correlation in behavior. In fact, the support tag was a big driver and inspiration for the <f:ajax> tag. The RichFaces architect and JSF EG member Alex Smirnov had a large role in defining this tag and how it should work.
Below I’ll show you the same examples as above, but instead of using the JavaScript API we’ll use the <f:ajax> tag. As you will see it is usually much easier to attach Ajax functionality using this tag. First we’ll go over the example with the command button, with a twist.
<h:form>
<h:panelGrid columns=”1”>
<h:inputText <h:outputText
<h:commandButton
<f:ajax execute=”@form” render=”outtext”/>
</h:commandButton>
</panelGrid>
</h:form>
As you can see this got a lot cleaner, and the <f:ajax> tag is going to take care of all the heavy JavaScript. With this example I used the @form shortcut so that every component in the parent form will be processed on the server. As before we also specify that the outtext should be rendered back to the client. When the button is clicked the request will be fired, and the page updated as before.
In the next example we’ll take the button away and duplicate the second example from the JavaScript API section.
<h:form>
<h:panelGrid columns=”1”>
<h:inputText
<f:ajax event=”keyup” render=”outtext”/>
</h:inputText>
<h:outputText id="outtext" value="#{userBean.name}"!/>
</panelGrid>
</h:form>
So what is different here? First we don’t need to set an execute because the default is @this. We are also setting an event attribute for the tag. This is the client-side event of the parent component that the Ajax behavior will bind to and that will trigger the Ajax request. By default input components trigger <f:ajax> when the user changes the value and exits the field, but I wanted a more responsive UI so I choose to use keyup. Also note that you need to remove the on in front of any event you want to use.
The tag has more tricks up its sleeve though. You can wrap an <f:ajax> tag around multiple components and give Ajax behavior to all the children at once. All children will use their default settings and events unless you override them specifically. I’ll demonstrate this using an example from the spec:
<f:ajax>
<h:commandButton id=”button1”>
<h:commandButton id=”button2”>
<f:ajax event=”mouseover”/>
</h:commandButton>
</f:ajax>
In this example both buttons will have the default Ajax behaviors applied to them. In the case of commandButton this will be when they are clicked. However only the button2 component will also trigger an Ajax on mouseover. This can be useful when your want to apply the same behavior to a group of components without typing <f:ajax> for each tag.
So this is the quick and dirty tour of how to use the new Ajax features in JSF 2.0. The next section is going to go into more details on the partial view processing that makes this all possible.
Great, it’s here! Now how does it work?
In JSF 2, there is now an awareness in the JSF lifecycle of an Ajax request. A big part of this awareness is the PartialViewContext and tree visitor functionality. The PartialViewContext is responsible for capturing the state associated with an Ajax request. This includes what needs to be processed on the server, and what needs to be rendered back to the page. The tree visitor functionality is a new method, visitTree(), on the UIComponent class. The visitTree()method makes it possible to easily traverse a subset of the component tree within the context of the current request.
Once these were in place, the paradigm of a partial page update fit perfectly with JSF because there is a representation on the server of what is rendered on the screen. The page can be updated and JSF kept in the loop. A big part of adding this support involved leveraging the existing event handling mechanism in JSF and extending it all the way to the browser for Ajax based requests.
JSF has always had an "event-based" programming model, but earlier versions had a flaw in that it assumed that every event was to be handled by the server via a traditional POST operation. JSF 2.0 introduced a revamped approach that is fully aware that there are two sides to the equation, that not all events will be processed by the server and that a more efficient method of communication between the client and server can be used (Ajax) rather than a traditional POST. So it's smarter, leaner and even more like GUI frameworks (GUIs that invoke remote services, in fact).
Partial tree processing
When you define an execute value in the examples above you are telling JSF which segments of the server side component tree should run through the JSF request lifecycle or phases. The JSF component tree offers a strong fit for processing an Ajax request like this because components can be uniquely identified. This makes it easy to identify and process a single component or a whole sub-tree.
So when you set an execute parameter like this:
<f:ajax execute=”@form” .../>
Both the client side JavaScript API and the server side lifecycle know that only the components in the parent form should go through validation, conversion, and have there model updated. When actions are called, the values expressed in this way will be there waiting.
Partial page updates
The real magic of Ajax and the noticeable effect that end users ultimately see is that parts of their browser suddenly update without the whole page reloading. JSF 2.0 accomplishes this in a very similar way that RichFaces did in the JSF 1.2 days. Much like the execute attribute, the render attribute tells JSF which segments of the JSF component tree need to be processed (rendered) during the render response phase.
This is actually easier than it may sound because each component node of the tree can be asked to render itself including its children. This means during the render response phase the render attribute is examined. The identified components are found, and asked to render themselves and their children. These are then packaged up and sent back to the client.
Once on the client the JavaScript takes over and finds the corresponding DOM elements, conveniently named the same as their server side twin. The client side code then snips out the DOM elements including their children and replaces them with the new content from the response.
Components playing nice together
Throughout this article I’ve talked about interoperability between component sets. This in the long run will be the real crop of JSF 2.0. The common core of the JavaScript Ajax API is a large aspect of this. If all of the component libraries can agree and use a single core API we’ll go a long way towards true interoperability.
The different component library teams have already started discussions on how to implement, test, and showcase this ability. We are pushing for the different libraries to collaborate on a combined example application. Perhaps we pull a data table from RichFaces, a tree from IceFaces, a menu from ADF, and a collapsing panel from PrimeFaces. This type of application would really do several things. For one it would provide proof and an example to developers that JSF 2.0 component libraries really can work together. For another it will help us all to shake out the lumps and issues developers will run into with the specification.
This second point should not be underestimated and here is why. The items found this way represent areas of implementation that could cause component libraries to not function together, or point out areas in the spec that need more definition. What this means to the JSF developer is that the component libraries will be taking a lot of the risk out of JSF development for you. When this work is completed, and the various projects release their JSF 2.0 integrated component sets JSF developers will have one of the largest and most advanced component libraries available in web development.
Wrap Up
If you think all of this is neat, wait until you hear about the behavior framework that the <f:ajax> tag is built on. There is an underlying concept at work here that manifests itself in JSF 2.0 as a new concept of component behaviors. The interesting thing about behaviors is that they generalize the process by which functionality, both server-side and client-side , can be attached to components, and not just Ajax-related functionality. Once a component is capable of having a behavior attached to it, the door is open for having all sorts of new behaviors attached as well. The next article in this series will review the behavior framework in detail. Including how to create your own behavior, and some possible use-cases for this.
This behavior framework is an example of one of the extension points that have been baked into the JSF 2.0 specification. Its predecessor, JSF 1.2, also allowed frameworks like Facelets, Seam, and RichFaces to extend and improve functionality. Being able to support extensions is critical for the success of any technology. It allows for improvements from its users as technology and requirements change - as they always do, and JSF 2 is well positioned to support this growth.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/ajax-jsf-joined | CC-MAIN-2022-05 | refinedweb | 3,421 | 62.88 |
thanks for your help
thanks alot
Need help in completing a complex program; Thanks a lot for your help
Need help in completing a complex program; Thanks a lot for your help ... it?
Thanks a ton for your help... no output. So please help me with this too.
And also, I am using Runtime function
Java
Thanks - Java Beginners
Thanks Hi,
thanks
This is good ok this is write code but i... either same page or other page.
once again thanks
hai... the problem...
state it correctly....
thanks and regards
prashu
util packages in java
util packages in java write a java program to display present date and after 25days what will be the date?
import java.util.*;
import java.text.*;
class FindDate{
public static void main(String[] args
i need your help - Java Interview Questions
i need your help Write a java program that:
i. Allows user to enter 2 numbers from the keyboard
ii. Uses a method to compare the two numbers... Number= " + larger);
}
}
Thanks
I need your help - Java Beginners
the file name is ApplicationDelivery.java
Your program must follow proper Java...I need your help For this one I need to create delivery class...(code+" "+delNo+" "+w+" "+fees);
}
}
Thanks
Thanks - Java Beginners
Thanks Hi,
Thanks for reply I m solve this problem
Hi ragini,
Thanks for visiting roseindia.net site | http://roseindia.net/tutorialhelp/allcomments/1076 | CC-MAIN-2014-15 | refinedweb | 224 | 73.68 |
Say Sayonara to sPAL!
When I teach my JSF crash course to my software engineering students,
everyone nods, works through the lab, and I don't hear any JSF issues from them
for a couple of weeks. Then they run into sPAL.
They'll have some link, usually in a data table, that needs to send some
information about itself to a managed bean. And they can't figure out how to do
it. I can never remember it either because it is so unintuitive, so I look it
up in Core JSF.
Before JSF 1.2, you had to smuggle the value into an attribute or (gasp) a
parameter child component:
<h:commandLink
<f:attribute
</h:commandLink>
Then you had to fish it out with a series of API call that only a mother
could love (and that tied your bean to the JSF API):
public class BackingBean {
public void doSomething(ActionEvent event) {
String rowId = (String) event.getComponent().getAttributes().get("rowId");
do something with rowId
}
}
Blecch.
This was slightly improved in JSF 1.2, with the addition of the sPAL tag:
<h:commandLink
<f:setPropertyActionListener
</h:commandLink>
The tag causes a property to be set in your managed bean. You are no longer
tied to the JSF API, but you must add a field and a property setter:
public class BackingBean {
private String rowId;
public void setId(String rowId) {
this.rowId = rowId;
}
public String doSomething() {
do something with rowId
return null;
}
}
Ugh.
As of today, the days of blecch and ugh are over. You can now specify
parameters in method expressions, like this:
<h:commandLink
Here is the backing bean:
public class BackingBean {
public String doSomething(String rowId) {
do something with rowId;
return null;
}
}
This fixesissue
1149 and is available in build 56 of Glassfish v3.
Sayonara sPAL!
- Login or register to post comments
- Printer-friendly version
- cayhorstmann's blog
- 6319 reads
Say Sayonara to sPAL!
by glouny - 2010-12-16 16:14
I think there are more improvements needed in this area, it would be nice if the PyServlet dynamically loads python libs. With GlassFish v3, if done correctly we can do these things thru Sniffer where such dynamically loaded modules are taken care. For now, if you are using GlassFish v3, you would 'asadmin redeploy calendar' or undeploy and deploy. detox foot patches
by luckem - 2009-07-27 11:15Sorry, didn't work for me too... I downloaded latest nightly GF v3 build and copied new el-impl.jar to modules/web directory (old jar was overwritten). I restarted server but it didn't make any change. Still get javax.el.ELResolver exception, when trying to use #{catalog.getDetail(row)}. Hope that only I missed something.
don't work for me on tomcat
by jboz - 2009-10-06 05:38don't work for me on tomcat with last jsf-api.jar, jsf-impl.jar, el-impl.jar ! :( who can help me ?
Have you download the latest
by aldowner - 2010-02-09 02:36Have you download the latest EL library and any update available? That's strange since mine is working well. ukukuk, what's that word means?
by cayhorstmann - 2009-07-26 12:25Carol--it requires the latest version of the EL library as well. In Glassfish, that is el-impl.jar.
by caroljmcdonald - 2009-07-26 08:32this did not work for me in JSF 2.0 Moharra beta released july 10, 2009
by varan - 2009-07-23 10:03Professor Horstmann: Not to beat a dead horse, but I think that students are rarely motivated to take the time to judge the suitability of what is taught to them- most of the them are primarily interested in just getting through the course. I still think that the academic setting is the ideal environment to train engineers who are unencumbered by older and out of date concepts and techniques. Otherwise, when they go to the industry they will perpetuate things that are not necessarily best suited for the task at hand. JSF is not the best framework for developing web applications.
by pjmlp - 2009-07-23 05:22It is nice to keep reading about EE6 and JSF2, but out on the real world where I work, projects are now starting to embrace JSF1.2. It will take years before we can even think about deploying JSF2 based applications.
by cayhorstmann - 2009-07-23 04:20varan: I don't think the students felt that anything was "inflicted" on them. This was in a software engineering course where they needed to write a simple web app. They already knew Java, and they picked up the basics of JSF in a few days. With JSF2 and EE6, there just isn't that much to it. They just used the standard tags--beauty wasn't a consideration. They had to learn a few annotations, EL, a bit of JPQL. Definitely a much smaller learning curve than, say, RoR. Wicket or ZK would also have been reasonable options for the course, but we enhanced an app that was written in JSF (and which I quickly rewrote into JSF2 before the course started). If you are still thinking of JSF1, I can understand your bafflement. But give JSF2 another try. You may not love it more than Wicket, ZK, or GWT, but it is a decent option with a low learning curve and a lot of support by multiple vendors.
by scotty69 - 2009-07-23 03:05varan, I also consider the uniform, Swing-like programming paradigm superior (though I'm preferring pure GWT). But a lot of young programmers out there are familiar and comfortable with traditional web programming (PHP, Rails, ASP, ...) without ever having programmed a single Swing-app (let alone GTK, Qt etc.) ;-) Anyway, sPAL was the biggest JSF (better: JSP-EL) mistake to happen. Thank god it's history.
by varan - 2009-07-22 22:28I am very curious about this. Perhaps the Sun people have a vested interest in protecting the JSF franchise. OK, not perhaps, but for sure. But why do you, as an academic, inflict JSF on the poor saps, when better alternatives like ZK/Wicket/IT Mill etc are available which do not require the students to worry about this business of tags etc while simultaneously providing a uniform programming paradigm that anyone who has done, say, Swing before can easily understand. This inattention to what is the easiest for the students to learn is quite baffling I must say.
by cayhorstmann - 2009-07-22 14:52Oops--it's a typo. I fixed it. Thanks!
by jhook - 2009-07-22 14:15Is that a typo in requiring a nested expression? seems as if it should instead be ${backingBean.doSomething(row.id)}
by luckem - 2009-07-28 10:13You're right, I should be more precise in describing my problem. I think my fault was, that I used Glassfish v3 Prelude. After installing GF v3 Preview version and updating it to latest nightly build everything "just worked". Thanks for help and sorry about confusing. Can't wait for "Core JavaServer Faces 2.0" :)
by cayhorstmann - 2009-07-28 05:45It's not easy to help with problem reports that say "I copied this JAR into an unspecified app server." Simply install GF v3 preview, have it update itself to version >= 56, and deploy your app with the action method parameter in that version of GF. That should work, and then you can backtrack from there. | https://weblogs.java.net/blog/cayhorstmann/archive/2009/07/say_sayonara_to.html | CC-MAIN-2014-10 | refinedweb | 1,238 | 72.76 |
Once you've started creating GNU Radio applications, you will
probably stumble upon some errors sooner or later. Here is some advice
on how to tackle those problems..
This is the most obvious and simple tool anyone should use. For
every block you write, add QA code as well. Test as many options as you
can think of which might cause trouble. Individual blocks should always
pass tests.
ctest -V
make test
print.
There's some tools you may use to inspect your code on a deeper level:.
Note that this tutorial assumes some familiarity with gdb.
To try this at home, make and install the gr-howto-write-a-block
module that comes with GNU Radio. Make sure that you can access the
module from Python by calling import howto.
import howto
This is the script we want to debug:
1 """ Testing GDB, yay """
2
3 import os
4 from gnuradio import gr
5 import howto
6
7 class SquareThat(gr.top_block):
8 def __init__(self):
9 gr.top_block.__init__(self, name="square_that")
10 self.src = gr.vector_source_f((1, 2, 3, 4, )*5)
11 self.sqr = howto.square2_ff()
12 self.sink = gr.vector_sink_f()
13 self.connect(self.src, self.sqr, self.sink)
14
15 def main():
16 """ go, go, go """
17 top_block = SquareThat()
18 top_block.run()
19
20 if __name__ == "__main__":
21 print 'Blocked waiting for GDB attach (pid = %d)' % (os.getpid(),)
22 raw_input ('Press Enter to continue: ')
23 main()
First of all, it helps if you compiled the howto module with debug symbols. CMake will do that for you if you invoke it with
$ cmake .. -DCMAKE_BUILD_TYPE=Debug
Now, all you have to do is start the script. Let's assume it's saved as test_gdb.py: .
sudo gdb
echo 0 > /proc/sys/kernel/yama/ptrace_scope
/etc/sysctl.d/10-ptrace.conf<void const*, std::allocator<void const*> >&, std::vector<void*, std::allocator<void*> >&)];
square2_ff_impl::work()
Note that GNU Radio is heavily multi-threaded, which can make usage of gdb quite complicated. The gdb command info threads will give you a list of active threads.
info threads
If your block is failing during QA, you don't have to install the
module. However, ctest buffers the output, so the line showing the PID
won't work. Instead, just put in the line that waits for the input:
1 if __name__ == '__main__':
2 raw_input ('Press Enter to continue: ')
3.
gdb -p 28518.
注:How to debug GNU Radio applications(原文出处,翻译整理仅供参考!)
Report Abuse|Powered By Google Sites | http://gnuradio.microembedded.com/tutorialsdebugging | CC-MAIN-2020-10 | refinedweb | 417 | 68.47 |
In this tutorial, you’ll learn how to read and write files in C++. This is a basic and easy-to-understand lesson in which we will cover the syntax and usage of file handling in C++ in great detail. We will also touch upon some examples which will help us understand the concepts in a practical setting.
Throughout this tutorial, we will use
fstream which is the standard C++ library for reading and writing to files.
File Handling Data Types
The standard
fstream library exposes three data types to deal with file handling;
ifstream,
ofstream and
fstream. The
ifstream data type facilitates in file handling by allowing to read information from a file. The
ofstream data type facilitates file handling by allowing to create and write information on a file. Lastly,
fstream is a general data type that has the overall capabilities of handling a file i.e. creating a file, reading from a file, and writing to a file.
Note that in order to use these data types, we must include both
iostream and
fstream as headers in the source file.
#include <iostream> #include <fstream>
File Handling Steps
Generally, there are three steps involved in order to perform an end to end execution of reading and/or writing to a file. In this section, we will discuss these steps in great detail including the syntax and support examples.
Opening a File
In order to perform read/write operations, we must open the file first. This allows the program to know exactly which file we need to currently work on. If we want to open the file for reading, we can use the
ifstream data type. For writing to a file, we can use the
ofstream data type. The safest option is to open the file using
fstream data type because we can use the same variable pointer for both reading and writing purposes.
The
fstream header provides a function called
open which opens the file for further usage. Following is the basic syntax to open a file in C++.
void open(const char* yourFilename, ios::openmode fileMode)
The above function takes two arguments. The first argument is the filename which we wish to use for reading/writing. The second argument is the mode that tells the program to open the file for a specific set of operational modes.
Let’s look at an example usage to properly understand it.
ifstream myFile; myFile.open("test.txt", ios::in);
You might be wondering what exactly
ios::in is! Well, it is the same operational mode which we discussed above. By using
ios::in, we have let the program know that the file
test.txt will be used only for input operations. In other words, we can only read from
test.txt using the
myFile variable.
We can open the same file using
fstream as well.
fstream myFile; myFile.open("test.txt", ios::in | ios::out);
This time we have two operational modes separated by a
| symbol. Other than the
ios::in, we have also included the
ios::out mode. It means that we can both read and write this time using the
myFile variable.
There are a variety of different operational modes. You may read further about them here.
File Reading and Writing
After opening the file, we are now allowed to perform read and/or write operations on that file depending upon whichever use case we want. In this subsection, we will take a look at both reading from a file and writing to a file. Let’s start by writing some dummy text to a file. Once we are done with writing, we will use the same file to read its contents and print them on the console.
Writing to a File
Look at the following code snippet which writes a piece of dummy text in the file
dummy.txt.
#include <fstream> #include <iostream> using namespace std; int main () { char text[150] = "Welcome to CodeZen tutorials!\nI am Emad Bin Abid; a Full Stack Developer and a Technical Blogger.\n"; // opening the file for writing fstream writeFile; writeFile.open("dummy.txt", ios::out); // writing to the file writeFile << text << endl; // freeing up memory by closing the file writeFile.close(); }
Pretty much self-explanatory, no? If you’re worried about that
writeFile.close() statement then hold on for a bit until we discuss about closing a file in the next subsection.
Now let’s use the same file
dummy.txt for reading and retrieve the text which we just wrote in the file.
Reading from a File
The following code snippet reads the file
dummy.txt line by line and prints the result on the console window.
#include <fstream> #include <iostream> using namespace std; int main () { string text; // opening the file for reading fstream readFile; readFile.open("dummy.txt", ios::in); // reading from the file while (getline(readFile, text)) { cout << text << endl; } // freeing up memory by closing the file readFile.close(); }
The above code piece will generate the following output in the console window.
Welcome to CodeZen tutorials! I am Emad Bin Abid; a Full Stack Developer and a Technical Blogger.
Closing a File
Although, when the program finishes its execution, it frees out the memory. But generally, it is recommended to manually free out the memory to avoid memory leakages and extra overhead on the program. We can release the memory occupied for file handling by closing the file using the
close function provided by
fstream header.
myFile.close();
Wrapping Up
That’s all about file handling in C++. I hope it was an easy tutorial to help you familiarise yourself with the basics of reading/writing operations of files in C++. We cannot assess our learning unless we practically implement what we learn so I highly recommend doing practical hands-on exercises related to file handling. Try out cool things on your own and share with us what interesting stuff have you come up with.
Feel free to ask your questions in the comments section below. If you wish to learn more about React, you can check out our collection of C++ tutorials. | https://codezen.io/how-to-read-and-write-files-in-c/ | CC-MAIN-2021-43 | refinedweb | 1,016 | 65.32 |
Parrot::Coroutine - A pure PIR implementation of coroutines
.sub onload :load load_bytecode 'Parrot/Coroutine.pbc' .end ## Recursive coroutine to enumerate tree elements. Each element that is ## not a FixedPMCArray is yielded in turn. .sub enumerate_tree .param pmc coro .param pmc tree_node .param int depth :optional .param int depth_p :opt_flag if depth_p goto have_depth depth = 0 have_depth: inc depth $I0 = isa tree_node, 'FixedPMCArray' if $I0 goto recur print "[leaf " print tree_node print "]\n" coro.'yield'(tree_node) .return () recur: ## Loop through array elements, recurring on each. .local int size, i i = 0 size = tree_node again: if i >= size goto done print "[recur: depth " print depth print ' elt ' print i print "]\n" $P1 = tree_node[i] enumerate_tree(coro, $P1, depth) inc i goto again done: .return () .end .sub print_tree .param pmc tree .local int coro_class, idx .local pmc coro .const 'Sub' coro_sub = "enumerate_tree" coro = new ['Parrot'; 'Coroutine'], coro_sub ($P0 :optional, $I0 :opt_flag) = coro.'resume'(coro, tree) idx = 0 loop: unless $I0 goto done print 'print_tree: ' print idx print ' => ' print $P0 print "\n" ($P0 :optional, $I0 :opt_flag) = coro.'resume'() goto loop done: .end
This object class provides an implementation of coroutines that is written in pure PIR using continuations.
This method is normally called via the
new op:
.local pmc coro .const 'Sub' coro_sub = "enumerate_tree" coro_class = get_class ['Parrot'; 'Coroutine'] coro = coro_class.'new'('initial_sub' => coro_sub)
Given a sub, it initializes a new
Parrot::Coroutine object.
Invoke the coroutine. The first time this is called on a new coroutine, the initial sub is invoked with the passed arguments. The second and subsequent times, the args are delivered as the result of the previous
yield operation.
If the coroutine subsequently yields, the values passed to the
yield method are returned as the values from
resume.
If the coroutine returns normally (i.e. from the original sub), then those values are passed returned from the
resume method, and the coroutine is marked as dead, in which case it is an error to attempt to resume it again.
Within the coroutine,
yield returns arbitrary values back to the caller, making it look like the values came from the last
resume call.
The next time the caller decides to resume the coroutine, the arguments passed to
resume are returned as the values from
yield.
Please report any others you find to
<parrot-dev@lists.parrot.org>. -- coroutines defined.
t/library/coroutine.t -- "same fringe" test case.
src/pmc/coroutine.pmc -- the
pmclass implementation. -- definition of the coroutine API for the Lua programming language, upon which the
Parrot::Coroutine API is based. -- Scheme tutorial chapter that introduces call/cc and uses it to solve "same fringe" via coroutines.
Bob Rogers
<rogers-perl6@rgrjr.dyndns.org>
Copyright (C) 2006-2008, Parrot Foundation. This program is free software. It is subject to the same license as The Parrot Interpreter. | http://search.cpan.org/~mstrout/Rakudo-Star-2012.08_001/rakudo-star/parrot/runtime/parrot/library/Parrot/Coroutine.pir | CC-MAIN-2013-48 | refinedweb | 465 | 68.36 |
Created on 2013-05-25 15:48 by brett.cannon, last changed 2013-06-18 19:49 by brett.cannon. This issue is now closed.
Is there a reason that is_package() is not defined for NamespaceLoader? If it's just an oversight then adding it would let -m would work with namespace packages. The other abstract methods on InspectLoader can also be implemented or raise ImportError as appropriate.
Just assign to me if you are okay with seeing this happen.
I think it's just an oversight.
New changeset ebec625b13f9 by Brett Cannon in branch 'default':
Issues #18058, 18057: Make importlib._bootstrap.NamespaceLoader
Brett, can these changes be merged into 3.3 also?
No because it would mean new functionality in a bugfix release. | http://bugs.python.org/issue18058 | CC-MAIN-2016-30 | refinedweb | 124 | 69.68 |
WRITE(2) System Calls WRITE(2)
write, writev - write output
; }; Each iovec entry specifies the base address and length of an area in memory from which data should be written. write operation should be retried when possible. If the file was opened with the GNO-specific flag O_TRANS, then newline translation will occur; any line feed (0x0a) character present in buf will be converted to a carridge return (0x0d) before the write is done. See also the section on BUGS, below.
Upon successful completion the number of bytes which were written is returned. Otherwise a -1 is returned and the global variable errno is set to indicate the error.
Write and write pro- cess's file size limit or the maximum file size. EFAULT Part of iov or data to be written to the file points out- side the process's allocated address space. EINVAL The pointer associated with d was negative. ENOSPC There is no free space remaining on the file system con- taining the file. EDQUOT The user's quota of disk blocks on the file system con- taining the file has been exhausted. EIO An I/O error occurred while reading from or writing to the file system. EAGAIN The file was marked for non-blocking I/O, and no data could be written immediately. In addition, writev may return one of the following errors: EINVAL Iovcnt was less than or equal to 0, or greater than UIO_MAXIOV. EINVAL One of the iov_len values in the iov array was negative. EINVAL The sum of the iov_len values in the iov array overflowed a 32-bit integer.
If the GNO-specific flag O_TRANS was specified when the descriptor d was opened, then buf may be modified by this call; the newline transla- tion is done in-place.
fcntl(2), lseek(2), open(2), pipe(2), select(2)
Write is expected to conform to IEEE Std 1003.1-1988 (POSIX).
The writev function call appeared in 4.2BSD. A write function call appeared in Version 6 AT&T UNIX. GNO 23 January 1997 WRITE(2) | http://www.gno.org/gno/man/man2/write.2.html | CC-MAIN-2017-43 | refinedweb | 346 | 72.76 |
unicode_scripts_blocks 0.4.0
unicode_scripts_blocks: ^0.4.0 copied to clipboard
Unicode Scripts and Blocks #
A tool for checking if a code unit belongs to a Unicode Script or Block
Notice: This library is deprecated! It will not be maintained and could be deleted at any time. Please use Unicode Data instead. #
Background #
Unicode code points are divided into code blocks that generally contains characters within the same or related writing systems. For example Basic Latin or Arabic. However, the complete character set needed for a writing system is often spread across a number of code blocks. This character set is referred to as a script. If you want to know what writing system a particular character belongs to, it is generally more accurate to use the Unicode script data rather than the block data. You can read more about the difference here.
This library provides a way to test whether a given code point belongs to some particular Unicode script or block. It was generated from the Unicode 12.0 Scripts.txt and Blocks.txt data files. This library is exhaustive in that it implements every script and block in those data files.
Usage #
A simple usage example:
import 'package:unicode_scripts_blocks/unicode_scripts_blocks.dart'; main() { // Unicode Block int space = 0x0020; bool isBasicLatin = UnicodeBlock.isBasicLatin(latinChar); // true // Unicode Script int thaiChar = 'ด'.codeUnitAt(0); bool isThai = UnicodeScript.isThai(thaiChar); // true }
Contributing #
Your help and pull requests are welcome.
Here are some known issues:
- Many of the single code point checks in
UnicodeScriptsare consecutive, meaning they could be consolidated into ranges, which would probably improve performance.
- The lookup algorithm is O(n). I'm sure this could be improved with a better data structure.
- There are tests for each code block and script but there isn't 100% code coverage. It would be good to at least test characters in scripts with code points higher than U+10000 to make sure they don't get accidentally omitted in future updates.
Features to add:
- Return the Block name or Script property value as a string when given a code point.
- Having a completely automatic code generator that takes the data file and produces the code would make updates for future version of Unicode much easier.
- The most recent version of the script and block data files can be found here: Scripts, Blocks. | https://pub.dev/packages/unicode_scripts_blocks | CC-MAIN-2021-39 | refinedweb | 389 | 66.64 |
I am following this tutorial video from freeCodeCamp called Object Oriented Programming with Python. That tutorial, or object oriented programming isn’t necessarily important to this topic. But in that tutorial there is a function that I still don’t understand why it works. Hopefully someone here can explain it in a way I understand.
def is_integer(num): # check if num is an integer. # Floats ending with point-zero are considered integers. if isinstance(num, float): return num.is_integer() elif isinstance(num, int): return True else: return False
If you run the code yourself, you will see that it works as expected. Inputs of
5 and
5.0 return
True while an input of
5.25 returns
False. But I don’t understand how this works. Even after stepping-through with the debugger, I don’t understand how this can possibly work. I am confused by the line
return num.is_integer().
- How is it that
num.is_integer()does not result in an error when
is_integer()is not a method of the float class?
- How does this return True for input of
5.2but not
5.0?
From what I understand that return statement should cause an infinite loop. to me it says “is num a float? Yes, then return the output of this function with this number. Is num a float? Yes…”
I understand how numbers like
5.0 and
5.6 cause
isinstance(num, float) to return True. I have no idea how then
return num.is_integer() somehow differentiates between
5.0 and
5.6. | https://forum.freecodecamp.org/t/i-dont-understand-this-integer-test-function/486986 | CC-MAIN-2022-05 | refinedweb | 256 | 71.51 |
Assuming we want to do print greetings as we did with
Building with two files, but in multiple languages, we can use a
common
main.c and use language specific file instead of file
greeting.c.
main.c and
greeting.h remains the same:
But now we have language specific greeting files:
#include <stdio.h> #include "greeting.h" void greeting() { printf ("Hello World!\n"); }
#include <stdio.h> #include "greeting.h" void greeting() { printf ("Bonjour le monde!\n"); }
To compile only English language specific file with GCC, the commands would be.
gcc main.c greeting_en.c -Wall -o greeting_en
As we did earlier, to execute the binary generated from the source file, the command would be:
./greeting_en
If everything went fine, the output would be:
Hello World!
To compile them at the same time for all languages with GCC, the commands would be.
gcc main.c greeting_en.c -Wall -o greeting_en gcc main.c greeting_fr.c -Wall -o greeting_fr gcc main.c greeting_es.c -Wall -o greeting_es
We can as well run all the generated binaries:
./greeting_en ./greeting_fr ./greeting_es
If everything went fine, the output would be:
Hello World! Bonjour le monde! Hola Mundo!
Here, we used a common files
main.c,
greeting.h and
isolated language specific stuff in other files
greeting_en.c,
greeting_fr.c and
greeting_es.c
To compile on windows with Microsoft Visual C Compiler cl.exe, the command would be
cl.exe main.c greeting_en.c /W4 /nologo /Fegreeting_en.exe cl.exe main.c greeting_fr.c /W4 /nologo /Fegreeting_fr.exe cl.exe main.c greeting_es.c /W4 /nologo /Fegreeting_es.exe
As seen in previous chapters, the compilation output would be very similar:
main.c greeting_en.c Generating Code... main.c greeting_fr.c Generating Code... main.c greeting_es.c Generating Code...
To run all the generated binaries, we can run them as follows:
call greeting_en.exe call greeting_fr.exe call greeting_es.exe
The output would be same as that we would have got when compiled with gcc:
Hello World! Bonjour le monde! Hola Mundo! | https://books.dehlia.in/c-cpp-sw-build-systems/basic-compilation/0030_languages/ | CC-MAIN-2021-31 | refinedweb | 336 | 71.51 |
16 July 2010 19:03 [Source: ICIS news]
TORONTO (ICIS news)--Germany’s train drivers union GDL will seek to avoid a strike, it said on Friday after a first round of wage bargaining with rail carrier Deutsche Bahn ended without results.
A 2007 train drivers strike disrupted chemical railcar shipments in ?xml:namespace>
The country’s 20,000 train drivers will be in a legal strike position after 31 July.
GDL head Claus Weselsky said he expected protracted talks, but added the union was not “bent on a strike”.
At stake was a 5% wage increase, according to the union.
Also, GDL was seeking a framework deal that covered all train drivers within Deutsche Bahn, including affiliates and subsidiaries, it said.
The union said that the railway carrier had already formed 17 separate limited liability affiliates and planned to form an additional 13. Said Weselsky: “Any deal must be applied throughout [Deutsche Bahn] group, without exception.”
The negotiations are expected to resume on 29 July.
In related news, German employers, including chemical employers, have voiced concern that the country could soon face more industrial disputes after the federal labour court said it would no longer abide by a principle - “Tarifeinheit” - that provides that there is only one collective agreement per plant.
The court’s ruling opened the door to separate collective agreements for various groups of workers - such as train drivers, air traffic controllers or doctors - within one plant or facility, they said. This raised fears that collective bargaining could become “balkanised”,. | http://www.icis.com/Articles/2010/07/16/9377285/germany-seeks-to-avoid-repeat-of-2007-rail-strike.html | CC-MAIN-2013-20 | refinedweb | 253 | 50.67 |
Ah, you found the line. I've been poking at it for a couple of days, and just found that line too, but from a different direction... It *is* a code bug, not a compiler bug. It's tricky though: numset_find_empty_cell realloc's numbers taken... *which can cause it to move*. So the original assignment would then be writing into the old memory address, if it looks up ns->numbers_taken before the call and then makes the call and then does the assignment... what made me see this was printing out the numset and seeing it go from 0 1 2 3 4 5 6 7 8 9 to 0 1 2 3 4 5 6 7 8 9 65529, when the assignment clearly *should* have been happenning. So, this might be *masked* on some platforms by compiler differences (though I'd have to dig into the ANSI spec and reread the stuff on sequence points to convince me the compiler's allowed to do it both ways - I *suspect* that any compiler that does the lookup after the call (such that it doesn't show the problem) actually has a bug.) This also tells me that I should have *started* this effort by firing up Electric Fence - it would have caught this, and caused it to segfault at the point of the assignment to old memory. (But since ratpoison wasn't crashing, I didn't suspect memory issues.) So, does that convince you to commit the change in that form? ps. Here are some helper functions I found useful; everywhere that had a "ns=%p", ns became a "ns=%ps(%s)", ns, nsname(ns), and debug_numset got called in a bunch of places. Now that you've nailed the problem you probably don't need them, though... static char * nsname (struct numset *ns) { if (ns == rp_window_numset) return "rp_window_numset"; if (ns == rp_frame_numset) return "rp_frame_numset"; /* various: rp_screen.frames_numset */ return "???"; } void debug_numset (struct numset *ns) { #ifdef DEBUG int i; printf("DN: ns=%p(%s) taken=%d max=%d\n", ns, nsname(ns), ns->num_taken, ns->max_taken); for (i = 0; i < ns->max_taken; i++) { if (i < ns->num_taken) printf(" %d", ns->numbers_taken[i]); else printf("(%d)", ns->numbers_taken[i]); } printf("[nt=%p]\n", ns->numbers_taken); #endif } On 11/25/05, Joshua Neuheisel <address@hidden> wrote: > On 11/23/05, address@hidden <address@hidden> wrote: > > > > "ratpoison -c windows" shows that I have 12 windows, two of which are > > numbered "10". If I select window 9 and go "next", I get one of > > them; if I select 0 and go "prev", I get the other. One is an xterm, > > the other is a dclock; they were all started sequentially using > > xtoolwait (thus rapidly, but in a well defined order.) > > > > First noticed it with 1.3.0-7 under ubuntu; that doesn't mean it > > wasn't happening under debian, but I hadn't *noticed* it there. Built > > from CVS a few days ago, with the latest ChangeLog entry being > > 2005-11-05, and it still happens the same way. > > > > Alright, I think I have some new info here. I was able to reliably > reproduce the problem on MacOS X Tiger with gcc version as such: > powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 20041026 (Apple Computer, Inc. > build 4061) > > To fix it, I changed line 79 in src/number.c from > > ns->numbers_taken[numset_find_empty_cell(ns)] = n; > > to > > int ec; > ec = numset_find_empty_cell(ns); > ns->numbers_taken[ec] = n; > > and the problem went away. When I stepped through the code, I saw that the > return value from numset_find_empty_cell was being discarded, and the > assigment was being ignored. Obviously, this is a compiler error. 
I was > not able to reproduce it on my i686/linux machine running gcc version: > gcc (GCC) 3.4.5 20051026 (prerelease). > I was also not able to reproduce it on my MacOS X with the gcc-3.3 version: > gcc-3.3 (GCC) 3.3 20030304 (Apple Computer, Inc. build 1809). > > What gcc version did the original poster use? Or was it another compiler? > > Joshua > > -- _Mark_ <address@hidden> <address@hidden> | http://lists.gnu.org/archive/html/ratpoison-devel/2005-11/msg00006.html | CC-MAIN-2015-48 | refinedweb | 679 | 71.85 |
eventstream-sqlite 1.0.0-alpha
simple persisted events
Eventstream is an embedded events storage component based on SQLite. It allows you to store events and query them based on time, making it trivial in storing data which is time based. Events are represented as tags with optional metadata attached, which allows you to model most data effectively.
from datetime import datetime, timedelta from eventstream import connect, Event t = connect(uri=':memory:') t.emit(Event('status.update'), 1) today = datetime.utcnow() yesterday = today - timedelta(days=1) t.find.since(yesterday)\ .to(today)\ .tagged('status.*')
Like Graphlite, Eventstream is thread safe. All emits are atomic and operate under a lock (if you want to store many events use the transaction-based emit_many instead). Eventstream also emphasizes on API usability and overall ease of use. You can query by time, tag, or node, but not the metadata./eventstream
- Downloads (All Versions):
- 0 downloads in the last day
- 0 downloads in the last week
- 0 downloads in the last month
- Author: Eugene Eeo
- Package Index Owner: eugene-eeo
- DOAP record: eventstream-sqlite-1.0.0-alpha.xml | https://pypi.python.org/pypi/eventstream-sqlite | CC-MAIN-2015-14 | refinedweb | 184 | 65.42 |
If you have struggled to find a javascript templating engine that is fast, familiar and extensible, then perhaps JSRazor is for you.
JSRazor converts an HTML template into a Javascript object that you can use to generate HTML on the client/browser, based on a model object which is usually passed back from your server as JSON.
As we grow JSRazor, we are finding it is a great way to define and package client side javascript controls, we can package all the templates and support code into a single minimized file, easily.
CubeBuild.JSRazor is available at. Issues can also be submitted there.
The repository contains 60 and growing unit tests.
CubeBuild.JSRazor is also available for .NET as a NuGET package CubeBuild.JSRazor
CubeBuild.JSRazor
Documentation can be found on our wiki at
When you move to a rich web application, using AJAX, from pages generated on the server, you'll miss the simplicity of a strong and easy templating language You can generate the HTML from the server for a fragment of the page, but that tends to bloat requests to a size that is much larger than you need.
You may also have tried a number of javascript templating engines. We have, and we found several problems common to "most" of them:
JSRazor is a custom syntax parser that understands HTML with Razor-like markup and generates javascript objects. The resulting javascript object contains a render method that takes a "Model" object and produces HTML output.
It is important to us that JSRazor also support the following:
In order to learn and use JSRazor, I have created a simple example, of around 60 lines of JSRazor template and Javascript that is able to render a calendar control to the browser, to show events for a month, it looks like this:
The use case for this calendar is for a client-side calendar that is updated based on an object passed back from a JSON call to any server, from the browser client, so you can skip through months quickly.
The JSON returned from the server will include the month and year to display, then a list of events with dates:
var Model = {" }
]
}
While you could build HTML on the fly in the browser, or build up objects using a javascript library like jQuery, both become difficult to maintain very quickly.
As an alternative we can do it with a JSRazor template in just 36 lines of Razor-like syntax and a couple of support functions that work out the weekdays:
@* Calendar.jshtml *@
<h1>@(["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"][Model.Month]), @Model.Year</h1>
<table>
<tr>
<th>Sunday</th>
<th>Monday</th>
<th>Tuesday</th>
<th>Wednesday</th>
<th>Thursday</th>
<th>Friday</th>
<th>Saturday</th>
</tr>
@for (var week = 0; week <= 4; week++)
{
<tr>
@for (var day = 0; day <= 6; day++)
{
<td>
@this.DayNumber(Model.Month, Model.Year, week, day)
@this.ShowEvents(Model.Month, Model.Year, week, day, Model.Events)
</td>
}
</tr>
}
</table>
@helper ShowEvents(month, year, weekNumber, dayNumber, events)
{
/* Show events for the specific day, for events valid on that day */
var day = this.Today(month, year, weekNumber, dayNumber);
for (var i = 0; i < events.length; i++)
{
var event = events[i];
var thisDate = new Date(event.Date);
if (thisDate.getDate() == day && thisDate.getMonth() == month && thisDate.getFullYear() == year) {
<div>@event.Event</div>
}
}
}
JSRazor generates a single javascript object that contains two methods:
You may notice calls to this.Today, and this.DayNumber, two functions that are defined in a separate file and incorporated into the template as:
this.Today
this.DayNumber
/* Calendar.js */
Calendar.prototype.DayNumber = function (month, year, weekNumber, dayNumber) {
/* Return appropriate day number, or nothing for a day that is not valid in the month */
var actualDayNumber = this.Today(month, year, weekNumber, dayNumber);
if (actualDayNumber <= 0) { return ""; }
var availableDays = new Date(year, month + 1, 0).getDate();
if (actualDayNumber > availableDays) { return ""; }
return actualDayNumber;
}
Calendar.prototype.Today = function (month, year, weekNumber, dayNumber) {
/* Find the day number based on the cell references for a given month and year */
var firstDate = new Date(year, month, 1);
var dateOffset = firstDate.getDay();
return (weekNumber * 7) + dayNumber - dateOffset + 1;
}
As you would expect, you can use the JSRazor syntax to call:
For generation of the final javascript that is downloaded to the browser, you use CubeBuild.JSRazor.Command.
CubeBuild.JSRazor.Command Calendar.jshtml Calendar.js > Calendar_template.js
takes a list of files or directories (which it probes for all .jshtml and .js files) and it writes transformed templates and all javascript to standard output, so Calendar_template.js contains all the found source files.
CubeBuild.JSRazor.Web.MVC contains helpers for ASP.NET MVC that allow you to, by convention, locate the jshtml and javascript, then cache the generated template at runtime.
Create an action capable of returning the consolidated javascript similar to:
public ActionResult JSTmpl(string viewController, string viewAction)
{
return this.JSTemplate(viewController, viewAction);
}
By convention, this will look for javascript templates in a folder with the same name as the view, or in the shared folder, and stream them back as javascript, which is minimized using WebGrease for Release builds. The general structure for the calendar example is:
Views
Calendar - action view folder
Index.cshtml - razor server side template
Index - convention folder for JSRazor content
Calendar.js - support code for jsrazor template
Calendar.jshtml - jsrazor template
You can then reference the javascript via your special JSTmpl action using a <script/> tag, and you will get back the generated template and the javascript support code.
Any number of templates and support javascript files can be included in the folder, allowing you to separately define all the templates and code you need for the page in miltiple design time files.
The generated template code is used by:
Using JQuery, you might do the following:
$(function () {
var t = new Calendar(); // Create an instance of the template object
$("#calendar").html(t.render({" }
]
}));
});
In this example the model object is defined in code, it could just as easily be the result of an AJAX call to a server, which returns a JSON object.
If you are using JQuery for AJAX calls, the following jquery template plugin might help:
$.fn.postTemplate = function (url, data, template) {
$.each(this, function (nodeix, node) {
var target = node;
$.ajax({
url: url,
data: data,
type: 'POST',
cache: false,
dataType: 'json',
success: function (data) {
if (data.Success) {
var t = new template();
$(target).html(t.render(data));
if (t.OnRender) {
t.OnRender($(target));
}
}
else {
$(target).html(data.Message);
}
}
});
});
};
With this helper, you can post a call to the server, and place the returned model into the page using the template in one call:
$("#calendar").postTemplate("/calendar/get", { month : 0, year: 2013 }, Calendar);
This will get the JSON from the server, render it into the template, then call an OnRender function on the template to do any JQuery setup, passing in the JQuery container object. An example of an OnRender definition in a Javascript file separate to the template is:
Calendar.prototype.OnRender = function(elem) { $(elem).addClass("calendar"); });
Given a template is an object, we can refresh the content any time based on a new dataset, and with a simple JQuery add-in called bindTemplate you may bind changes on fields, or at the click of something, to a template.
As an example (found in the example project) consider a page that displays a dropdown of people, and allows you to select a person and view their details. The back end might look like this:
Person[] PersonList = {
new Person() {
ID = 1,
Name = new Name() { First = "Adrian", Last = "Holland"},
Address = new Address() {
Street = "Jackson Crt",
City = "Strathfieldsaye",
State = "Victoria",
Country = "Australia",
Postcode = "3553"
}
},
new Person(){
ID = 2,
Name = new Name() { First = "Sam", Last = "Taylor"},
Address = new Address() {
Street = "Pines Rd",
City = "Robe",
State = "South Australia",
Country = "Australia",
Postcode = "8343"
}
},
new Person() {
ID = 3,
Name = new Name() { First = "Greg", Last = "Jones"},
Address = new Address() {
Street = "Yates Blvd",
City = "Caroline Springs",
State = "Victoria",
Country = "Australia",
Postcode = "3345"
}
}
};
/// <summary>
/// Return a list of people and their ID's
/// </summary>
/// <returns></returns>
public ActionResult List()
{
return Json(new { Success = true, Items = PersonList.Select(p => new { p.ID, Name = p.Name }) });
}
/// <summary>
/// Return a single model object
/// </summary>
/// <param name="id"></param>
/// <returns></returns>
public ActionResult Object(int id)
{
return Json(new { Success = true, Person = PersonList.Where(p => p.ID == id).FirstOrDefault() });
}
public class Person
{
public int ID { get; set; }
public Name Name { get; set; }
public Address Address { get; set; }
}
public class Name
{
public string First { get; set; }
public string Last { get; set; }
}
public class Address
{
public string Street { get; set; }
public string City { get; set; }
public string State { get; set; }
public string Postcode { get; set; }
public string Country { get; set; }
}
That gives us some data to work with. Firstly we need a page to display the selector and the Person details, so here is a view Index.cshtml:
@{
ViewBag.Title = "Index";
}
@section head {
<script src=""></script>
@Html.IncludeJSTemplates()
}
<h2>Binding Example</h2>
<div id="personSelector"></div>
<fieldset>
<legend>Person</legend>
<div id="person"></div>
</fieldset>
<button id="adrian">Adrian</button>
My intent for these placeholders is:
The first point of interest is the dropdown selector, we could render that from the Razor page, but as we are using JSRazor, ill create a template that shows the selector on the client side:
@* Selector.jshtml *@
<select name="personID">
@foreach (var person in Model.Items) {
<option value="@person.ID">@person.Name.Last, @person.Name.First</option>
}
Once a person is selected, and indeed when the page is first displayed, we need to show the person as well, so here is a person template to display the person:
@* Person.jshtml *@
<div class="person">
<div class="field">
<span class="label">ID</span>
<span class="value">@Model.Person.ID</span>
</div>
<div class="field">
<span class="label">Name</span>
<span class="value">@Model.Person.Name.Last, @Model.Person.Name.First</span>
</div>
<div class="field">
<span class="label">Address</span>
<span class="value">@Model.Person.Address.Street</span>
<span class="value">@Model.Person.Address.City</span>
<span class="value">@Model.Person.Address.State, @Model.Person.Address.Postcode</span>
<span class="value">@Model.Person.Address.Country</span>
</div>
</div>
The only thing remaining is the javascript to wire them together, basically two lines (and one more I will describe later):
$(function () {
$("#personSelector").postTemplate("/binding/list", {}, Selector);
$("#adrian").bindTemplate("/binding/object", { id: 1 }, Person, "#person", "click");
});
Selector.prototype.OnRender = function (obj) {
obj.bindTemplate("/binding/object", { id: new Binding.Value("[name=personID]") }, Person, "#person");
}
The initial call to postTemplate will, when the page is loaded, post back to the server and get a list of people, and render the list using the Selector template.
OnRender for the Selector template needs then to wire up the binding, we call bindTemplate to do that, following is a description of each parameter:
So, to summarize, the actions of the binding will be:
If the field defined by the binding is changed, step 2 and 3 are called again, pulling the new Person down and displaying them.
bindTemplate supports two use cases:
The second call to bindTemplate is an example of binding to a click event rather than change, It says, when "#adrian" is clicked, post to "/binding/object" and get person with {id = 1} then display it using the Person template in the location "#person".
The following binding types are defined (with the JQuery equivalent code that derives the bound value):
The bindings field can accept any number of arguments, as an example you might use town and country in a service that returns postcodes, in this case your bindings might be:
{town: new Binding.Value("[name=town]"), country: new Binding.Value("[name=country"])}
This would result in the values of both fields being passed to the server, and a change in either field causing the update.
The full source for bindTemplate can be found in the example project attached, and in our repository.
As we have been using JSRazor, we are finding we can more easily push capability to the browser, so we are slowly moving from having single templates, to multiple templates for pages, which allows us to create a richer client experience.
We also find that using JSRazor to include the javascript on the page is more reliable than managing a number of links in the page, we just place all the javascript for the action into the JSRazor content folder, and it all gets to the page automatically, and is minimized automatically for us.
<a onclick="this.ShowEvents()">Events</a>
Template JS spans lines so FireBug debugging works as expected
This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)
@* Controls.jshtml *@
@helper TextBox(name, value) { <input type="text" id="@name" name="@name" value="@value"/> }
@helper Submit(text) {
text = text ? text : "Save";
<input type="submit" id="submit" value="@text"/>
}
@* Another.jshtml *@
<form>
@render Controls.TextBox("country", "Russia")
@render Controls.Submit()
</form>
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/script/Articles/View.aspx?aid=521404 | CC-MAIN-2017-17 | refinedweb | 2,179 | 51.48 |
Python 3 introduced type annotation syntax. PEP 484 introduced a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.
Python is a dynamic language and follows gradual typing. When a static type checker runs a python code, the type checker considers code without type hints as
Any.
def print_name(name): print(name) planet: str = "earth"
In the above example, the
name argument type hint will be
Any since the type hint is missing while the type hint for the
planet variable is a string.
Gradual typing is still an emerging topic in Python, and there is a gap in resources to educate the Python developers about the utility and Python typing concepts.
On the surface, it looks easy to annotate the code. But the dynamic nature makes a certain part of the code harder to annotate. I have been using type-hints over the past three years and find it hard sometimes. A lot of new developers also face the same problem.
Koans
To make the learning easier, simpler, I have created a GitHub repository.
The repository contains the standalone python programs. The programs contain partial or no type hints. By adding new type hints and fixing the existing type hints, the learner will understand how type-checkers evaluate the types and what’s a difference in types at run-time.
Here is a simple demo to use of the command line.
Steps
- Clone the repository.
git clone git@github.com:kracekumar/python-typing-koans.git
- Install all the dependencies(advised to use Python Poetry, virtual env should also work.).
poetry install. It requires Python 3.9.4
- List all the koans using the command line program.
poetry run python cli.py list
- Pick up a file to learn.
- Run the file with the command line program.
poetry run python cli.py one koans/py/100-easy-variable-wrong-type.py
- Repeat the process till there are no type errors.
One central missing part is how the learner will know to fix the type errors?
The comments in the files carry the links to relevant concepts, which aids the learner in understanding the ideas to use.
Screenshots of a few koans
Topics
Python topics covered are
- Primitive Types
- dictionaries - dict/typedict
- Callables
- Design pattern - factory pattern, the builder pattern
- Decorators
- Type Alias
- Protocol, classes, objects
20 Python programs(koans) help the learner to understand gradual typing. The filenames indicate the learning level like
easy, medium, and hard.
The repository also contains Django and Django Rest Framework examples.
The Django koans teach the annotating
views, models, model methods, queryset methods like filter, all, annotate, aggregate, Q object etc..
The DRF koans teach how to annotate
DRF serializers and DRF Views.
If you face any issues while solving the koans, please open an issue in the Github repository; I’d happy to answer and explain the relevant concepts.
Links
- PEP 484 -
- Github Repository -
- Python typing documentation -
See also
- Profiling Django App
- Type Check Your Django Application
- Model Field - Django ORM Working - Part 2
- Structure - Django ORM Working - Part 1
- jut - render jupyter notebook in the terminal
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. | https://kracekumar.com/post/python-typing-koans/ | CC-MAIN-2022-40 | refinedweb | 539 | 57.16 |
4. Example - Tensor Renormalization Group (TRG)¶
TRG[1, 2, 3] is an tensor network algorithm for computing partition functions of 2D classical spin models, using real space renormalization. It is simple but quite powerful, and the basis for many more advanced algorithms.
In its simplest form it only requires a manipulating a few tensors, so does not require any of the
quimb functionality dealing with large and complex geometry networks. However, implementing it here does demonstrate:
the basic low-level tensor operations of contracting, decomposing and relabelling indices etc.
the more advanced feature of treating a small tensor network transparently as a ‘lazy’ tensor to enable more efficient iterative operations e.g.
4.1. Define the algorithm¶
The following function runs the entire algorithm and is pretty extensively commented:
[1]:
import quimb.tensor as qtn from autoray import do from math import log, log1p, cosh, sinh, cos, pi def TRG( beta, chi, iterations, j=1.0, h=0.0, cutoff=0.0, lazy=False, to_backend=None, progbar=False, **split_opts ): """Run the TRG algorithm on the square lattice. Parameters ---------- beta : float Inverse temperature. chi : int The maximum bond dimension. iterations : int The number of iterations, the overall effective lattice size is then ``(2**iterations, 2**iterations)``, with PBC. j : float, optional The coupling constant. h : float, optional The external field. cutoff : float, optional The cutoff for the bond truncations. lazy : bool, optional Whether to explicitly contract the effective site tensor at each iteration (``False``), or treat it lazily as the loop from the last iteration, allowing a more efficient iterative decomposition at large ``chi``. to_backend : callable, optional A function that takes a numpy array and converts it to the desired backend tensor. Returns ------- f : scalar The free energy per site. """ if lazy and cutoff == 0.0: # by default use a low-rank iterative decomposition split_opts.setdefault('method', 'svds') # setup the initial single site array, allowing custom backends t = qtn.tensor_gen.classical_ising_T_matrix(beta, j=j, h=h, directions='lrud') if to_backend is not None: t = to_backend(t) # This is the effective lattice # # u u # | | # l--A--r .. l--A--r # | | # d d # : : # u u # | | # l--A--r .. l--A--r # | | # d d # A = qtn.Tensor(t, ('d', 'l', 'u', 'r')) # track the very large overall scalar in log with this exponent = 0.0 if progbar: import tqdm its = tqdm.trange(2 * iterations) else: its = range(2 * iterations) for i in its: # split site tensor in two ways: # u u # | | # l--A--r -> l--AL~~b~~AU--r # | | # d d AL, AU = A.split( left_inds=['d', 'l'], get='tensors', bond_ind='b', max_bond=chi, cutoff=cutoff, **split_opts) # u u # | | # l--A--r -> l--BU~~b~~BL--r # | | # d d BU, BL = A.split( left_inds=['l', 'u'], get='tensors', bond_ind='b', max_bond=chi, cutoff=cutoff, **split_opts) # reindex to form a plaquette # u # l ~~BL--AL~~ # | | w/ inner loop indices: dp, lp, up, rp # ~~AU--BU~~ r # d AU.reindex_({'b': 'd', 'r': 'dp', 'u': 'lp'}) BL.reindex_({'b': 'l', 'd': 'lp', 'r': 'up'}) AL.reindex_({'b': 'u', 'l': 'up', 'd': 'rp'}) BU.reindex_({'b': 'r', 'u': 'rp', 'l': 'dp'}) # we can just form the TN of this loop and treat like a tensor A = (AU | BL | AL | BU) if not lazy: # ... or contract to dense A tensor explicitly A = A.contract() # bookeeping: move normalization into separate 'exponent' nfact = A.largest_element() A /= nfact exponent *= 2 # first account for lattice doubling in size exponent += do('log', nfact) # perform the final periodic trace mantissa = A.trace(['u', 'd'], ['l', 'r']) # combine with the separately tracked exponent logZ = do('log', mantissa) + exponent N = 2**(iterations * 2) return - logZ / (N * beta)
Note we are mostly just are manipulating a few objects at the
Tensor level. However, our main object
A can actually be a
TensorNetwork because many methods have exactly
the same signature and usage, specifically here:
4.2. Run the algorithm¶
We can run the function for pretty large
chi if we use this lazy iterative
feature, (which doesn’t affect accuracy):
[2]:
chi = 64 # the critical temperature is known analytically beta = log1p(2**0.5) / 2 f = TRG( beta=beta, chi=chi, iterations=16, # L = 2**16 lazy=True, # lazily treat loop TN as new tensor progbar=True, ) f
100%|███████████████████████████████████████████| 32/32 [01:33<00:00, 2.92s/it]
[2]:
-2.1096509887409742
4.3. Check against exact result¶
The exact free energy is also known analytically in the thermodynamic limit[4, 5], which we can compute here as a check:
[3]:
def free_energy_2d_exact(beta, j=1.0): from scipy.integrate import quad def inner1(theta1, theta2): return log( cosh(2 * beta * j)**2 - sinh(2 * beta * j) * cos(theta1) - sinh(2 * beta * j) * cos(theta2) ) def inner2(theta2): return quad( lambda theta1: inner1(theta1, theta2), 0, 2 * pi, )[0] I = quad(inner2, 0, 2 * pi)[0] return -(log(2) + I / (8 * pi**2)) / beta fex = free_energy_2d_exact(beta)
So our relative error is given by:
[4]:
err = 1 - f / fex err
[4]:
7.388294231969184e-08
4.4. Extensions¶
Which is pretty decent, though methods which take into account the environement when truncating can do even better. Things you might try: | https://quimb.readthedocs.io/en/latest/examples/ex_tn_TRG.html | CC-MAIN-2021-49 | refinedweb | 851 | 55.03 |
updated copyright years
\ paths.fs path file handling 03may97jaw \ Copyright (C) 1995,1996,1997,1998,2000,2003,2004,2005,2006,2007,2008. \ include string.fs [IFUNDEF] +place : +place ( adr len adr ) 2dup >r >r dup c@ char+ + swap move r> r> dup c@ rot + swap c! ; [THEN] [IFUNDEF] place : place ( c-addr1 u c-addr2 ) 2dup c! char+ swap move ; [THEN] Variable fpath ( -- path-addr ) \ gforth Variable ofile Variable tfile : os-cold ( -- ) fpath $init ofile $init tfile $init pathstring 2@ fpath only-path init-included-files ; \ The path Gforth uses for @code{included} and friends. : also-path ( c-addr len path-addr -- ) \ gforth \G add the directory @i{c-addr len} to @i{path-addr}. >r r@ $@len IF \ add separator if necessary s" |" r@ $+! 0 r@ $@ + 1- c! THEN r> $+! ; : clear-path ( path-addr -- ) \ gforth \G Set the path @i{path-addr} to empty. s" " rot $! ; : $@ ; : next-path ( addr u -- addr1 u1 addr2 u2 ) \ addr2 u2 is the first component of the path, addr1 u1 is the rest 0 $split 2swap ; : ; : pathsep? dup [char] / = swap [char] \ = or ; : need/ ofile $@ 1- + c@ pathsep? 0= IF s" /" ofile $+! THEN ; : extractpath ( adr len -- adr len2 ) BEGIN dup WHILE 1- 2dup + c@ pathsep? IF EXIT THEN REPEAT ; : remove~+ ( -- ) ofile $@ s" ~+/" string-prefix? IF ofile 0 3 $del THEN ; : expandtopic ( -- ) \ stack effect correct? - anton \ expands "./" into an absolute name ofile $@ s" ./" string-prefix? IF ofile $@ 1 /string tfile $! includefilename 2@ extractpath ofile $! \ care of / only if there is a directory ofile $@len IF need/ THEN tfile $@ over c@ pathsep? IF 1 /string THEN ofile $+!@ move r> endif endif + nip over - ; \ test cases: \ s" z/../../../a" compact-filename type cr \ s" ../z/../../../a/c" compact-filename type cr \ s" /././//./../..///x/y/../z/.././..//..//a//b/../c" compact-filename type cr : reworkdir ( -- ) remove~+ ofile $@ compact-filename nip ofile $!len ; : open-ofile ( -- fid ior ) \G opens the file whose name is in ofile expandtopic reworkdir ofile $@ r/o open-file ; : check-path ( adr1 len1 adr2 len2 -- fid 0 | 0 ior ) >r >r ofile $! need/ r> r> ofile $+! $! open-ofile dup 0= IF >r ofile $@ r> THEN EXIT ELSE r> -&37 >r path>string BEGIN next-path dup WHILE r> drop 5 pick 5 pick check-path dup 0= IF drop >r 2drop 2drop r> ofile $@ ; | http://www.complang.tuwien.ac.at/viewcvs/cgi-bin/viewcvs.cgi/gforth/kernel/paths.fs?view=auto&rev=1.37&sortby=rev&only_with_tag=MAIN | CC-MAIN-2013-20 | refinedweb | 377 | 84.68 |
zc.metarecipe 0.2.0
============
Buildout recipes provide reusable Python modules for common configuration tasks. The most widely used recipes tend to provide low-level functions, like installing eggs or software distributions, creating configuration files, and so on. The normal recipe framework is fairly well suited to building these general components.
Full-blown applications may require many, often tens, of parts. Defining the many parts that make up an application can be tedious and often entails a lot of repetition. Buildout provides a number of mechanisms to avoid repetition, including merging of configuration files and macros, but these, while useful to an extent, don’t scale very well. Buildout isn’t and shouldn’t be a programming language.
Meta-recipes allow us to bring Python to bear to provide higher-level abstractions for buildouts.
A meta-recipe is a regular Python recipe that primarily operates by creating parts. A meta recipe isn’t merely a high level recipe. It’s a recipe that defers most of it’s work to lower-level recipe by manipulating the buildout database.
Unfortunately, buildout doesn’t yet provide a high-level API for creating parts. It has a private low-level API which has been promoted to public (meaning it won’t be broken by future release), and it’s straightforward to write the needed high-level API, but it’s annoying to repeat the high-level API in each meta recipe.
This small package provides the high-level API needed for meta recipes and a simple testing framework. It will be merged into a future buildout release.
A presentation at PyCon 2011 described early work with meta recipes.
A simple meta-recipe example
Let’s look at a fairly simple meta-recipe example. First, consider a buildout configuration that builds a database deployment:
[buildout] parts = ctl pack [deployment] recipe = zc.recipe.deployment name = ample user = zope [ctl] recipe = zc.recipe.rhrc deployment = deployment chkconfig = 345 99 10 parts = main [main] recipe = zc.zodbrecipes:server deployment = deployment address = 8100 path = /var/databases/ample/main.fs zeo.conf = <zeo> address ${:address} </zeo> %import zc.zlibstorage <zlibstorage> <filestorage> path ${:path} </filestorage> </zlibstorage> [pack] recipe = zc.recipe.deployment:crontab deployment = deployment times = 1 2 * * 6 command = ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address}
This buildout doesn’t build software. Rather it builds configuration for deploying a database configuration using already-deployed software. For the purpose of this document, however, the details are totally unimportant.
Rather than crafting the configuration above every time, we can write a meta-recipe that crafts it for us. We’ll use our meta-recipe as follows:
[buildout] parts = ample [ample] recipe = com.example.ample:db path = /var/databases/ample/main.fs
The idea here is that the meta recipe allows us to specify the minimal information necessary. A meta-recipe often automates policies and assumptions that are application and organization dependent. The example above assumes, for example, that we want to pack to 3 days in the past on Saturdays.
So now, let’s see the meta recipe that automates this:
import zc.metarecipe class Recipe(zc.metarecipe.Recipe): def __init__(self, buildout, name, options): super(Recipe, self).__init__(buildout, name, options) self.parse(''' [deployment] recipe = zc.recipe.deployment name = %s user = zope ''' % name) self['main'] = dict( recipe = 'zc.zodbrecipes:server', deployment = 'deployment', address = 8100, path = options['path'], **{ 'zeo.conf': ''' <zeo> address ${:address} </zeo> %import zc.zlibstorage <zlibstorage> <filestorage> path ${:path} </filestorage> </zlibstorage> '''} ) self.parse(''' [pack] recipe = zc.recipe.deployment:crontab deployment = deployment times = 1 2 * * 6 command = ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address} [ctl] recipe = zc.recipe.rhrc deployment = deployment chkconfig = 345 99 10 parts = main ''')
The meta recipe just adds parts to the buildout. It does this by calling inherited __setitem__ and parse methods. The parse method just takes a string in ConfigParser syntax. It’s useful when we want to add static, or nearly static part data. The setitem syntax is useful when we have non-trivial computation for part data.
The order that we add parts is important. When adding a part, any string substitutions and other dependencies are evaluated, so the referenced parts must be defined first. This is why, for example, the pack part is added after the main part.
Note that the meta recipe supplied an integer for one of the options. In addition to strings, it’s legal to supply integer and unicode values.
Testing
Now, let’s test it. We’ll test it without actually running buildout. Rather, we’ll use a faux buildout provided by the zc.metarecipe.testing module.
>>> import zc.metarecipe.testing >>> buildout = zc.metarecipe.testing.Buildout()>>> _ = Recipe(buildout, 'ample', dict(path='/var/databases/ample/main.fs')) [deployment] name = ample recipe = zc.recipe.deployment user = zope [main] address = 8100 deployment = deployment path = /var/databases/ample/main.fs recipe = zc.zodbrecipes:server zeo.conf = <zeo> address ${:address} </zeo> <BLANKLINE> %import zc.zlibstorage <BLANKLINE> <zlibstorage> <filestorage> path ${:path} </filestorage> </zlibstorage> [ctl] chkconfig = 345 99 10 deployment = deployment parts = main recipe = zc.recipe.rhrc [pack] command = ${buildout:bin-directory}/zeopack -d3 -t00 ${main:address} deployment = deployment recipe = zc.recipe.deployment:crontab times = 1 2 * * 6
When we call our recipe, it will add sections to the test buildout and these are simply printed as added, so we can verify that the correct data was generated.
That’s pretty much it.
Changes
0.2.0 (2012-09-24)
- When setting option values, unicode and int values will be converted to strings. Other non-string values are rejected. Previously, it was easy to get errors from buildout when setting options with values read from ZooKeeper trees, which are unicode due to the use of JSON.
- Fixed: When using the meta-recipe parse method, the order that resulting sections were added was non-deterministic, due to the way ConfigParser works. Not sections are added to a buildout in sortd order, by section name.
0.1.0 (2012-05-31)
Initial release
- Downloads (All Versions):
- 0 downloads in the last day
- 0 downloads in the last week
- 91 downloads in the last month
- Author: Jim Fulton
- License: ZPL 2.1
- Package Index Owner: J1m
- DOAP record: zc.metarecipe-0.2.0.xml | https://pypi.python.org/pypi/zc.metarecipe/0.2.0 | CC-MAIN-2016-18 | refinedweb | 1,029 | 59.19 |
table of contents
NAME¶
bdflush - start, flush, or tune buffer-dirty-flush daemon
SYNOPSIS¶
#include <sys/kdaemon.h>
int bdflush(int func, long *address); int bdflush(int func, long data);
DESCRIPTION¶ Linux kernel source file fs/buffer.c.
RETURN VALUE¶
If func is negative or 0 and the daemon successfully starts, bdflush() never returns. Otherwise, the return value is 0 on success and -1 on failure, with errno set to indicate the error.
ERRORS¶
-.
VERSIONS¶
Since version 2.23, glibc no longer supports this obsolete system call.
CONFORMING TO¶
bdflush() is Linux-specific and should not be used in programs intended to be portable.
SEE ALSO¶
sync(1), fsync(2), sync(2)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://dyn.manpages.debian.org/unstable/manpages-dev/bdflush.2.en.html | CC-MAIN-2022-21 | refinedweb | 147 | 66.54 |
- How do I prevent showing dataframe index when I write it out as a table?
- Even after rounding in the dataframe, when I write out the table, it is still showing something like 11.5000 for some cells. How do I get rid of those?
Thanks in advance.
Thanks in advance.
import streamlit as st import pandas as pd import numpy as np df = pd.DataFrame( np.random.randn(50, 20), columns=('col %d' % i for i in range(20)) ) st.table(df.style.set_precision(2))
?
Best,
Fanilo
Hi,
Adding on @andfanilo’s answer,
For first, you could try two of these ugly hacks.
import pandas as pd import streamlit as st df = pd.DataFrame({"x": [1, 2, 3, 4], "y": list("abcd")}) # set first td and first th of every table to not display st.markdown(""" <style> table td:nth-child(1) { display: none } table th:nth-child(1) { display: none } </style> """, unsafe_allow_html=True) st.table(df)
or
import pandas as pd import streamlit as st df = pd.DataFrame({"x": [1, 2, 3, 4], "y": list("abcd")}) # set index to empty strings df.index = [""] * len(df) st.table(df)
Both approaches have their drawbacks,
For the second one You can try printing as type “str”, ( again just a hack )
import pandas as pd import streamlit as st df = pd.DataFrame({"x": [1.0, 2.13333333, 3.3, 4.2], "y": list("abcd")}) st.write(df.round(2).astype("str"))
Hope it helps !
Thank you so much @ash2shukla and @andfanilo. Both solutions work beautifully.
One additional question: when we display dataframe, which allows sorting. Is there a way enable filtering, something like Dash DataTable? If not, I really wish someone may have started a component for it. | https://discuss.streamlit.io/t/questions-on-st-table/6878 | CC-MAIN-2020-50 | refinedweb | 287 | 77.74 |
user configuration options in software
I am interested in your opinions on how to handle user options in software.
I see three options at the moment, but I am probably missing something.
I have a function foo() that opens a file, and then does something else. There is a seperate config class, which has a property filename, which contains the file I want to open. I know I could also use global variables or something like that, but I like the explicitness of using a module or a class.
Option1 (in python, should be self-explanatory):
def foo():
"""call as foo(). depends on config class being known/initialized"""
open(config.filename)
bar(config.someoption,config.otheroption)
Option2:
def foo(config):
"""call as foo(config)"""
open(config.filename)
bar(config.someoption,config.otheroption)
Option3:
def foo(filename,someoption,otheroption):
"""call as foo(config.filename,config.someoption,config.otheroption)""""
open(filename)
bar(someoption,otheroption)
I hope I explained my problem clearly. Are there any books that cover these kind of issues? Is it too simple?
Too many options
Monday, April 19, 2004
Is there some reason you don't want to make foo one of config's methods?
Anony Coward
Monday, April 19, 2004
Among those three options, I would definately choose option number 2. I also agree with the second poster.
Karin
Tuesday, April 20, 2004
I vote for option 3. I feel code should be configuration agnostic. I came to this conclusion when I wanted to grab some classes from a different project here at work to incorporate into my own project. They did exactly what I want, except they were tied to the configuration from that project. I had to hunt through the code to find all the configuration tie-ins before I could make it work for me.
Of course, depending on the scope and size of the project, the KISS principal can make a good argument for the other two options.
madking
Tuesday, April 20, 2004
how about option 4:
def foo(filename,someoption = config.someoption,otheroption =config.otheroption ):
"""call as foo()""""
open(filename)
bar(someoption,otheroption)
i guess this could work if your options are strings or ints.
just-thingking-aloud.
Tuesday, April 20, 2004
That'll only work if the function is defined after the config object is created, and the config.someoption and config.otheroption attributes have been set. Also won't work if you later change the values of config.someoption, etc.
David M. Cooke
Tuesday, April 20, 2004
I also like to have the code be totally independent of the configuration, so I lean towards option 3. But of course, this code is simplified, so I will get functions with many parameters.
As for making foo() a method of config: foo() actually is a method of another class, the code has many classes, and I would like to have the configuration information (read from registry/.ini file etc.) in one place.
It seems like there is not really one obvious way to do this.
Thanks for your input.
Too many options
Wednesday, April 21, 2004
When you have lots of parameters, you can always bundle them up into a containing class, to simplify method signatures.
Another option is to kind of combine options. If you have a package which requires configuration options, create a package configuration object. Classes inside this package can query this configuration object for their information rather than having lots of parameters to methods. The application which uses this package can use whatever configuration mechanism it wants, and then it pushes the configuration information into the package configuration object. This gives the seperation of concerns that I think you are looking for.
This does create an additional layer of indirection, so I would only use it where the design is large enough to warrant the additional complexity.
madking
Thursday, April 22, 2004
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware4/default.asp?cmd=show&ixPost=133989&ixReplies=7 | CC-MAIN-2018-17 | refinedweb | 648 | 56.96 |
The QtSoapHttpTransport class provides a mechanism for transporting SOAP messages to and from other hosts using the HTTP protocol. More...
#include <QtSoapHttpTransport>
Inherits QObject..
QtSoapHttpTransport usage().toLatin1().constData() << endl; return; } const QtSoapType &returnValue = response.returnValue(); if (returnValue["temperature"].isValid()) { cout << "The current temperature is " << returnValue["temperature"].toString().toLatin1().constData() << " degrees Celcius." << endl; }
See also QtSoapMessage and QtSoapType.
Constructs a QtSoapHttpTransport object. Passes parent to QObject's constructor.
Destructs a QtSoapHttpTransport.
Returns the most recently received response SOAP message. This message could be a Fault message, so it is wise to check using QtSoapMessage::isFault() before processing the response.
Returns a pointer to the QNetworkAccessManager object used by this transport. This is useful if the application needs to connect to its signals, or set or read its cookie jar, etc.
Returns a pointer to the QNetworkReply object of the current (or last) request, or 0 if no such object is currently available.
This is useful if the application needs to access the raw header data etc.
This signal is emitted when a SOAP response is received from a remote peer.
See also getResponse().
This signal is emitted when a SOAP response is received from a remote peer. The received response is available in response. This signal is emitted in tandem with the argument-less responseReady() signal.
See also responseReady().
Sets the HTTP header SOAPAction to action.
Sets the host this transport should connect to. The transport mode will be HTTP, unless useSecureHTTP is set, in which case it will be HTTPS. This transport will connect to the well-known ports by default (80 for HTTP, 443 for HTTPS), unless a different, non-zero port is specified in port.
Submits the SOAP message request to the path path on the HTTP server set using setHost(). | http://doc.trolltech.com/solutions/4/qtsoap/qtsoaphttptransport.html | crawl-003 | refinedweb | 293 | 52.56 |
I am newbie to Python so I don't really understand this. It's some kind of Turing machine that should write binary number, but I can't figure out what's going on after these rules
from collections import defaultdict
import operator
# Binary counter
# (Current state, Current symbol) : (New State, New Symbol, Move)
rules = {
(0, 1): (0, 1, 1),
(0, 0): (0, 0, 1),
(0, None): (1, None, -1),
(1, 0): (0, 1, 1),
(1, 1): (1, 0, -1),
(1, None): (0, 1, 1),
}
# from here I don't really understand what's going on
def tick(state=0, tape=defaultdict(lambda: None), position=0):
state, tape[position], move = rules[(state, tape[position])]
return state, tape, position + move
system = ()
for i in range(255):
system = tick(*system)
if(system[2] == 0):
print(map(operator.itemgetter(1), sorted(system[1].items())))
It is a state machine. At each tick a new state is computed based on the old state and the contents of tape at 'tape position' in this line:
state, tape[position], move = rules[(state, tape[position])]
This statement is a destructuring assignment. The righthand side of the assignment will give you an entry of rules, which is a tuple of three elements. These three elements will be assigend to state, tape [position] and move respectively.
Another thing that might puzzle you is the line:
system = tick(*system)
especially the *.
In this line the (processor clock) tick function is called with the contents of tuple 'system' unpacked into separate parameters.
I hope this is clear enough, but the fact that you're interested in a Turing machine tells me that you've got something with computer programming... ;) | https://codedump.io/share/K2H2vqCRpoUD/1/can-someone-explain-to-me-this-turing-machine-code | CC-MAIN-2017-26 | refinedweb | 278 | 52.12 |
Hide Forgot
I would like to rebase sos package in RHEL7.7 to upstream 3.7 version planned to be released in Q1 2019 / before 7.7 devel freeze. Reasons:
- several features and adjutments are planned for the release that can be nontrivial to backport one after another (while the features are worth to offer to customers and CEE). So it is much easier to pack a stable upstream version than try to apply patches to downstream.
- there will be smaller "testing distance" between upstream and RHEL
- rebasing will simplify future downstream work, esp. later on when working on z-stream patches
Version-Release number of selected component (if applicable):
sos-3.6-*
How reproducible:
100%
Steps to Reproduce:
1. rpm -q sos
Actual results:
sos-3.6-*
Expected results:
sos-3.7-*
Additional info:
technically, FailedQA since:
# rpm -q sos
sos-3.7-1.el7.noarch
# sosreport
sosreport (version 3.6)
..
Trivial fix (just the 2nd part applies to downstream):
diff --git a/sos.spec b/sos.spec
index 95249670..68aedcfd 100644
--- a/sos.spec
+++ b/sos.spec
@@ -2,7 +2,7 @@
Summary: A set of tools to gather troubleshooting information from a system
Name: sos
-Version: 3.6
+Version: 3.7
Release: 1%{?dist}
Group: Applications/System
Source0:{version}.tar.gz
diff --git a/sos/__init__.py b/sos/__init__.py
index c436bd20..dfc7ed5f 100644
--- a/sos/__init__.py
+++ b/sos/__init__.py
@@ -25,7 +25,7 @@ if six.PY3:
else:
from ConfigParser import ConfigParser, ParsingError, Error
-__version__ = "3.6"
+__version__ = "3.7"
gettext_dir = "/usr/share/locale"
gettext_app = "sos"
will fix in a day or. | https://bugzilla.redhat.com/show_bug.cgi?id=1656812 | CC-MAIN-2020-10 | refinedweb | 267 | 63.46 |
#include <registers.h>
This class provides an accessor of fields contained in one or more consecutive UM7 registers. Each register is nominally a uint32_t, but XYZ vectors are sometimes stored as a pair of int16_t values in one register and one in the following register. Other values are stored as int32_t representation or float32s.
This class takes care of the necessary transformations to simplify the actual "business logic" of the driver.
Definition at line 93 of file registers.h.
Definition at line 96 of file registers.h.
This is ridiculous to have a whole source file for this tiny implementation, but it's necessary to resolve the otherwise circular dependency between the Registers and Accessor classes, when Registers contains Accessor instances and Accessor is a template class.
Definition at line 39 of file registers.cpp.
Number/address of the register in the array of uint32s which is shared with the UM7 firmware.
Definition at line 107 of file registers.h.
Length of how many sub-register fields comprise this accessor. Not required to stay within the bounds of a single register.
Definition at line 116 of file registers.h.
Definition at line 119 of file registers.h.
Width of the sub-register field, in bytes, either 2 or 4.
Definition at line 111 of file registers.h. | http://docs.ros.org/en/kinetic/api/um7/html/classum7_1_1Accessor__.html | CC-MAIN-2020-50 | refinedweb | 217 | 50.02 |
In this lab you'll learn how Firebase User Management works using Google Sign-In as an example. With the skills you learn, you'll be able to quickly move on to using other providers such as Facebook and Twitter.
Using Android Studio, create a new Android App.
Click File->New Project, and in the first dialog give your application a name and company domain. This will generate a package name. Make a note of this package name. You will need it later.
Click Next, and you'll be asked to select the form factors that your app will run on. Just keep the defaults and click Next.
On the next screen, you'll be asked to Add an Activity to Mobile. Pick ‘Empty Activity' as shown, and click Next.
The next screen asks you to Customize the Activity. Just keep the defaults and click ‘Finish'. You'll now have an empty app. The next step is to add the Firebase dependencies to this.
When using Android Studio for Android applications, dependency and library configuration is managed using gradle. You'll find that there are two build.gradle files that you have to manage, and it can often be confusing as to what goes where. In Android Studio, if you select the ‘Android' tab in the project explorer, you'll see a ‘Gradle Scripts' folder. Open this and you'll see both build.gradle files:
The selected one -- with ‘(Module: app)' listed after it -- is typically referred to as the ‘app level' build.gradle, and the other is the ‘root' or ‘project' level one.
Open the app-level build.gradle file. At the bottom of it, you'll see a section called dependencies. Edit this to add the Firebase dependencies. When you're done it should look like this:
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    testCompile 'junit:junit:4.12'
    compile 'com.android.support:appcompat-v7:23.3.0'
    compile 'com.google.firebase:firebase-auth:9.0.0'
    compile 'com.google.android.gms:play-services-auth:9.0.0'
}

apply plugin: 'com.google.gms.google-services'
Android Studio will ask you to sync your files because they have been updated. This will give you an error if you do so. Don't worry about this, as there are more changes needed.
The next step is to open the ‘project-level' build.gradle file. You'll need to add a dependency on the Google Services plugin. When you're done, it should look like this:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.1.0'
        classpath 'com.google.gms:google-services:3.0.0'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
If you sync now, there won't be any errors in syncing, but you'll see in the messages window that there's a problem with a missing google-services.json file:
You'll get this in the next step -- when you configure your project in the Firebase Console.
When using Firebase in your apps, you need an associated project in the Firebase Console. This allows you to manage many things about your project, from analytics to data and more. When using Auth, the console is required to turn on the services that you want to use -- and in this lab you'll be using Google Sign-In, so you'll see how to activate that.
In Firebase Console, select ‘Create New Project' and you'll see this dialog:
Give the project a name, and then click the ‘Create Project' button. You'll be taken to the Firebase Overview screen:
At the top of this screen, you'll see options to add Firebase to Android, iOS and Web apps. Choose ‘Add Firebase to your Android App'. You'll see the ‘Enter app details' screen:
You'll need to enter the package name for your app and the debug signing certificate's SHA-1. The package name is what you configured earlier on.
More details on the SHA.
.
Click ‘Add App', and on the next screen, you'll see that a file called ‘google-services.json' gets downloaded. Put this in your app folder as shown. In Android Studio 2, select the ‘Project Files' tab as shown here:
You'll see there's an app section, with an app folder in it. Drop the google-services.json onto that.
Click through the rest of the setup wizard, and you'll be returned to the overview screen. You'll see that your app has now been added to it:
On the left of the screen you'll see an ‘Auth' section. Select it. At the top of the screen there are a number of options. Ensure that ‘Sign In Method' is selected, and you'll see the list of providers:
Select the ‘Google' one, and click the button to enable it. Then press ‘Save'. You're now ready to begin coding the app.
Return to Android Studio. Now if you do a gradle sync, everything will work fine, showing that you've added all the necessary dependencies, and configured the back end on Firebase console.
Now let's start coding a simple sign-in app that uses all of this.
The first step is to edit your layout file to have a Google Sign-In button. In your res/layout folder you'll find activity_main.xml
Edit this file to add a Sign In button and some basic layout like.google.devrel.lmoroney.androidauthcodelab.MainActivity"> <LinearLayout android: <TextView android: <com.google.android.gms.common.SignInButton android: </LinearLayout> </RelativeLayout>
All you've done here is to replace the default ‘Hello World' text view with a Linear Layout containing that text view, and a SignInButton. You've also given the text view an id so it can be accessed in code.
The empty Main Activity that was created for you was declared as simply extending AppCompatActivity. To handle Auth, you'll need to use a GoogleApiClient, which requires you to declare that your class implements GoogleApiClient.OnConnectionFailedListener. In addition, as the buttons will use an OnClickListener, you need to implement the View.OnClickListener interface too.
So update your class declaration like this:
public class MainActivity extends AppCompatActivity implements GoogleApiClient.OnConnectionFailedListener, View.OnClickListener {
Android Studio will give you a red underline here. Don't worry about that for now -- it's just warning you that you haven't implemented some required overrides yet.
Next up you'll need to add some class-level variables that will be shared across the various functions you're writing. Below the class declaration and above the onCreate, add the following:
SignInButton signInButton; TextView statusTextView; GoogleApiClient mGoogleApiClient; private static final String TAG = "SignInActivity"; private static final int RC_SIGN_IN = 9001;
In your onCreate function, you'll next need to add the declarations of a GoogleSignInOptions and a GoogleApiClient. The options object is used to define the type of GoogleSignIn experience you want to access. You'll notice that it requests just the Email address of the user. This simplifies the sign in flow so that no further elevated permissions are required -- you're only accessing their email address. This is then used to construct the Google API Client, which is told to access the Google Sign In API.
Add these lines to the onCreate function:
GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .build(); mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this , this) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build();
You'll finish up your onCreate function by setting up the objects representing the user interface components -- the status text view and the sign in button:
statusTextView = (TextView) findViewById(R.id.status_textview); signInButton = (SignInButton) findViewById(R.id.sign_in_button); signInButton.setOnClickListener(this);
You'll notice that the Sign In Button sets its on click listener to this, so you'll need to implement the onClick override. Here's the code:
@Override public void onClick(View v){ switch(v.getId()){ case R.id.sign_in_button: signIn(); break; } }
When the user clicks anything in the activity this function will be called -- at present it just checks if the click was raised by the sign in button, and if it does, it calls the signIn() function. This will be red right now, because you haven't implemented it yet. Let's implement that next.
Here's the code:
private void signIn(){ Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient); startActivityForResult(signInIntent, RC_SIGN_IN); }
This creates a new Intent called signInIntent, using the Google Sign In API. It then starts an activity for the result of that intent. When you run this code later, the effect of this will be that the account picker will be displayed.
When the user picks an account, an Activity Result will be generated. So the next step is to handle this Activity Result:
@Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data); handleSignInResult(result); } }
The Activity was generated with the code RC_SIGN_IN (we declared this earlier), so when we check the activity result, we want to check if it's the result for the activity with this code. It's possible to have multiple activities calling back, so the code is used to differentiate between them. When the request code matches, we know that the data returned from the activity will have a Sign In Result, so we can create a GoogleSignInResult object from it. We then handle that in handleSignInResult. You'll implement that next:
private void handleSignInResult(GoogleSignInResult result) { if (result.isSuccess()) { GoogleSignInAccount acct = result.getSignInAccount(); statusTextView.setText("Hello, " + acct.getDisplayName()); } else { } }
This gets the account details from the result, and pulls the user name from that, setting it to the contents of the text view.
There's one last thing you need to do, and that's implementing the OnConnectionFailedListener override, which is a requirement of using the GoogleApiClient.
@Override public void onConnectionFailed(ConnectionResult connectionResult) { // An unresolvable error has occurred and Google APIs (including Sign-In) will not // be available. Log.d(TAG, "onConnectionFailed:" + connectionResult); }
Now when you run your app, you'll see the following:
Clicking the Sign In button will call up the Account Picker:
Once you've chosen the picker, you will get signed in, and the user interface will update to welcome you based on the name associated with the account that you sign in with:
You've taken your first steps into the world of federated authentication and identity. Well done! Some next steps to consider: | https://codelabs.developers.google.com/codelabs/firebase-auth-android/index.html?index=..%2F..%2Findex | CC-MAIN-2017-13 | refinedweb | 1,756 | 57.37 |
Tech Off Thread4 posts
Forum Read Only
This forum has been made read only by the site admins. No new threads or comments can be added.
Namespaces in VB a question and and suggestion
Conversation locked
This conversation has been locked by the site admins. No new comments can be made.
Hello
In C# when you create a new class, the IDE inserts namespace for you. And the namespace is the folder structure in the solution. Is this possible to do in VB ?
I know what namespaces in VB is different then C#, but it should be possible to insert a parameter into the templates to do this ?
Anybody done this ?
For future versions of VB I really hope they will include this. We use namespaces in VB also
Regards
Stein
@steinvk: you can wrap a class in the namespace/end namespace construct. isn't that what you want?
You could update the wizards provided with Visual Studio.
Sorry if I was unclear
I want VB to behave like C# when I add a new class to my solution.
i.e I have a folder called Views in my solution.
When I add a new class to the Views folder I want VB to insert
namespace Views
public class MyNewClass
End Class
end namespace
This is what C# does when you add a new class to a C# solution.
I know that VB don't do this by default now and I haven't found a way to add this to the templates either
Regards
Stein | https://channel9.msdn.com/Forums/TechOff/Namespaces-in-VB-a-question-and-and-suggestion | CC-MAIN-2017-26 | refinedweb | 256 | 81.83 |
Soran
Can I use data annotation validation for database first application?
Soru
Tüm Yanıtlar
Hi mystique99;
I have not done this myself but there is a post in the Microsoft ASP.Net web site that show how this can be done. Please see link below.
Validation with the Data Annotation Validators (C#)
in the section called "Using Data Annotation Validators with the Entity Framework" near the bottom of the page.
Fernando (MCSD)
If a post answers your question, please click "Mark As Answer" on that post and "Mark as Helpful".
Hi mystique99,
You can use "Buddy Classes" to validate database first, you can refer this link here: 14:55
Hi mystique99,
=============================.
=============================
You can use Buddy classes in any applications. 19:26.
- Thanks for checking back with me. I understand the idea of "buddy class" and have tried it in a prototype. But somehow it didn't work for me so I am back to where I started. Would you be able to post a simple sample program to demonstrate? That will be very helpful.
Hi Alan,
I am not sure those examples help me. My application does not have an UI component, it is a web service. I would like to use data annotation to do entity validation. I did create a buddy class but it didn't seem to flag anything when I call saveChanges on the entity.
Where can I get an example? I can send you mine but I am not sure that I can upload my project here.
Thanks.
Mystique99
I think the following article does exactly what you need.
I created a sample based on what it says (did not use all the steps, but the most important ones and it worked. )
If you run into any problems (or if you are not familiar with .tt templates) please let me know. You can email me at juliako@microsoft.com and I can send you a project that I created with more explanations.
Thank you,
Julia
This posting is provided "AS IS" with no warranties, and confers no rights.
- Düzenleyen Julia KornichMicrosoft employee, Moderator 07 Mart 2012 Çarşamba 23:21
Hi,
If I remember a common issue is that the member in the buddy class must be also a property with the same name. Could it be that you created fields rather than properties in the buddy class ?
Else try to show the shortest example that doesn't work. It will be easier to see your code and find out what doesn't work rather than to imagine where you could have missed something. BTW you told you wanted a non MVC sample but we are still not sure what is your context (Silverlight with WCF or WebForms maybe ? If the later this is not yet available (it works now with DynamicData but if I remember it should come in 4.5))
Please always mark whatever response solved your issue so that the thread is properly marked as "Answered".
My project is a WCF service with no UI. I am using entity framework for the data access layer. I was hoping to use data annotation in Metatdata class to define validation rules. When calling SaveChanges on the entity (derived from ObjectContext) object, I was hoping to catch a specific exception other than Exception to handle validation violation. But it didn't work for me.
Here is my metadata class for the Product entity using AdventureWorks database. This follows the examples Julia pointed out above.
See any problems?
namespace EntityDAL { [MetadataType(typeof(ProductMetadata))] public partial class Product { private class ProductMetadata { [Range(0, Double.MaxValue, ErrorMessage = "ListPrice can't be smaller than zero!")] public decimal ListPrice { get; set; } [MaxLength(7)] public global::System.String Name { get; set; } } } } | https://social.msdn.microsoft.com/Forums/tr-TR/404c571d-767a-4998-b5d9-be54ccaf8d73/can-i-use-data-annotation-validation-for-database-first-application?forum=adodotnetentityframework | CC-MAIN-2015-32 | refinedweb | 619 | 66.44 |
/* Exported functions from emit-rtl_EMIT_RTL_H #define GCC_EMIT_RTL_H /* Set the alias set of MEM to SET. */ extern void set_mem_alias_set (rtx, HOST_WIDE_INT); /* Set the alignment of MEM to ALIGN bits. */ extern void set_mem_align (rtx, unsigned int); /* Set the expr for MEM to EXPR. */ extern void set_mem_expr (rtx, tree); /* Set the offset for MEM to OFFSET. */ extern void set_mem_offset (rtx, rtx); /* Set the size for MEM to SIZE. */ extern void set_mem_size (rtx, rtx); /* Return a memory reference like MEMREF, but with its address changed to ADDR. The caller is asserting that the actual piece of memory pointed to is the same, just the form of the address is being changed, such as by putting something into a register. */ extern rtx replace_equiv_address (rtx, rtx); /* Likewise, but the reference is not required to be valid. */ extern rtx replace_equiv_address_nv (rtx, rtx); #endif /* GCC_EMIT_RTL_H */ | http://opensource.apple.com//source/gcc/gcc-5490/gcc/emit-rtl.h | CC-MAIN-2016-44 | refinedweb | 137 | 73.07 |
Scott Dial wrote: > >> >> But doesn't do that for files, and couldn't we do something >> like zip? Of course, we'd want to >> do zip, too. That >> way leads to madness.... >> > > It would make more sense to register protocol handlers to this magical > unification of resource manipulation. But allow me to perform my first > channeling of Guido.. YAGNI. > I'm thinking that it was a tactical error on my part to throw in the whole "unified URL / filename namespace" idea, which really has nothing to do with the topic. Lets drop it, or start another topic, and let this thread focus on critiques of the path module, which is probably more relevant at the moment. -- Talin | https://mail.python.org/pipermail/python-dev/2006-October/069579.html | CC-MAIN-2018-05 | refinedweb | 117 | 72.05 |
process:
%time: Time the execution of a single statement
%timeit: Time repeated execution of a single statement for more accuracy
%prun: Run code with the profiler
%lprun: Run code with the line-by-line profiler
%memit: Measure the memory use of a single statement
%mprun: Run code with the line-by-line memory profiler
The last four commands are not bundled with IPython–you'll need to get the
line_profiler and
memory_profiler extensions, which we will discuss in the following sections.
%timeit:
%%timeit total = 0 for i in range(1000): for j in range(1000): total += i * (-1) ** j
1 loops, best of 3: 407 ms per loop
Sometimes repeating an operation is not the best option. For example, if we have a list that we'd like to sort, we might be misled by a repeated operation. Sorting a pre-sorted list is much faster than sorting an unsorted list, so the repetition will skew the result:
import random L = [random.random() for i in range(100000)] %timeit L.sort()
100 loops, best of 3: 1.9 ms per loop
For this, the
%time magic function may be a better choice. It also is a good choice for longer-running commands, when short, system-related delays are unlikely to affect the result.
Let's time the sorting of an unsorted and a presorted list:
import random L = [random.random() for i in range(100000)] print("sorting an unsorted list:") %time L.sort()
sorting an unsorted list: CPU times: user 40.6 ms, sys: 896 µs, total: 41.5 ms Wall time: 41.5 ms
print("sorting an already sorted list:") %time L.sort()
sorting an already sorted list: CPU times: user 8.18 ms, sys: 10 µs, total: 8.19 ms Wall time: 8.24 ms:
%%time total = 0 for i in range(1000): for j in range(1000): total += i * (-1) ** j
CPU times: user 504 ms, sys: 979 µs, total: 505 ms Wall time: 505 ms
For more information on
%time and
%timeit, as well as their available options, use the IPython help functionality (i.e., type
%time? at the IPython prompt).
%prun¶ {built-in method exec}
The result is a table that indicates, in order of total time on each function call, where the execution is spending the most time. In this case, the bulk of execution time is in the list comprehension inside
sum_of_lists.
From here, we could start thinking about what changes we might make to improve the performance in the algorithm.
For more information on
%prun, as well as its available options, use the IPython help functionality (i.e., type
%prun? at the IPython prompt).
%lprun¶:
$ pip install line_profiler
Next, you can use IPython to load the
line_profiler IPython extension, offered as part of this package:
%load_ext line_profiler
Now the
%lprun command will do a line-by-line profiling of any function–in this case, we need to tell it explicitly which functions we're interested in profiling:
%lprun -f sum_of_lists sum_of_lists(5000)
As before, the notebook sends the result to the pager, but it looks something like this:
Timer unit: 1e-06 s Total time: 0.009382 s File: <ipython-input-19-fa2be176cc3e> Function: sum_of_lists at line 1 Line # Hits Time Per Hit % Time Line Contents ============================================================== 1 def sum_of_lists(N): 2 1 2 2.0 0.0 total = 0 3 6 8 1.3 0.1 for i in range(5): 4 5 9001 1800.2 95.9 L = [j ^ (j >> i) for j in range(N)] 5 5 371 74.2 4.0 total += sum(L) 6 1 0 0.0 0.0 return total.
For more information on
%lprun, as well as its available options, use the IPython help functionality (i.e., type
%lprun? at the IPython prompt).
%memitand
%mprun¶
Another aspect of profiling is the amount of memory an operation uses.
This can be evaluated with another IPython extension, the
memory_profiler.
As with the
line_profiler, we start by
pip-installing the extension:
$ pip install memory_profiler
Then we can use IPython to load the extension:
_lists function, with one addition that will make our memory profiling results more clear:
%%file mprun_demo.py def sum_of_lists(N): total = 0 for i in range(5): L = [j ^ (j >> i) for j in range(N)] total += sum(L) del L # remove reference to L return total
Overwriting mprun_demo.py
We can now import the new version of this function and run the memory line profiler:
from mprun_demo import sum_of_lists %mprun -f sum_of_lists sum_of_lists(1000000)
The result, printed to the pager, gives us a summary of the memory use of the function, and looks something like this:
Filename: ./mprun_demo.py Line # Mem usage Increment Line Contents ================================================ 4 71.9 MiB 0.0 MiB L = [j ^ (j >> i) for j in range(N)] Filename: ./mprun_demo.py Line # Mem usage Increment Line Contents ================================================ 1 39.0 MiB 0.0 MiB def sum_of_lists(N): 2 39.0 MiB 0.0 MiB total = 0 3 46.5 MiB 7.5 MiB for i in range(5): 4 71.9 MiB 25.4 MiB L = [j ^ (j >> i) for j in range(N)] 5 71.9 MiB 0.0 MiB total += sum(L) 6 46.5 MiB -25.4 MiB del L # remove reference to L 7 39.1 MiB -7.4 MiB return total
Here the
Increment column tells us how much each line affects the total memory budget: observe that when we create and delete the list
L, we are adding about 25 MB of memory usage.
This is on top of the background memory usage from the Python interpreter itself.
For more information on
%memit and
%mprun, as well as their available options, use the IPython help functionality (i.e., type
%memit? at the IPython prompt). | https://nbviewer.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/01.07-Timing-and-Profiling.ipynb | CC-MAIN-2022-40 | refinedweb | 959 | 73.47 |
Is there a shortcut to programming mastery? A shortcut doesn't necessarily have to mean an easier way. (Because nothing that is easy is worth doing, right?)
But is there a recommended and efficient learning method to guide your passion for computers into results?
Pro programmers may have various opinions on this, but some things are constant. Hard work, motivation, focus, and lots of practice prevail over any shortcuts people might suggest. Nonetheless, the following tips may also help:
1. Get yourself to a computer.
You may feel impatient and eager to begin creating cool codes for games, web pages, and phone apps. You may have dreamed up designs and processes and methods, only to get disheartened by your lack of programming knowledge.
The only way you can move towards mastery is to get yourself to a computer and start learning. Or better still, complement your learning with pen and paper, because writing code without the support of automatic syntax highlighting will force you to pay particular attention to the syntax of the language you've chosen.
As long as you begin, you'll get there eventually.
2. Don't stress about what you don't understand.
It's easy to get bogged down by completely new concepts and jargon when you begin coding. There will be many things you won't understand. Terms like variables and classes will be entirely new to you. Cut yourself some slack; you're learning something new. Don't get stressed about what you don't understand.
Start learning by trying to think like a computer. It's not going to be easy. Computers have a little mystery about them, even for programmers who have been coding for years. It may take you some time to truly grasp how binaries work, or what an assembly language is, or why you need to compile code in some languages. It's all part of the learning process.
3. Spend all your free time on programming.
Some programmers say that to be proficient you need to spend at least 10,000 hours with code. That's a metaphorical number. You don't need to bring out your calendar and calculator. It only means that you need to spend as much time with code as you can. As with any new skill, the more time you spend with it, the better you'll get at it. Language syntaxes will become more familiar. You'll be able to think up different ways of solving problems. Try to keep a notebook with you, so you can jot down any code snippet that comes to mind on the fly.
4. Pacing doesn't matter.
You can learn to code at your own pace. There is no ideal time in which to learn how to code. Some people can take a couple of years to cover the same distance that others take in a few weeks, or even months. It doesn't matter how long you take. The key is to make sure you're making the most efficient use of your time by thinking as much as you can about code.
5. Join a programming group.
It is always useful to join a like-minded group of people with whom you can discuss your interests. In a mixed group, there will be some people who are ahead of you in learning, while others who are behind you. In helping out the latter, you'll brush up your concepts and discover new paradigms. Those who are ahead of you will be able to offer you tips. If you can move to Silicon Valley or a place where there are lots of programming jobs and everyone including the sparrows speak in code, you'll learn much faster!
6. Find resources and read the right books.
You may not know what books to buy or what resources to use. The popular programming languages have so much documentation built around them that it can feel frustrating not knowing where to begin.
The best book on the language you've chosen can come for a few dozen dollars. Remember, you may learn more from it than what several free websites can teach you. As long as you stick to the language you're learning and don't lose your motivation, you'll find that spending the money on those books will be worth it in the long run.
7. Find a mentor to lend a guiding hand.
Google can't give you all the answers you need when you come across problems you can't solve. You need to find some good programmers you can turn to, who will be able to explain in a few minutes what you'd need hours to decipher on your own. The possibility of meeting potential mentors is one of the reasons why joining meetups is a good idea. If you sign up for a college course, then reach out to tutors and professors whenever you feel the need. Most people as passionate about programming as you will be happy to help.
8. Try to grasp Object Oriented Programming.
One of the most important concepts of modern day programming is Object Oriented Programming. C++, Java, C are all examples of OOP programming. OOP makes programs easier to manage. It's the theory that powers the popular coding languages of the world. If you hear of terms like inheritance, objects, classes, etc. then the conversation is about OOP. Spend some time on clearing your OOP concepts, take the help of mentors, and you'll be able to join in the conversation.
9. Share your code with others.
Just as writing skills don't get better unless you've shared your work with other people, your coding skills won't quickly improve unless people offer feedback on your code. There are many ways of solving problems with code, and you'll never know more efficient ways unless programmers who are good at it tell you. Share your code, take constructive criticism, and you'll be well on your way to mastery.
10. Use GitHub for easier version control.
Many people don't think about version control until it is too late, and they have to change or replace entire blocks of code. Make version control a habit, since it's a necessity for every developer today. You can use GitHub to make version control easier. GitHub is a website and development platform that lets you collaborate with others, learn from them, and incorporate version control.
11. Reward yourself by working on cool projects.
You'll often have the urge to work on cool projects like games or code that doesn't directly relate to what you're learning. It's okay and even advisable to give in to these urges once in a while. They'll keep you motivated, and you'll have fun learning along the way.
Try not to fast track to projects without picking up the fundamentals of coding, however. You'll learn to code faster if you understand things like binaries, logic gates, loops, procedures, and other low-level things before you begin to build entire applications with superficial knowledge of syntax.
12. Make your code as readable as possible.
When you write code, even in the most abstract of languages like C++, try to keep the code readable. Others won't understand your code - and won't bother to try - if you don't learn to write clean code.
What does 'clean code' mean? Brackets on empty lines, consistent formatting, enough white spaces to make the code easy on the eye, comments preferably on new lines, and self-explanatory names are some of the features of clean code. But there is no single way of making code readable. Everyone has their preferences. Just make sure your code is as simple and elegant as possible.
Here is something to think about. English may seem most readable for a certain bit of code, such as for the problem:
Print consecutive integers from 1 to 9, each in a separate line
When you convert this into code, in Scala, it will read:
1 until 9 foreach println
In C++, the code will read:
#include <iostream>
using std::cout;
using std:end1;
int main()
{
//count
for (int i=0; i,9; i++)
{
cout<<i<<endl;
}
//Program end
return 0;
}
Which code is more readable?
13. Teach code as you learn.
There’s a beautiful concept that programmers know as rubber duck debugging. The concept describes the process of explaining your code to a yellow rubber duck like the ones you took into the bath as a child. While you’re going through the exercise, you may suddenly have a moment when the problem in your code becomes crystal clear!
When you verbalize code, you trigger a different part of your brain that lets you see the problem from a different perspective.
Know that if you can’t explain your code in simple terms, you don’t understand it clearly.
These tips should get you started on the road to code mastery. It’s not going to be an easy road, but it will often be fun, and the rewards will be proportionately great. You may even land a job when you’ve mastered programming after the proverbial 10,000 hours of hard work!
So, what are you waiting for? Get started on learning code today!
Be sure to come back to this article once you’ve mastered coding techniques, and let us know your experience on this journey in the comments below. | https://www.tr.freelancer.com/community/articles/13-tips-to-master-programming-faster | CC-MAIN-2017-39 | refinedweb | 1,591 | 73.17 |
A surface language for programming Stan models using python syntax
Project description
YAPS
Yaps is a new surface language for Stan. It lets
users write Stan programs using Python syntax. For example, consider the
following Stan program, which models tosses
x of a coin with bias
theta:
data { int<lower=0,upper=1> x[10]; } parameters { real<lower=0,upper=1> theta; } model { theta ~ uniform(0,1); for (i in 1:10) x[i] ~ bernoulli(theta); }
It can be rewritten in Python has follows:
import yaps from yaps.lib import int, real, uniform, bernoulli @yaps.model def coin(x: int(lower=0, upper=1)[10]): theta: real(lower=0, upper=1) <~ uniform(0, 1) for i in range(1,11): x[i] <~ bernoulli(theta)
The
@yaps.model decorator indicates that the function following it
is a Stan program. While being syntactically Python, it is
semantically reinterpreted as Stan.
The argument of the function corresponds to the
data block. The
type of the data must be declared. Here, you can see that
x is an
array of 10 integers between
0 and
1 (
int(lower=0, upper=1)[10]).
Parameters are declared as variables with their type in the body of
the function. Their prior can be defined using the sampling operator
<~ (or
is).
The body of the function corresponds to the Stan model. Python syntax
is used for the imperative constructs of the model, like the
for
loop in the example. The operator
<~ is used to represent sampling
and
x.T[a,b] for truncated distribution.
Note that Stan array are 1-based. The range of the loop is thus
range(1, 11),
that is 1,2, ... 10.
Other Stan blocks can be introduced using the
with syntax of Python.
For example, the previous program could also be written as follows:
@yaps.model def coin(x: int(lower=0, upper=1)[10]): with parameters: theta: real(lower=0, upper=1) with model: theta <~ uniform(0, 1) for i in range(1,11): x[i] <~ bernoulli(theta)
The corresponding Stan program can be displayed using the
print(coin)
Finally, it is possible to launch Bayesian inference on the defined model applied to some data. The communication with the Stan inference engine is based on on PyCmdStan.
flips = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1]) constrained_coin = coin(x=flips) constrained_coin.sample(data=constrained_coin.data)
Note that arrays must be cast into numpy arrays (see pycmdstan documentation).
After the inference the attribute
posterior of the constrained model is an object with fields for the latent model parameters:
theta_mean = constrained_coin.posterior.theta.mean() print("mean of theta: {:.3f}".format(theta_mean))
Yaps provides a lighter syntax to Stan programs. Since Yaps uses Python syntax, users can take advantage of Python tooling for syntax highlighting, indentation, error reporting, ...
Install
Yaps depends on the following python packages:
- astor
- graphviz
- antlr4-python3-runtime
- pycmdstan
To install Yaps and all its dependencies run:
pip install yaps
To install from source, first clone the repo, then:
pip install .
By default, communication with the Stan inference engine is based on PyCmdStan. To run inference, you first need to install CmdStan and set the CMDSTAN environment variable to point to your CmdStan directory.
export CMDSTAN=/path/to/cmdstan
Tools
We provide a tool to compile Stan files to Yaps syntax.
For instance, if
path/to/coin.stan contain the Stan model presented at the beginning, then:
stan2yaps path/to/coin.stan
outputs:
# ------------- # tests/stan/coin.stan # ------------- @yaps.model def stan_model(x: int(lower=0, upper=1)[10]): theta: real theta is uniform(0.0, 1.0) for i in range(1, 10 + 1): x[(i),] is bernoulli(theta) print(x)
Compilers from Yaps to Stan and from Stan to Yaps can also be invoked programmatically using the following functions:
yaps.from_stan(code_string=None, code_file=None) # Compile a Stan model to Yaps yaps.to_stan(code_string=None, code_file=None) # Compile a Yaps model to Stan
Documentation
The full documentation is available at. You can find more details in the following article:
@article{2018-yaps-stan, author = {Baudart, Guillaume and Hirzel, Martin and Kate, Kiran and Mandel, Louis and Shinnar, Avraham}, title = "{Yaps: Python Frontend to Stan}", journal = {arXiv e-prints}, year = 2018, month = Dec, url = {}, }
License
Yaps is distributed under the terms of the Apache 2.0 License, see LICENSE.txt
Contributions
Yaps is still at an early phase of development and we welcome contributions. Contributors are expected to submit a 'Developer's Certificate of Origin', which can be found in DCO1.1.txt.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/yaps/ | CC-MAIN-2019-09 | refinedweb | 782 | 56.45 |
Color-based region growing segmentation
In this tutorial we will learn how to use the color-based region growing algorithm implemented in the
pcl::RegionGrowingRGB class.
This algorithm is based on the same concept as the
pcl::RegionGrowing that is described in the Region growing segmentation tutorial.
If you are interested in the understanding of the base idea, please refer to the mentioned tutorial.
There are two main differences in the color-based algorithm. The first one is that it uses color instead of normals. The second is that it uses the merging algorithm for over- and under- segmentation control. Let’s take a look at how it is done. After the segmentation, an attempt for merging clusters with close colors is made. Two neighbouring clusters with a small difference between average color are merged together. Then the second merging step takes place. During this step every single cluster is verified by the number of points that it contains. If this number is less than the user-defined value than current cluster is merged with the closest neighbouring cluster.
The code
This tutorial requires colored cloud. You can use this one.
Next what you need to do is to create a file
region_growing_rgb_segmentation.cpp in any editor you prefer and copy the following code inside of it:
The explanation
Now let’s study out what is the purpose of this code.
Let’s take a look at first lines that are of interest:
pcl::PointCloud <pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud <pcl::PointXYZRGB>); if ( pcl::io::loadPCDFile <pcl::PointXYZRGB> ("region_growing_rgb_tutorial.pcd", *cloud) == -1 ) { std::cout << "Cloud reading failed." << std::endl; return (-1); }
They are simply loading the cloud from the .pcd file. Note that points must have the color.
pcl::RegionGrowingRGB<pcl::PointXYZRGB> reg;
This line is responsible for
pcl::RegionGrowingRGB instantiation. This class has two parameters:
- PointT - type of points to use(in the given example it is
pcl::PointXYZRGB)
- NormalT - type of normals to use. Insofar as
pcl::RegionGrowingRGBis derived from the
pcl::RegionGrowing, it can use both tests at the same time: color test and normal test. The given example uses only the first one, therefore type of normals is not used.
reg.setInputCloud (cloud); reg.setIndices (indices); reg.setSearchMethod (tree);
These lines provide the instance with the input cloud, indices and search method.
reg.setDistanceThreshold (10);
Here the distance threshold is set. It is used to determine whether the point is neighbouring or not. If the point is located at a distance less than the given threshold, then it is considered to be neighbouring. It is used for clusters neighbours search.
reg.setPointColorThreshold (6);
This line sets the color threshold. Just as angle threshold is used for testing points normals in
pcl::RegionGrowing
to determine if the point belongs to cluster, this value is used for testing points colors.
reg.setRegionColorThreshold (5);
Here the color threshold for clusters is set. This value is similar to the previous, but is used when the merging process takes place.
reg.setMinClusterSize (600);
This value is similar to that which was used in the Region growing segmentation tutorial. In addition to that, it is used for merging process mentioned in the beginning.
If cluster has less points than was set through
setMinClusterSize method, then it will be merged with the nearest neighbour.
std::vector <pcl::PointIndices> clusters; reg.extract (clusters);
Here is the place where the algorithm is launched. It will return the array of clusters when the segmentation process will be over.
Remaining lines are responsible for the visualization of the colored cloud, where each cluster has its own color. | http://pointclouds.org/documentation/tutorials/region_growing_rgb_segmentation.php | CC-MAIN-2018-17 | refinedweb | 602 | 57.87 |
In my last post, I defined the concept Equal. Now, I go one step further and use the concept Equal to define the concept Ordering.
Here is a short reminder of where I ended with my last post. I defined the concept of Equal and a function areEqual to use it.
template<typename T>
concept Equal =
requires(T a, T b) {
{ a == b } -> bool;
{ a != b } -> bool;
};
bool areEqual(Equal auto fir, Equal auto sec) {
return fir == sec;
}
I used the concept of Equal in my last post in the wrong way. The concept Equal requires that a and b have the same type but, the function areEqual allows that fir and sec could be different types that both support the concept Equal. Using a constrained template parameter instead of placeholder syntax solves the issue:
template <Equal T>
bool areEqual(T fir, T sec) {
fir == sec;
}
Now, fir and sec must have the same type.
Thanks a lot to Corentin Jabot for pointing this inconsistency out.
Additionally, the concept Equal should not check if the equal and unequal operator returns a bool but something which is implicitly or explicitly convertible to a bool. Here we are.
template<typename T>
concept Equal =
requires(T a, T b) {
{ a == b } -> std::convertible_to<bool>;
{ a != b } -> std::convertible_to<bool>;
};
I have to add. std::convertible_to is a concept and requires, therefore, the header <concepts>.
template <class From, class To>
concept convertible_to =
std::is_convertible_v<From, To> &&
requires(From (&f)()) {
static_cast<To>(f());
};
The C++ 20 standard has already defined two concepts for equality comparing:
I ended my last post by presenting a part of the type class hierarchy of Haskell.
The class hierarchy shows that the type class Ord is a refinement of the type class Eq. This can elegantly be expressed in Haskell.
class Eq a where
(==) :: a -> a -> Bool
(/=) :: a -> a -> Bool
class Eq a => Ord a where
compare :: a -> a -> Ordering
(<) :: a -> a -> Bool
(<=) :: a -> a -> Bool
(>) :: a -> a -> Bool
(>=) :: a -> a -> Bool
max :: a -> a -> a
Here is my challenge. Can I express such as relationship quite elegantly with concepts in C++20? For simplicity reasons, I ignore the functions compare and max of Haskell's type class. Of course, I can.
Thanks to requires-expression, the definition of the concept Ordering looks quite similar to the definition of the type class Equal.
template <typename T>
concept Ordering =
Equal<T> &&
requires(T a, T b) {
{ a <= b } -> std::convertible_to<bool>;
{ a < b } -> std::convertible_to<bool>;
{ a > b } -> std::convertible_to<bool>;
{ a >= b } -> std::convertible_to<bool>;
};
Okay, let me try it out.
// conceptsDefinitionOrdering.cpp
#include <concepts>
#include <iostream>
#include <unordered_set>
template<typename T>
concept Equal =
requires(T a, T b) {
{ a == b } -> std::convertible_to<bool>;
{ a != b } -> std::convertible_to<bool>;
};
template <typename T>
concept Ordering =
Equal<T> &&
requires(T a, T b) {
{ a <= b } -> std::convertible_to<bool>;
{ a < b } -> std::convertible_to<bool>;
{ a > b } -> std::convertible_to<bool>;
{ a >= b } -> std::convertible_to<bool>;
};
template <Equal T>
bool areEqual(T a, T b) {
return a == b;
}
template <Ordering T>
T getSmaller(T a, T b) {
return (a < b) ? a : b;
}
int main() {
std::cout << std::boolalpha << std::endl;
std::cout << "areEqual(1, 5): " << areEqual(1, 5) << std::endl;
std::cout << "getSmaller(1, 5): " << getSmaller(1, 5) << std::endl;
std::unordered_set<int> firSet{1, 2, 3, 4, 5};
std::unordered_set<int> secSet{5, 4, 3, 2, 1};
std::cout << "areEqual(firSet, secSet): " << areEqual(firSet, secSet) << std::endl;
// auto smallerSet = getSmaller(firSet, secSet);
std::cout << std::endl;
}
The function getSmaller requires, that both arguments a and b support the concept Ordering, and both have the same type. This requirement holds for the numbers 1 and 5.
Of course, a std::unordered_set does not support ordering. The actual msvc compiler is very specific, when I try to compile the line auto smaller = getSmaller(firSet, secSet) with the flag /std:c++latest.
By the way. The error message is very clear: the associated constraints are not satisfied.
Of course, the concept Ordering is already part of the C++20 standard.
Maybe, you are irritated by the term three-way. With C++20, we get the three-way comparison operator, also known as the spaceship operator. <=>. Here is the first overview: C++20: The Core Language. I write about the three-way comparison operator in a future post.
I learn new stuff by trying it out. Maybe, you don't have an actual msvc available. In this case, use the current GCC (trunk) on the Compiler Explorer. GCC support the C++20 syntax for concepts. Here is the conceptsDefinitionOrdering.cpp for further experiments:.
When you want to define a concrete type that works well in the C++ ecosystem, you should define a type that "behaves link an int". Such a concrete type could be copied and, the result of the copy operation is independent of the original one and has the same value. Formally, your concrete type should be a regular type. In the next post, I define the concepts Regular and SemiReg 334
Yesterday 5796
Week 34765
Month 212086
All 9685718
Currently are 125 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | https://modernescpp.com/index.php/c-20-define-the-concept-equal-and-ordering | CC-MAIN-2022-27 | refinedweb | 860 | 63.19 |
Simple lazy dependencies
Background
I do a lot of work in computational chemistry, and more specifically in chemical informatics. The scientists I work with develop ways (sometimes successfully) to understand how a small molecule will work in a living system. For example, is the molecule likely to go through the blood/brain barrier (BBB). I'll call this a "predictive model" because in a bit I'll use "model" for a different concept.
A predictive model may be built on top of other models. A great example is a consensus model which merges the results of three other models. At some point the models use values from general chemistry and not biochemistry. These are things like molecular weight, graph theoretical measures, and computed logP. These terms are called "descriptors." In turn these are based on the chemical structure. Often the input structure needs cleanup and registration: removing salts, fixing the charge, canonicalizing tautomers, and other stages of input data normalization.
Normally these steps are done imperatively. "Read structure. Register. Compute molecular weight and clogp. Compute BBB predictive model #1, #2 and #3. Merge the results into a consensus model. Save result." Keeping track of the dependencies is hard. This breaks down quickly
A simple lazy dependency system
I developed a dependency tracking system for one of my clients many years ago - CombiChem, when they were part of DuPont Pharma. It wasn't all that complicated or sophisticated. Later I implemented a similar but much simpler system for another client, AstraZeneca. It's simpler because unlike the CombiChem version the AZ one assumes that once a value is computed it will never change.
Here is the heart of the AZ system. Note that this "Model" has little to do with "predictive model."
class Model(object): def __init__(self, data=None, resources=None): if data is None: data = {} if resources is None: resources = {} self.data = data self.resources = resources def __getitem__(self, key): try: return self.data[key] except KeyError: pass resource = self.resources[key] resource(key, self) # recursive call return self.data[key] def __setitem__(self, key, value): self.data[key] = valueThe 'data' dictionary contains initial values and any other computed values, cached for later. The 'resources' dictionary contains handlers which are callables. If the requested data item is not in the data dictionary then get the associated resource and call it. The resource must set the expected property, which is returned to the caller. Setting an item saves it in the data dictionary.
The trick here is this is a recursive call. The handler gets the name of the desired property and the model object. If it needs a property it can turn around and query the model for it.
StringModel - an example
For example, here's a StringModel which takes a string and the choice of forcing it to lower case for use by the analysis handlers.
def normalize(name, model): if model["force_lowercase"]: model["_text"] = model["text"].lower() else: model["_text"] = model["text"] def num_letter_a(name, model): model["num_letter_a"] = model["_text"].count("a") def num_letter_i(name, model): model["num_letter_i"] = model["_text"].count("i") string_resources = { "_text": normalize, "num_letter_a": num_letter_a, "num_letter_i": num_letter_i, } class StringModel(Model): def __init__(self, text, force_lowercase=True): Model.__init__(self, dict(text=text, force_lowercase=force_lowercase), string_resources)Note that the "num_letter_{a,i}" functions use the internal "_text" property rather than the intial "text" property. "_text" is normalized to lower-case by default, else it's the actual input string.
Making and using a StringModel is easy:
smodel = StringModel("I came, I saw, I kicked butt.") assert smodel["num_letter_a"] == 2 assert smodel["num_letter_i"] == 4 smodel = StringModel("I came, I saw, I kicked butt.", force_lowercase=False) assert smodel["num_letter_a"] == 2 assert smodel["num_letter_i"] == 1
(Thank to Marcin Feder for spotting and correcting mistakes in my original code for the StringModel and the asserts.)
A resource may compute multiple properties at once, as
LC_VOWELS = "aeiou" LC_CONSONANTS = "bcdfghjklmnpqrstvwxyz" def compute_lc_vowel_and_consonant_counts(name, model): text = model["_text"] # the normalized string d = dict.fromkeys(text, 0) for c in text: d[c] += 1 model["num_lc_vowels"] = sum([d.get(c, 0) for c in LC_VOWELS]) model["num_lc_consonants"] = sum([d.get(c, 0) for c in LC_CONSONANTS])which is added to the "string_resources" as
string_resources = { "_text": normalize, "num_letter_a": num_letter_a, "num_letter_i": num_letter_i, "num_lc_vowels": compute_lc_vowel_and_consonant_counts, "num_lc_consonants": compute_lc_vowel_and_consonant_counts, }
The "name" parameter is the name of requested descriptor. It's most often used for similar properties which can share much of the same code through a function or class instance. For example, here's an adapter to convert value into roman numbers. The descriptor "X_roman" is the roman numeral form of the descriptor "X".
roman_numerals = dict( zip(range(16), "* I II III IV V VI VII VIII IX X X1 XII XIII XIV XV".split())) def roman(name, model): if not name.endswith("_roman"): raise TypeError("unexpected descriptor name %r" % (name,)) model[name] = roman_numerals[model[name[:-6]]]and it's registered as
string_resources = { "_text": normalize, "num_letter_a": num_letter_a, "num_letter_i": num_letter_i, "num_lc_vowels": compute_lc_vowel_and_consonant_counts, "num_lc_consonants": compute_lc_vowel_and_consonant_counts, "num_letter_a_roman": roman, "num_letter_i_roman": roman, }
Step 3: Success
Developing the first version for AZ, including testing and adding features over time like trace logging and error handling took perhaps 2 weeks altogether. That includes code review and training of the other people on the team as well as porting existing compute functions to the new system. They really like how easy it is to add new descriptors, and they continue to contract my service in part to add new resources. Usually only the hard ones which require combined knowledge of unix and chemistry.
The approach I used is sometimes called "declarative", because there are a set of rules which declare how to compute a result given input, and not an ordered set of instructions. Famous declarative systems include Prolog and the CLIPS expert system. My first published paper was in the proceedings of a CLIPS conference, in 1992, so it's not like I came up with this on my own.
Python is not a declarative language. While is it possible to introspect a function to determine most dependencies on other variables, part the elegance of my solution was to ignore that hard problem and compute dependencies only when needed. This is called lazy evaluation. The chain of dependences is pull-oriented meaning that you set up the network of resources and ask for a value. That value depends on others, which depends on other, which ... until it gets to the input data.
Compare that to most dataflow systems, like the commercial Pipeline Pilot for chemistry or EBI's open-source Taverna for bioinformatics. From what I've seen those are push oriented. When all input data is available for a node, compute values and push the result to the next nodes in the network. Drop a compound at the top of the network and it will merrily compute all properties. If you only want to compute a few properties you need to make a new network, and be ordered correctly. In a pull system you can use the same network -- AZ's has several hundred compute rule and over 1,000 properties -- and only get what you asked for.
In theory I could keep track of the resources used by each resource. Then if a dependant value changes I can remove the computed value from cache and recompute it the next time it's needed. This is pretty complicated and not something I needed for the project. It would be needed in a more interactive research environment where someone might ask "what's the change in the XYZ prediction if the pH was dropped by 0.1?" or "how does this conformational change affect the binding quality?"
Really new ideas are rare in any field. What I did was nice and useful, and new to me and my clients, but others have worked on the same problem. One solution, with dependency tracking, is PyCells developed by Ryan Forsythe and based on Ken Tilton's Cells code for Lisp. I'll talk more about it next time.
Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
| http://www.dalkescientific.com/writings/diary/archive/2006/09/07/simple_lazy_dependencies.html | CC-MAIN-2014-41 | refinedweb | 1,353 | 56.55 |
Hi All,
ArcPro 2.4.1 on a 4 cores server, 12GB RAM (server). For some reason, the Adjustment process just used 2 parallel instances instead of 4.
I tried to reset the parallel processing factor to 100% by
import arcpy
arcpy.env.parallelProcessingFactor = "100%"
However, nothing changed. it took 50mins to finish the job
With the same project on my laptop, I got 4 parallel instances on 4 cores 8GB Ram laptop and took 33 mins to finish the job.
Do you have any idea how to force ArcPro to use more resource?
Thanks,
you can set the parallel processing factor from the GP environment setting. Click Analysis->Environments-Parallel Processing | https://community.esri.com/thread/241253-ortho-mapping-arcpro-241-parallel-instance-number | CC-MAIN-2020-05 | refinedweb | 112 | 67.96 |
Currently, we are making different update processes depending on the selection of the select box.
It is a select box like the one below,
The choices are "None", "Approve", "Reject", "Applying" and the column name is identifyr_reply.
The column name of the check box on the right is change, and the check box is checked, and the select option is selected to update.
In the case of "None", "Approve" and "Reject", the column is updated without any problem and flash appears.
If i do not check or if the select is "Applying", the same flash as "None", "Approve", and "Reject" will appear.
In the 8th line of the controller, the check box is checked, and the content progresses only when "None", "Approve", and "Reject".
I don't understand why I don't get an exception error.
Excuse me, can you tell me how?
def update_overtime_notice ActiveRecord :: Base.transaction do o1 = 0 o2 = 0 o3 = 0 overtime_notice_params.each do | id, item | if item [: invoker_reply] .present? if (item [: change] == "1")&&(item [: identifyr_reply] == "None" || item [: identifyr_reply] == "Approve" || item [: identifyr_reply] == "Negative") attendance = Attendance.find (id) user = User.find (attendance.user_id) if item [: invoker_reply] == "None" o1 + = 1 item [: overtime_finished_at] = nil item [: tomorrow] = nil item [: overtime_work] = nil item [: invoker_check] = nil elsif item [: invoker_reply] == "Approval" item [: invoker_check] = nil o2 + = 1 attendance.indicater_check_anser = "Approved" elsif item [: invoker_reply] == "denial" item [: invoker_check] = nil o3 + = 1 attendance.indicater_check_anser = "denied" end attendance.update_attributes! (item) end end end flash [: success] = "[Overtime application] # {o1} none, # {o2} approved, # {o3} denied" redirect_to user_url (params [: user_id]) end rescue ActiveRecord :: RecordInvalid flash [: danger] = "The update was canceled because there was invalid input data." redirect_to edit_overtime_notice_user_attendance_url (@ user, item) end
- Answer # 1
- Answer # 2
I have another problem
if (item [: change] == "1")&&a || b || c
Part of.
if (item [: change] == "1")&&(a || b || c)
I have to do it.
Related articles
- ruby - rails transaction does not execute the process after rescue
- i want to keep data consistent using [ruby on rails] transaction and payjp
- ruby - the last db setting doesn't work when deploying rails app on aws
- ruby on rails - it is not saved in the database after registering the product
- ruby on rails - mvc on rails - heroku: how to check the database_url used for regular backup
- ruby on rails - i want to implement a function for administrator users to register luggage for general users in rails
- ruby on rails - things associated with a foreign key cannot be called in
On the contrary, did you think "Why fly to the exception"? If the condition is not met, I think there is an element that raises an exception simply by not processing. | https://www.tutorialfor.com/questions-320642.htm | CC-MAIN-2021-49 | refinedweb | 432 | 51.28 |
lesson learned: where to place your “Liquid-tags”Carla@home May 1, 2016 2:59 PM
I’ve read somewhere that you have to learn from your mistakes. Well, I did.
I’ve just started out with using “Liquid” (template language).
With the logic tag {% capture -%} you can assign a block of text, HTML code to a variable and reuse it later in your (html)-page several times. The HTML code or text between the capture tags does not render on the page.
Example
{% capture test -%}
<span>red</span>
{% endcapture -%}
<p>my {{test}} shoes</p>
<p>my {{test}} nose</p>
Output:
my red shoes
my red nose
So far, so good.
If you want to use the logic tag {% capture -%} in combination with a web app, be sure to place it after the command
{module_webapps id=“1234” filter="all" resultsPerPage="2" hideEmptyMessage="true" rowCount="" collection=“nameOfCollection” template=""}
{% for item in nameOfCollection.items -%}
{% capture nameOfCapture -%}
<h2 >{{item.['headline sub']}}</h2>
more fields from the web app
{% endcapture -%}
Now you can “call” {{nameOfCapture}} where ever you want.
{% endfor -%}
If you place {% capture -%} before the {module_webapps ….} and recall it later, anything between the logic tag won’t render and will output it like this:
{{item.['headline sub']}}
So, to recap: place any dynamically retrievable fields after the {module_webapps ….}
Kind regards, Carla
1. Re: lesson learned: where to place your “Liquid-tags”Liam Dilley May 1, 2016 3:31 PM (in response to Carla@home)
Hi Carla,
That is untrue because while you should avoid using the capture module (little reason you should ever do, its rare) where I have done so and others like assign (which you should try to use instead of capture where possible) it works totally fine.
The use case you have is one of the cases that you should not be capturing and using it like that.
What are you trying to achieve?
2. Re: lesson learned: where to place your “Liquid-tags”Carla@home May 1, 2016 4:06 PM (in response to Liam Dilley)
Hi Liam,
In an other post how to filter web app items with boolean item field checked (true) in module_webapps I was trying to filter a web-app item bases on a boolean field. That's working now. But based on the boolean field there is a "chunk" of code to be rendered twice. First I thought to place the code in a content holder but then I read the docs about the %capture -%.
In an other document or blog I read that it would be better to use this instead of assign. Since I'm not a native English speaker I perhaps misunderstood the whole discussion.
I must say I'm a lazy coder and I wanted to keep it DRY.
If I understand you correctly I always should use from now on to use assign instead of capture.
For my curiosity why is this better.
Thanks, Carla
3. Re: lesson learned: where to place your “Liquid-tags”Liam Dilley May 1, 2016 5:06 PM (in response to Carla@home)
Hi Carla,
I replied to that post but looks like it was deleted
I replied again to your other post.
You have includes Carla.
{% include "thefilepath" -%}
To achieve chunks or templates of reusable code.
If you read to use capture over assign for use... Who ever wrote that is wrong. Capture has to wait till everything is rendered before it can work so takes longer for a page to process because of that. Assign just accepts the values and data you give it. Even BC say to try avoid using capture where possible.
4. Re: lesson learned: where to place your “Liquid-tags”Carla@home May 2, 2016 12:17 AM (in response to Liam Dilley)
Hi Liam,
Btw: The post I referred to still exists.
Since we are here in the "newbie corner" and to learn. Do you use "assign" for variables (small piece of code) and "includes" where you have a lot of code?
So what is the purpose of capture? And in which situations should/could you use it?
In a BC webinar (some years old) if my eyes weren't deceiving me, I saw some Javascript and html code placed in here.
5. Re: lesson learned: where to place your “Liquid-tags”Liam Dilley May 2, 2016 3:24 AM (in response to Carla@home)
Do not think of them like that.
Its more programming logic.
Include are for large chunks of code to "modulize" your content as it were for easy management and re-use. Assign is like variables in most programming languages.
Capture is for rare cases where you need to capture rendered content, after everything has finished rendering you want to grab that block and either render it out later or manipulate it in some way.
One use example is in a for loop to do odd and even:
{% for item in site.posts %}
{% capture thecycle %}{% cycle 'odd', 'even' %}{% endcapture %}
{% if thecycle == 'odd' %}
<div>echo something</div>
{% endif %}
{% endfor %}
6. Re: lesson learned: where to place your “Liquid-tags”Carla@home May 2, 2016 4:36 AM (in response to Liam Dilley)
Thanks Liam, for your explanation. It makes more sense now. | https://forums.adobe.com/thread/2148287 | CC-MAIN-2018-22 | refinedweb | 868 | 71.55 |
.12-rc2. Kernel development has slowed
significantly while the source code management issues are being worked out
- see below.
The current -mm tree is 2.6.12-rc2-mm3. Recent changes
to -mm include a big x86-64 update, an NFSv4 update, some scheduler tweaks,
the removal of the last user of the deprecated inter_module functions, and
lots of fixes.
The current 2.4 kernel remains 2.4.30; no 2.4.31 prepatches have
been released.
Kernel development news
Quotes of the week
The guts of git
For a while, the leading contender appeared to be monotone, which supports the
distributed development model used with the kernel. There are some issues
with monotone, however, with performance being at the top of the list:
monotone simply does not scale to a project as large as the kernel. So
Linus has, in classic form, gone off to create something of his own. The
first version of the tool called "git" was announced on April 7. Since then, the
tool has progressed rapidly. It is, however, a little difficult to
understand from the documentation which is available at this point. Here's
an attempt to clarify things.
Git is not a source code management (SCM) system. It is, instead, a
set of low-level utilities (Linus compares it to a special-purpose
filesystem) which can be used to construct an SCM system. Much of the
higher-level work is yet to be done, so the interface that most developers
will work with remains unclear.
At the lower levels,
Git implements two data structures: an object database, and a directory
cache. The object database can contain three types of objects:
The object database relies heavily on SHA hashes to function. When an
object is to be added to the database, it is hashed, and the resulting
checksum (in its ASCII representation) is used as its name in the database
(almost - the first two bytes of the checksum are used to spread the files
across a set of directories for efficiency). Some developers have
expressed concerns about hash collisions,
but that possibility does not seem to worry the majority. The object itself is
compressed before being checksummed and stored.
It's worth repeating that git stores every revision of an object separately
in the database, addressed by the SHA checksum of its contents. There is
no obvious connection between two versions of a file; that connection is
made by following the commit objects and looking at what objects were
contained in the relevant trees. Git might thus be expected to consume a
fair amount of disk space; unlike many source code management systems, it
stores whole files, rather than the differences between revisions. It is,
however, quite fast, and disk space is considered to be cheap.
The directory cache is a single, binary file containing a tree object; it
captures the state of the directory tree at a given time. The state as
seen by the cache might not match the actual directory's contents; it could
differ as a result of local changes, or of a "pull" of a repository from
elsewhere.
If a developer wishes to create a repository from scratch, the first step
is to run init-db in the top level of the source tree.
People running PostgreSQL want to be sure not to omit the hyphen, or they
may not get the results they were hoping for. init-db will create
the directory cache file (.dircache/index); it will also, by
default, create the object database in .dircache/objects. It is
possible for the object database to be elsewhere, however, and possibly
shared among users. The object database will initially be empty.
Source files can be added with the update-cache program.
update-cache --add will add blobs to the object database for new
files and create new blobs (leaving the old ones in place) for any files which have changed.
This command will also update the directory cache with entries associating
the current files' blobs with their current names, locations, and
permissions.
What update-cache will not do is capture the state of the
tree in any permanent way. That task is done by write-tree, which
will generate a new tree object from the current directory cache and enter
that object into the database. write-tree writes the SHA checksum
associated with the new tree object to its standard output; the user is
well-advised to capture that checksum, or the newly-created tree will be
hard to access in the future.
The usual thing to do with a new tree object will be to bind it into a
commit object; that is done with the commit-tree command.
commit-tree takes a tree ID (the output from
write-tree) and a set of parent commits,
combines them with the changelog entry, and stores the whole thing as a
commit object. That object, in essence, becomes the head of the current
version of the source tree. Since each commit points to its parents, the
entire commit history of the tree can be traversed by starting at the
head. Just don't lose the SHA
checksum for the last commit.
Since each commit contains a tree object, the state of the source tree
at commit time can be reconstructed at any point.
The directory cache can be set to a given version of the tree by using
read-tree; this operation reads a tree object from the object
database and stores it in the directory cache, but does not actually change any files
outside of the cache. From there, checkout-cache can be used make
the actual source tree look like the cached tree object. The
show-diff tool prints the differences between the directory cache
and what's actually in the directory tree currently. There is also a
diff-tree tool which can generate the differences between any two
trees.
An early example of what can be done with these tools can be had by playing
with the git-pasky distribution by Petr
Baudis. Petr has layered a set of scripts over the git tools to create
something resembling a source management system. The git-pasky
distribution itself is available as a network repository; running
"git pull" will update to the current version.
A "pull"
operation, as implemented in git-pasky, performs these steps:
Petr's version of git adds a number of other features as well. It is a far
cry from a full-blown source code management system, since it lacks little
details like release tagging, merging, graphical interfaces, etc. A
beginning structure is beginning to emerge, however.
When this work was begun, it was seen as a sort of insurance policy to be
used until a real
source management system could be found. There is a good chance, however,
that git will evolve into something with staying power. It provides the
needed low-level functionality in a reasonably simple way, and it is
blindingly fast. Linus places a premium on
speed:
As if on cue, Andrew announced a set of 198
patches to be merged for 2.6.12:
If this test (and the ones that come after) goes well, and the resulting
system evolves to where it meets Linus's needs, he may be unlikely to
switch to yet another system in the future. So git is worth watching; it
could develop into a powerful system in a hurry.
Some git updates
A mailing list has been set up to take discussion of git off linux-kernel.
The list is called "git," and it is hosted on vger.kernel.org; sending a
message containing "subscribe git" to
majordomo@vger.kernel.org will get you onto the list. As of this
writing, the traffic is not small.
A couple of quotes from that list, that didn't quite make the "quotes of
the week":
It's perfect, I tell you.
Linus has an experimental kernel repository on kernel.org, and has
committed Andrew Morton's initial 200-patch bomb to it. It's in:
pub/linux/kernel/people/torvalds/kernel-test.git
for those who are
interested. Commits to this repository are also being broadcast to the
same "commits" list that tracked the BitKeeper repository. Here's an example patch for those interested in what
a git commit looks like, or in the ioread/iowrite API change that your
editor has not yet managed to cover on this page.
Extending netlink
Use.
FUSE hits a snag
That review has happened, and it has turned up a problem; it seems that
FUSE, in some situations, implements some rather strange filesystem
semantics.
Consider the case of a filesystem hosted in a tar archive. FUSE will
present files within the archive with the owners and permission modes
specified inside that archive. The owner and permissions of the files, in
other words, do not
necessarily have anything to do with the owner of the archive or the user
who mounted it as a filesystem. To allow that user to actually work with
files in the archive, the "tarfs" FUSE module disables ordinary permissions
checking. A file may, according to a tool like ls, be owned by
another user and inaccessible, but the user who mounted the filesystem has
full access anyway. FUSE also ensures that no other user has any
access to the mounted filesystem - not even root.
This twisting of filesystem semantics does not sit well with some kernel
developers, who tend to think that Linux systems should behave like Linux.
The FUSE semantics have the potential to confuse programs which think that
the advertised file permissions actually mean something (though, evidently,
that tends not to be a problem in real use) and it makes it impossible to
mount a filesystem for use by more than one user. So these developers have
asked that the FUSE semantics be removed, and that a FUSE filesystem behave
more like the VFAT-style systems; the user mounting the filesystem should
own the files, and reasonable permissions should be applied.
In fact, FUSE does provide an option ("allow_others") which causes
it to behave in this way. But that approach goes against what FUSE is
trying to provide, and raises some security issues of its own. FUSE hacker
Miklos Szeredi sees the issue this way:
In this view, a FUSE filesystem is very much a single-user thing. In some
cases, it really should be that way; consider a remote filesystem
implemented via an ssh connection. The user mounting the
filesystem presumably has the right to access the remote system, on the
remote system's terms. The local FUSE filesystem should not be trying to
figure out what the permissions on remote files should be. Other users on
the local system - even the root user - may have no right to access the
remote system, and should not be able to use the FUSE filesystem to do so.
It's not clear where this discussion will go. There are some clear reasons
behind the behavior implemented by FUSE, and it may remain available,
though, perhaps, not as a default, and possibly implemented in a different
way. The little-used Linux namespace capability has been mentioned as a
way of hiding single-user FUSE filesystems, though there may be some
practical difficulties in making namespaces actually work with FUSE. Until
the core filesystem hackers are happy, however, FUSE is likely to have a
rough path into the mainline.
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Next page: Distributions>>
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/130865/ | crawl-002 | refinedweb | 1,943 | 60.35 |
#include <gromacs/utility/flags.h>
Template class for typesafe handling of combination of flags.
This class is not used publicly, but is present in an installed header because it is used internally in public template classes.
Currently, it is not completely transparent, since or'ing together two
FlagType flags does not automatically create a FlagsTemplate object. Also, some operators and more complex operations (like testing for multiple flags at the same time) are missing, but can be added if the need arises.
Tests if the given flag is set.
Note that if
flag has more than a single bit set, then returns true if any of them is set. | https://manual.gromacs.org/current/doxygen/html-full/classgmx_1_1FlagsTemplate.xhtml | CC-MAIN-2021-17 | refinedweb | 109 | 62.17 |
Example
comboBox validation
comboBox validation dear sir.
i want to know . how to validate radio button in struts using xml validation.
thanks
combobox Tag (Form Tag) Example
Combobox Tag (Form Tag) Example
In this section, we are going to describe the combobox
tag. The combo box is basically... together
created using the list.
The tag <s:checkboxlist name="Animals
values in combobox - Java Beginners
values in combobox how to fill values in combo box i.e. select tag in html using javascript?
Hi Friend,
Try the following code:
ComboBox
var arr = new Array();
arr[0] = new Array("-select-");
arr[1
Combobox in HTML
Combobox in HTML is used to display a drop-down list of some options from which one can be selected.
<select> tag is supported in all the web browsers.
<option> tag is used inside the <select> tag that displays
to jsp combobox exmple
to jsp combobox exmple to jsp combobox exm
jsp combobox
jsp combobox ihave three tables in database country,state and city..if i select one country throug combo box than other combobox show state only select country ...than city how i can implement through
how to item
fill combobox at runtime jsp
fill combobox at runtime jsp i have 1 combobox in jsp which... another combobox below it, i want it to be filled on the basis of selected value of 1st combobox...plz help with code
Passing values in ComboBox from XML file
are inserting the values in the combobox so we are
using the <select> tag...Passing values in ComboBox from XML file
In this tutorial we are going to know how we can pass a
values in ComboBox by using XML.
This example
two linked combobox
two linked combobox give jsp example of two combo box when i select state in one combobox in second combo box cities will display according to state which i select
cookie and session dependency
of login and logout action.
We are using Struts2, apache tomcat 6.5, j2ee, j2se 1.6.... - 3.1 jar
echache - 1.2.3.jar
struts-core-2.1.8.1.jar>
Combobox program - Java Beginners
Combobox program import javax.swing.*;
import java.awt.*;
public class SwingFrame1
{
public static void main(String[] args) throws Exception... in combobox a new text box have to open beside that combo box.. Hi
datagrid including combobox
Combobox jsp from 0 to 10
Combobox jsp from 0 to 10 Hi guys please help me to write a very easy program using jsp to display value in combobox from 0 to 10. How to write the for loop? Please help.Thank!!!
<html>
<
FLEX 3 Combobox - Development process
and just started tinkering with Adobe Flex. I have downloaded your combobox... a combobox - once selected - it will show the airport code into a text field.
Example:
ComboBox
Chicago,Illinois
Boise, Idaho
Buffalo, New York
Baltmore Connect J ComboBox with Databse - Java Beginners
How to Connect J ComboBox with Databse How to Connect J ComboBox with Databse Hi Friend,
Do you want to get JComboBox values from database?Please clarify this.
Thanks
Flex ComboBox controls
is created by using <mx:ComboBox>tag.
ComboBox value are provided through...Flex ComboBox Control:-
The ComboBox control is a Data-Driven control in flex. ComboBox is a drop
down list which we can display a list of value and user
how to insert the selected item of combobox in mysql - XML
how to insert the selected item of combobox in mysql hi,
i have to insert the selected item from combobox into MYSQL database.pls provide... of combobox in mxml.how to insert selecteditem in database.
pls suggest me i have
Jdbc Login Page Validation using Combobox
want to login by validating with combobox....The link which you send
its.......please help me i dont know how to validate the combobox for diffrent cities please help me by validating with combobox....
<form action typed by the user. For example, ["Callisto", "Charls", "chim"] are the data
Java swing: get selected value from combobox
Java swing: get selected value from combobox
In this tutorial, you will learn how to get selected value from combobox.
The combobox provides the list... of programming languages to the combobox using addItem() of
JComboBox class. We have
Connect J ComboBox with Databse - Java Beginners
Connect J ComboBox with Databse Hello Sir I want To Connect MS Access Database with JComboBox ,
when I Select any Item from Jcombobox Related Records will Display in to JTextBox
eg
when i select MBA then fees ,Duration
Application |
Struts 2 |
Struts1 vs
Struts2 |
Introduction... Manager on Tomcat 5 |
Developing Struts PlugIn |
Struts
Nested Tag...) |
Date Tag (Data Tag) |
Include Tag (Data Tag) in Struts 2 |
Param Tag (Data:student;
@table:stu_info;
Combobox values:(class1,class2,class3);
textbox1
struts2 - Struts
struts2 hello, am trying to create a struts 2 application that
allows you to upload and download files from your server, it has been challenging for me, can some one help Hi Friend,
Please visit the following
HOW TO DISPLAY ID IN TEXTBOX BASED ON COMBOBOX SELECTION IN A SAME PAGE
Jdbc Login Page Validation using Combobox
Struts
is not only thread-safe but thread-dependent.
Struts2 tag libraries provide...Struts Why struts rather than other frame works?
Struts is used into web based enterprise applications. Struts2 cab be used with Spring
pbml in inserting selected item from combobox into mysql - XML
pbml in inserting selected item from combobox into mysql hi,
i have to insert the selected item from combobox into MYSQL database.pls provide... of combobox in mxml.how to insert selecteditem in database.
pls suggest me i have
Flex ComboBox Component
Adobe Flex Combo Box Component:
The ComboBox component of Flex is similar to the select option of HTML code.
This component
also has editable mode, in which a user can type on the top of the list. We use
this component inside
Struts2 Actions
generated by a Struts
Tag. The action tag (within the struts root node of ... a loosely coupled
system, Struts2 uses a technique called
dependency...Struts2 Actions
how to retrieve data from database using combobox value without using request.getParameter in jsp - JSP-Servlet
how to retrieve data from database using combobox value without using request.getParameter in jsp Answer pl
Struts2 Actions
is usually generated by a Struts
Tag.
Struts 2 Redirect Action
In this section, you will get familiar with struts 2 Redirect action...
Struts2 Actions
Struts2 Actions
.
Struts1/Struts2
For more information on struts visit to : Hello
I like to make a registration form in struts inwhich
Struts2 Validation Problem - Struts
Struts2 Validation Problem Hi,
How to validate field that should not accept multiple spaces in Struts2?
Regards,
Sandeep Hi... in the browser having the example of handling the error in struts 2.
http
I am not able to display the selected value of my combob
retrieve Dept Name from table dept and retrieve list of employee from emp table for that dept in combobox
for that dept in combobox I have an 2 textboxes and 1 combobox in my... and display that list in combobox.
For example, In HTML page, we have Dept ID Textbox (Input Paramater), Dept Name Textbox, and Employee combobox.
If user enter
Struts1 vs Struts2
Struts1 vs Struts2
Struts2 is more... differences between struts1 and struts2
Feature
Struts 1
Struts 2
Action classes
how to prepopulate data in struts2 - Struts
how to prepopulate data in struts2 I wanted to show the data from database using
New to struts2
New to struts2 Please let me know the link where to start for struts 2 beginners
Struts 2 Tutorials comboBox from first frame to textField on the second frame? please
Nitobi ComboBox V3
Nitobi ComboBox V3
.... Internationalization & Accessibility
9. Image Support: Display images in the combobox... declarative menu attributes, menu items can be added to any combobox easily.
13
struts html tag - Struts
struts html tag Hi, the company I work for use an "id" tag on their tag like this: How can I do this with struts? I tried and they don't work
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/34631 | CC-MAIN-2013-20 | refinedweb | 1,379 | 62.68 |
An software vendors developing all kinds of MathML tools. the working group monitors the public www-math@w3.org mailing list, and will attempt answer.
embedding mechanisms.
mathElement:
macrosis provided to make possible future development of more streamlined, MathML-specific macro mechanisms.
modeattribute specifies whether the enclosed MathML expression should be rendered in a display style or an in-line style. The default is
mode="inline". This attribute is deprecated in favor of the standard CSS2 `display' property with the analogous
blockand
inlinevalues. Document Object Model working group in an effort to provide better communication between embedded MathML renderers and browsers (see appendix E [Document Object Model for MathML]).
The basic requirements for communication between an embedded MathML and a browser include:
In browsers where MathML is not natively supported,:
which corresponds to the HTML anchor element a. In HTML, anchors are used both to make links, and to provide locations to link to. MathML, as an XML application, defines links by the use of the XLink mechanism. XML Linking Language (XLink) working draft. The reader is cautioned that this is as present still a working draft, and is therefore subject to future revision. Since the MathML linking mechanism is defined in terms of the XML linking specification, the same proviso holds for it as well.
A MathML element is designated as a link by the presence of the
xlink:href attribute. To use the
xlink:href attribute, it is also necessary to
declare the xlink namespace. Thus, a typical MathML link might look like:
<mrow xmlns: ... </mrow>
Issue (add-xlink-to-DTD): If we say this, we ought to add these attributes to all linkable elements in the DTD. See section 5.1 of the XLink working draft.. (See for further W3C activity in this area.):
Beyond the above definitions, the MathML specification makes no demands of individual processors. In order to guide developers, the MathML specification includes advisory material; for example, there are suggested rendering rules included in Chapter 3.. at the World Wide Web Consortium. Both XSL and CSS are working to incorporate greater support for mathematics. Further, XSL can be used to provide XML style sheet and macro facility activities.
Some of the possible uses of MathML macros. | http://www.w3.org/TR/1999/WD-MathML2-19991201/chapter7.html | CC-MAIN-2014-35 | refinedweb | 372 | 57.98 |
Using NetBeans™ IDE 3.6
Your Guide to Getting Work Done in NetBeans IDE
Welcome to the Using NetBeans™ IDE 3.6 guide. This guide is designed to
give you a more detailed introduction to the IDE than available in the
Getting Started tutorial. Various aspects of the IDE are explored in detail.
This guide is geared mostly for newcomers to NetBeans IDE, whether they
are new to Java, new to using IDEs, or experienced IDE users that are
switching over from a different IDE.
This guide covers the following:
● Setting Up Your Project
  ❍ Basic IDE Concepts
    ■ The Filesystems Window
    ■ Projects in the IDE
  ❍ Accessing Source Directories
    ■ Filesystems and the Java Classpath
    ■ Correctly Mounting Java Packages
    ■ Mounting Resource Libraries
  ❍ Advanced Project Setup
● Creating and Editing Java Source Code
  ❍ Creating Java Files
    ■ GUI Templates and Java Templates
  ❍ Editing Java Files in the Source Editor
    ■ Using Abbreviations, Word Matching, and Code Completion
    ■ Configuring Code Completion
    ■ Adding Fields, Bean Properties, and Event Listeners
    ■ Working With Import Statements
    ■ Search and Selection Tools
    ■ Formatting Java Source Code
  ❍ Navigating Between Documents
  ❍ Configuring the Editor
● Compiling Java Programs
  ❍ Compiling Files
  ❍ Working with Compiler Types
    ■ Specifying the Compiler Type for Files and Projects
    ■ Creating Custom Compiler Types
    ■ Setting the Target Directory for .class Files
  ❍ Cross-Compiling Between Java Platforms
  ❍ Using JavaMake to Manage Class Dependencies
● Debugging Java Programs
  ❍ Basic Debugging
    ■ Starting a Debugging Session
    ■ Debugger Windows
    ■ Stepping Through Your Code
  ❍ Working With Breakpoints
    ■ Setting a Breakpoint
    ■ Setting Conditions for Breakpoints
    ■ Customizing the Output for a Breakpoint
    ■ Breakpoint Types
  ❍ Setting Watches
● Packaging and Deploying Your Applications
  ❍ Creating a JAR File
    ■ Creating a JAR Recipe
    ■ Compiling and Creating the JAR File
    ■ Mounting and Checking the JAR File
  ❍ Modifying a JAR File
    ■ Adding Files to a JAR Recipe
    ■ Modifying the Manifest File
    ■ Setting the JAR Content Filter
  ❍ Executing a JAR File
● Using Javadoc
  ❍ Integrating Java API Documentation into the IDE
    ■ Making Javadoc Documentation Available in the IDE
    ■ Searching and Displaying Javadoc Documentation
    ■ Configuring the External Browser to Display Javadoc Files
  ❍ Adding Javadoc Comments to Your Code
  ❍ Generating Javadoc Documentation
    ■ Specifying an Output Directory for Javadoc Files
● Team Development With CVS
  ❍ Checking Out Sources
    ■ Mounting a CVS Filesystem
    ■ Specifying Advanced Command Options
  ❍ Configuring a CVS Filesystem
    ■ Setting the Relative Mount Point
    ■ Shortening and Hiding File Status
  ❍ Working With CVS Files
    ■ Using the Graphical Diff Tool
    ■ Creating and Applying Patches
    ■ Resolving Merge Conflicts Graphically
  ❍ Making Safe Commits
● Configuring the IDE
  ❍ Setting IDE Default Settings
    ■ Configuring IDE Startup Switches
    ■ Configuring General Java Settings
    ■ Working with Unknown File Types
  ❍ Enabling and Disabling IDE Functionality
    ■ Disabling Modules
    ■ Installing New Modules from the Update Center
  ❍ Boosting NetBeans Performance
    ■ Tuning JVM Switches for Performance
Setting Up Your Project
This section covers the basics of
correctly setting up your IDE to
start developing your projects.
The process of managing project
contents and properties is centered around the Filesystems window. The
most common tasks in setting up a project are adding source files to the
project, making resource libraries available to the project, correctly setting
up the Java classpath, and configuring output directories for compiled
classes and Javadoc documentation.
This section covers:
● Basic IDE Concepts - The Filesystems window and working with projects.
● Accessing Source Directories - Adding source files and directories to a project, understanding filesystems, configuring the Java classpath, correctly mounting Java packages, and making class libraries (JAR files) available to the project.
● Advanced Project Setup - An example of a more advanced IDE project with two separate output directories for compiled classes and one for Javadoc documentation.
Basic IDE Concepts
Before you start setting up your project, let's take a minute to get
acquainted with some of the basic concepts involved with using the IDE.
The Filesystems Window
The starting point for development in the IDE is the Filesystems window. The Filesystems window is where you organize your project contents, access and run commands on individual files, and view the structure of your source files. The Filesystems window contains all of the directories, packages, and archive files that you have added to your project.
When you first run the IDE, the Filesystems window contains the NetBeans
sample directory with some sample code. Each source file has its own
Filesystems window node. These nodes have:
● Contextual Menu Commands - You can run commands on files by right-clicking them and choosing from the contextual menu. The commands that are available vary depending on the type of node you are working with.
● Properties - Choose Window > Properties (Ctrl-1) to open a context-sensitive Properties window. The Properties window always shows the properties of the component in the NetBeans user interface that has the focus. If you want to open a persistent Properties window that only shows the properties of one Filesystems window node, right-click the node and choose Properties. If a property contains an ellipsis button (...), you can click the button to access a special property editor that you can use to define the property visually.
● Subnodes - You can expand most nodes in the Filesystems window to view subnodes representing the internal structure of the node's file. These subnodes often have their own properties and contextual menu commands.
The Filesystems window does not show a node for every file in a mounted
directory. For example, for the compiled ColorPicker form object, the source
directory contains the ColorPicker.java source file, the ColorPicker.form file used to build its GUI in the IDE, and the ColorPicker.class
compiled class. The IDE hides .form and .class files by default, so only
one node is shown for ColorPicker. You can also choose to hide files by type
and extension.
Projects in the IDE
A project is the basic unit of work in the IDE. It includes all the files with
which you are working and the IDE settings that you apply to those files.
The NetBeans IDE has a very straightforward projects system. You always
have one project open at a time. Everything that is in the Filesystems
window is part of the currently open project. You add directories and files to
the project by mounting them as filesystems in the Filesystems window.
Note: You do not have to create a new project for each application you are
working on. You can add the source files for multiple applications to the
Filesystems window and work on them at once. Each project, however, has
only one classpath and set of IDE settings.
Most IDE settings are applied at one of two levels: for the whole project or
for individual files. Project-wide settings are managed in the Options
window. You can open the Options window by choosing Tools > Options in
the Main window. You configure settings on a file using the Properties
window.
When you open the IDE, the default project opens with some sample source
mounted in the Filesystems window. If you do not need the examples, you
can remove the sample filesystem by right-clicking the filesystem node and
choosing Unmount Filesystem. You can also create an empty project by
choosing Project > Project Manager and clicking the New button.
Accessing Source Directories
As mentioned before, you access source directories and files in the IDE by
mounting them in the Filesystems window. To mount a local directory as a
filesystem, go to the Filesystems window, right-click the root Filesystems
node and choose Mount > Local Directory.
Each project also contains some hidden filesystems that are added by the
IDE. These filesystems contain the JDK sources which you can view in
debugging sessions, common Java libraries, and Javadoc documentation
libraries. You can view all of the filesystems in your project by right-clicking
the root Filesystems node and choosing Customize.
Note: If the sources you are working with are under version control, you
can mount them as a VCS filesystem. VCS filesystems let you see files'
versioning status and run VCS commands right in the Filesystems window.
For more information, see Team Development With CVS.
Filesystems and the Java Classpath
Mounting filesystems not only defines the contents of a project, it also
defines the Java classpath for the project. Unlike in command-line
development, the IDE ignores the CLASSPATH variable on your system and
builds a unique internal classpath for each of your projects. This classpath is made up of all the mounted filesystems, including hidden filesystems and filesystems that are mounted by default by the IDE.
Note: You can view all of a project's filesystems, including hidden filesystems, by right-clicking the root Filesystems node and choosing Customize.
In general, whenever you
want to add something to the classpath, you should mount it in the
Filesystems window. This includes Java libraries (JAR files) that your code
depends on (see Mounting Resource Libraries). You can also customize
the classpath for various operations, like running, compiling, and
debugging, using the filesystem's property sheet. Go to the filesystem's
property sheet and set the Capabilities properties accordingly. For example,
you should exclude filesystems that only contain Javadoc documentation
from the classpath for running, compiling, and executing.
In addition to building the classpath, mounting files in the Filesystems
window also makes them available for other IDE tools such as code
completion.
Correctly Mounting Java Packages
Directories that contain Java source code must be mounted at the package
root, which is the directory that contains the default package. The sources
in the directories must be in packages corresponding to their position
relative to the mount point. If a filesystem of Java sources is mounted at
the wrong point, your source code will contain error markers in the Source
Editor and will not compile.
If you have multiple source trees with the package root of each tree grouped together under one directory, you have to mount each package root separately. For example, suppose a MyProject directory contains two trees: src is the package root for the class com.myapp.MyApp.java, and lib is the package root for the class com.mylib.MyLib.java. In this case, you cannot simply mount MyProject - you have to mount src and lib separately.
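To make the rule concrete, here is a minimal sketch of a source file that sits under the src mount point; the class body is invented purely for illustration:

// MyProject/src/com/myapp/MyApp.java
// "src" is mounted as the package root, so the package statement must
// mirror the directory path below the mount point: src/com/myapp -> com.myapp
package com.myapp;

public class MyApp {
    public static void main(String[] args) {
        System.out.println("Mounted at the correct package root.");
    }
}

If you mounted MyProject instead, the IDE would expect this file to be in the package src.com.myapp, which does not match the declared package, and the file would be flagged with an error marker.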
You can add more than one directory at a time by holding down the Control
key and selecting multiple directories in the Mount wizard.
Mounting Resource Libraries
If your code depends on any resource libraries, you have to mount the
libraries in order to add them to the project's Java classpath. Resource
libraries can be contained in regular directories or, more commonly, in JAR
files. You can mount a JAR file as a filesystem in the IDE by right-clicking
the root filesystem node and choosing Mount > Archive File. Mounting a
resource library also makes all of its contents available for code completion.
If you do not need to browse through the files in your resource library, you
can hide the filesystem by setting its Hidden property to True.
Note: You can display a hidden filesystem by right-clicking the root
Filesystems node and choosing Customize. In the customizer, select the
hidden filesystem and set its Hidden property to False.
Advanced Project Setup
Now let's look at a more complicated project structure and how to mount it
correctly in the IDE. We will be discussing a few concepts we haven't gone
over yet, but you can use the links in the text to jump ahead to any
sections that you are not clear about.
Here is the structure of our project:
MyProject
    myApp
        src     // contains sources for myApp
        lib     // contains binary libraries (JAR files) used by myApp
    myLib
        src     // sources for a library used by myApp and other applications
        lib     // contains binary libraries (JAR files) used by myLib
    build
        myApp   // output dir for compiled classes for myApp
        myLib   // output dir for compiled classes for myLib
    lib         // contains binary libraries (JAR files) used by both myApp and myLib
    doc         // contains generated Javadoc for the project
First, mount the myApp/src and myLib/src directories as separate
filesystems. These are our main development directories - except for the
doc directory, they will be the only filesystems that are visible in our
Filesystems window when we are done setting up our project.
Next, mount the output directories for our classes, build/myApp and build/
myLib, as separate filesystems. There is no reason to keep the output
directories visible in the Filesystems window, since you can execute files
from their source nodes in the development directories. Hide the filesystems
by setting their Hidden property to True.
Now let's set up the compiler types that will place the compiled classes for
myApp and myLib in the correct build directories. First, go to the Options
Window and make a copy of External Compilation called myApp
Compilation. To set this compiler type to store compiled classes in the
build/myApp directory, set the compiler type's Target property to build/
myApp. Then create another copy of External Compilation called myLib
Compilation and set its Target property to build/myLib.
Now we are ready to assign our custom compiler types to the sources in our
source tree. This is a bit tricky, since you cannot just select a filesystem or
group of files in the Filesystems window and set the Compiler property for
all of them. Instead, we will search for all Java objects in each filesystem
and assign the compiler type from the Search Results window.
First, choose Window > Properties (Ctrl-1) to open the Properties window.
Then right-click the myApp/src filesystem and choose Find. Click the Type
tab and select Java Source Objects, then click Search. The Search Results
window returns all the Java source files in myApp/src. Select all of the
sources, then in the Properties window set the Compiler property to myApp
Compilation. Follow the same process to assign myLib Compilation to all the
Java sources in myLib/src.
Next we can set up our Javadoc output directory. Mount the doc directory
as a filesystem. In the filesystem's property sheet, set the Use in Execution,
Use in Compiler, and Use in Debugger properties to False and set the Use as
Documentation property to True. Then go to the Options window and set
the IDE to use it as the default Javadoc output directory for the
project. The directory will then house all of the Javadoc documentation you
generate for the source you are developing. This documentation will also be
available for Javadoc index searches in the IDE.
Finally, mount your resource libraries in the Filesystems window. In our
example, the libraries are stored in JAR files throughout our source tree, so
you have to mount them with the Mount > Archive File command. If you do
not need to browse through the code in these libraries, hide the filesystems
so that they do not clutter up your Filesystems window.
Creating and Editing Java Source Code
Creating and editing Java source code is the most
important function that the IDE serves. After all, that's
probably what you spend most of your day doing. NetBeans
IDE provides a wide range of tools that can compliment any
developer's personal style, whether you prefer to code everything by hand or want the IDE to
generate large chunks of code for you.
This section covers the following topics:
● Creating Java files - Using the New wizard and the IDE's templates to create new files, GUI form templates versus Java source templates.
● Editing Java files in the Source Editor - Using code completion and abbreviations, generating bean properties and event listeners, working with import statements, search and selection tools, and formatting Java code.
● Navigating between documents - switching between open files, cloning the view of a file, and splitting the Source Editor.
● Configuring the Source Editor - customizing the Source Editor to fit your development style.
Creating Java Files
NetBeans IDE contains templates and wizards that you can use to create all kinds of source files,
from Java source files to XML documents and resource bundles.
The easiest way to create a file is to right-click the directory
node in the Filesystem window where you want to create
the file and choose from the New submenu in the node's
contextual menu. The New submenu contains shortcuts to
commonly-used templates and an All Templates command
that you can use to choose from all NetBeans templates.
To demonstrate some of the IDE's source creation and
editing features, let's recreate the ColorPreview class that
comes with the colorpicker example in the IDE's sample
code. Right-click any directory in your mounted filesystems and choose New > Java Class. Name the
file ColorPreview and click Finish. The file opens in the Source Editor.
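The Java Class template generates a skeleton along the following lines; the exact header comment and author tag depend on your template settings, so treat this as an approximation:

public class ColorPreview {

    /** Creates a new instance of ColorPreview */
    public ColorPreview() {
    }

}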
GUI Templates and Java Templates
If you want to visually edit a Java GUI form using the Form Editor, you cannot create the file from the plain Java Class template as we did here. The form must be created from a GUI form template, such as the JPanel template.
Editing Java Files in the Source Editor
The Source Editor is your main tool for editing source code. It provides a wide range of features that
make writing code simpler and quicker, like code completion, highlighting of compilation errors,
syntax highlighting of code elements, and advanced formatting and search features.
Although we talk about the Source Editor as one window, it can hold any number of open documents, each in its own tab. You open a document in the Source Editor by double-clicking its node in the Filesystems window.
Note: Double-clicking a Java form node in the Filesystems window opens two tabs in the Source Editor:
a source tab containing the Java source code for the form, and a Form Editor tab showing the design-
time view of the form. To edit the source code for a Java form without opening the Form Editor, right-
click its node and choose Edit.
Using Abbreviations, Word Matching, and Code Completion
The Source Editor provides many features that spare you from having to enter long Java class names
and expressions by hand. The most commonly used of these features are abbreviations, code
completion, and word matching.
Code completion in the Java Source Editor lets you type a few characters and then choose from a list of possible classes, methods, variables, and so on to automatically complete the expression. The Source Editor also includes a Javadoc preview window that displays the Javadoc documentation for the current selection in the code completion box, if any exists. The Javadoc is drawn from the compiled source files mounted in the IDE.
Abbreviations are short groups of characters that expand into a full word or phrase when you press the space bar. For example, if you enter psfs and press the space bar, it expands into public static final String. For a full list of the IDE's default abbreviations, see the Abbreviations property editor described below.
You can also add your own custom abbreviations for each type of editor. In the Options window,
select Editing > Editor Settings > Java Editor and open the property editor for the Abbreviations
property. You can use the Abbreviations property editor to add, remove, and edit the abbreviations
for Java files.
Word matching is a feature that lets you type a few characters of a word that appears elsewhere in
your code and then have the Source Editor generate the rest of the word. Type a few characters and
press Ctrl-L to generate the next matching word or Ctrl-K to generate the previous matching word.
As a quick exercise, let's make ColorPreview extend JPanel. Put the insertion point after
ColorPreview in the class declaration, then type ex and press the space bar to expand the
abbreviation into extends. Then type the first few letters of javax. The code completion box should
pop up after a few seconds. If it does not, you can always manually open it by pressing Ctrl-Space.
Use the code completion box to enter javax.swing.JPanel.
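When the abbreviation has expanded and code completion has filled in the class name, the class declaration should read roughly like this:

public class ColorPreview extends javax.swing.JPanel {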
Configuring Code Completion
The IDE maintains a code completion database which it uses to provide suggestions for code
completion and other features. The code completion database contains classes from the J2SE version
1.4, other commonly used APIs like the Servlet and XML APIs, and the sources in all of the
filesystems you have mounted in your project. Whenever you mount a filesystem, the IDE
automatically adds all of the filesystem's public and protected classes to the project's code
completion database. You can also right-click the filesystem and choose Tools > Update Code
Completions to configure which of the filesystem's classes are available for code completion.
In the Options window, you can disable and enable code completion and set the length of the pause
before the code completion box appears in the Source Editor. Select Editing > Editor Settings > Java
Editor and set the Auto Popup Completion Window property and the Delay of Completion Window
Auto Popup property accordingly.
You can also turn off the Javadoc preview box for code completion. Select Java Editor and uncheck
the Auto Popup Javadoc Window property.
Adding Fields, Bean Properties, and Event Listeners
Even if you prefer to write your code the old-fashioned way, the NetBeans Java editor has some cool
code generation features that you may find handy, especially when dealing with bean properties and
event listeners.
Let's start by adding some of the fields for our colors in ColorPreview. Go to the first line after the
class declaration and type in the following code:
private int red;
Now let's turn this ordinary field into a bean property by making some getter and setter methods for
it. Right-click anywhere in the field declaration and choose Tools > Generate R/W Property for Field.
The following code is generated in the file:
public int getRed() {
    return red;
}

public void setRed(int red) {
    this.red = red;
}
The methods now show up under the Methods node. The Bean Patterns node now also contains a
bean property node for red.
Now let's add both the field and the get and set methods at the same time. In the Filesystems
window, right-click the Bean Patterns node for ColorPreview and choose Add > Property. In the
dialog, enter green for the name and int for the type, then check Generate Field, Generate Get
Method, and Generate Set Method and click OK. The following code is added to the file:
private int green;

public int getGreen() {
    return this.green;
}

public void setGreen(int green) {
    this.green = green;
}
So far, so good. But to fully generate a working bean that can get and set the value of each of the
color bean properties and notify the caller of its changes, we have to add event listeners to each of
the set methods. There are two ways to do this. You could right-click the Bean Patterns node and
choose Add > Multicast Event Source to add the java.beans.PropertyChangeListener methods,
then enter the rest of the source by hand.
An easier way is to generate all of the necessary code when you create the bean properties. First,
let's get rid of all of the methods and fields we have created so far. You can do so by deleting the
nodes from the Filesystems window or just by deleting the code in the Source Editor.
Next, right-click the Bean Patterns node and choose Add > Property. Enter red for the name, int for
the type, and select the Bound checkbox. Now you can set the dialog to generate not just the field
and methods for the property, but also the property change support code. Click OK to generate the
following code in the Source Editor:
private int red;

private java.beans.PropertyChangeSupport propertyChangeSupport = new java.beans.PropertyChangeSupport(this);

public void addPropertyChangeListener(java.beans.PropertyChangeListener l) {
    propertyChangeSupport.addPropertyChangeListener(l);
}

public void removePropertyChangeListener(java.beans.PropertyChangeListener l) {
    propertyChangeSupport.removePropertyChangeListener(l);
}

public int getRed() {
    return this.red;
}

public void setRed(int red) {
    int oldRed = this.red;
    this.red = red;
    propertyChangeSupport.firePropertyChange("red", new Integer(oldRed), new Integer(red));
}
Then all you have to do is repeat the process for the green and blue properties and change the
ColorPreview constructor to the following:
public ColorPreview() {
    propertyChangeSupport = new java.beans.PropertyChangeSupport(this);
}
And that's it! You've got a nice working bean ready to be used by the ColorPicker program.
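To see the bean in action, a caller such as the ColorPicker form could register a listener and change a property. This fragment is only a sketch, and the listener body and variable names are invented for illustration:

ColorPreview preview = new ColorPreview();

// Be notified whenever one of the bound color properties changes.
preview.addPropertyChangeListener(new java.beans.PropertyChangeListener() {
    public void propertyChange(java.beans.PropertyChangeEvent evt) {
        System.out.println(evt.getPropertyName() + " changed to " + evt.getNewValue());
    }
});

preview.setRed(128); // fires a PropertyChangeEvent named "red"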
Working With Import Statements
Whenever the IDE generates Java source code, it uses the fully qualified names for all the elements it
creates. There are two tools that you can use to add import statements to your code and change
between simple names and fully qualified names: the Fast Import command and the Import
Management Tool.
To use the Fast Import command, place the insertion point on any class name and press Alt-Shift-I.
In the following dialog box, specify whether to import the class or the entire package.
Unfortunately, the Fast Import command does not change all fully qualified names for the class to
simple names. A more complete tool for handling import statements is the Import Management Tool
(IMT). By default, the IMT changes all occurrences of fully qualified names into simple names and
creates a single-name import statement for each.
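For example, a field declaration generated with fully qualified names, like the first fragment below, would end up looking like the second fragment after the tool runs with its defaults; the surrounding class is omitted here:

// Before running the Import Management Tool
private java.beans.PropertyChangeSupport propertyChangeSupport =
    new java.beans.PropertyChangeSupport(this);

// After: simple names, with a single-name import added at the top of the file
import java.beans.PropertyChangeSupport;

private PropertyChangeSupport propertyChangeSupport =
    new PropertyChangeSupport(this);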
Right-click anywhere in the ColorPreview file in the Source Editor and choose Tools > Import
Management Tool. The first page of the IMT shows any unresolved identifiers in your file. These can
occur when you incorrectly enter the class name or when you are referencing code that you do not
have mounted in your project yet. You can enter a new package name to import for the classes, or
import the classes as they are written.
At this point, you can click Finish immediately to run the IMT with its default settings. You can also
click Next to further customize the tool's actions. For example, if you are importing several classes
from a single package, you may want to import the entire package. You can do so on the Removed
Unused Imports page of the wizard. Change the Action column for the package from Use Single-
Name Import to Use Package Import.
Search and Selection Tools
When you are dealing with a large group of files, the ability to quickly find, navigate to, and select
certain strings or files is critical to your productivity. The following list gives you a quick overview of
the search and selection tools that are available in the Source Editor:
Ctrl-F: Search for text in the currently selected file. The Source Editor jumps to the first occurrence of the string and highlights all matching strings. You can use F3 to jump to the next occurrence and Shift-F3 to jump to the previous occurrence. There is also a command to turn off search result highlighting.
Alt-Shift-O: Open the Fast Open dialog box, which lets you quickly open a file. Start typing a class name in the dialog box. As you type, all files that match the typed prefix are shown. The list of files is generated from the project's mounted filesystems.
Alt-O: Go to source. This shortcut opens the file where the item at the insertion point is defined.
Alt-G: Go to declaration. Similar to the previous shortcut, this opens the file where the variable at the insertion point is declared.
Ctrl-G: Go to line. Enter any line number for the current file and press Enter to jump to that line.
Ctrl-F2: Add a bookmark to the line of code that the insertion point is currently on. If the line already contains a bookmark, this command removes the bookmark.
F2: Go to the next bookmark.
Alt-L: Go to the next location in the jump list for the currently selected file. The jump list is a history of all locations where you made modifications in the Editor.
Alt-K: Go to the previous location in the jump list for the currently selected file.
Alt-Shift-L: Go to the next jump list location in all files (not the currently selected file).
Alt-Shift-K: Go to the previous jump list location in all files.
Formatting Java Source Code
The IDE automatically formats your code as you write it. You can also reformat specific lines of code or whole files on demand by selecting them in the Source Editor and invoking the editor's reformat command from its contextual menu.
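As an illustration (the exact output depends on the indentation engine you have selected), reformatting turns carelessly indented code like the first fragment into something like the second; only whitespace changes:

// Before reformatting
public void setRed(int red) {
this.red = red;
            propertyChangeSupport.firePropertyChange("red", null, new Integer(red)); }

// After reformatting
public void setRed(int red) {
    this.red = red;
    propertyChangeSupport.firePropertyChange("red", null, new Integer(red));
}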
Navigating Between Documents
The Source Editor makes it easy to manage a large number of open documents at one time. The
Source Editor displays a row of tabs for open documents. The tabs appear in the order in which you
opened the documents. You can grab any tab and drag it along the row of tabs to move its position.
Use the left and right buttons in the top-right corner to scroll through the row of tabs.
To switch between open files, do any of the following:
● Use the drop-down list at the top-right of the Source Editor. The drop-down list displays all of your open files in alphabetical order.
● Press Alt-Left and Alt-Right to move one editor tab to the left or right.
● Press Ctrl-` to open the IDE window manager, which contains icons for each open document in the Source Editor as well as all open windows like the Filesystems window.
You can also:
● Maximize the Source Editor. Double-click any document tab or press Shift-Escape to hide all other IDE windows. If you have split the Source Editor, only the partition you maximize is displayed.
● Clone a document. Right-click the document in the Source Editor and choose Clone Document.
● Split the Source Editor. Grab any document tab and drag it to the left or bottom margin of the Source Editor. A red box shows you where the new Source Editor partition will reside once you drop the document. Any Source Editor partition can also be split any number of times.
● Move documents between Source Editor partitions. Grab the document tab and drag it to the row of tabs in the destination partition.
Configuring the Editor
To configure Source Editor settings, open the Options window and expand Editing > Editor Settings.
The Editor Settings node has subnodes for the editors used for each different file type. In this
section, we will be looking at configuring the Java editor, but many of the settings are the same for
all editors.
Here is a quick overview of some of the more common customizations to the Source Editor:
● View or change abbreviations. Open the property editor for the Abbreviations property and make any changes to the list.
● View or change all keyboard shortcuts for the IDE. Open the property editor for the Key Bindings property.
● View or change all recorded macros. Open the property editor for the Macros property.
● Turn off code completion. Set the Auto Popup Completion Window property to False.
● Set the font size and color for code. Use the Font Size property to quickly change the font size for all Java code in the Source Editor. Open the property editor for Fonts and Colors to change the font and color of each type of Java code, like method names or strings.
● Change the indentation used in your code. You can switch between indentation engines by choosing a new engine from the Indentation Engine property. You can also configure each indentation engine by opening the property editor for the property.
● Set how many spaces are inserted for each tab in your code. Set the Tab Size property accordingly.
● Turn off Javadoc for code completion. Go to the Expert tab and set the Auto Popup Javadoc Window property to False.
Compiling Java Programs
Using NetBeans IDE 3.6
Basic compilation is simple. You select the file or folder you want
to compile and choose the appropriate Build or Compile
command. The IDE then compiles the files using the compilation
type you have specified for them. The NetBeans IDE also gives
you tools to deal with more complex project compilation, such as JavaMake™ for dependency
management and advanced compilation options for cross-compiling for different SDK versions.
In this section you will learn about the following:
●
Compiling files - the behavior of the Compile and Build commands and viewing output from the
compiler.
●
Working with compiler types - which one to use, creating custom compiler types, setting the
output target directory.
●
Cross-compiling between Java platforms - specifying the compiler executable or libraries used
in compilation.
●
Using JavaMake to Manage Class Dependencies - managing complex dependencies between
Java classes.
Compiling Files
To compile a file or directory, select it in the Filesystems window and choose one of the following from the
main window:
●
Build > Compile (F9) to compile only those files that are new or have changed since the last
compile. The up-to-date check is done by comparing timestamps between the source (.java) and
products (.class) of the compile. This command does not compile the files in subfolders.
●
Build > Compile All (Shift+F9) to compile only those files that are new or have changed since the
last compile, including the files in subfolders.
●
Build > Build (F11) to build all the selected files from source regardless of their up-to-date status.
This command deletes the sourcename.class files in the folder and compiles the source files. This
command does not remove .class files or compile source files in subfolders.
●
Build > Build All (Shift+F11) to build all files from source within the selected folder and its
subfolders.
Any compilation errors and output are displayed in the Output Window. In the Output Window you can:
●
Click any error to jump to the location in the source file where the error occurred.
●
Copy the output to the clipboard by right-clicking in the window and choosing Copy.
●
Redirect the output to a file by right-clicking in the window and choosing Start Redirection of This
View to File. The output is written to the output directory in your IDE's user directory. You can also
choose a specific directory to redirect the output to under Output Window settings in the Options
window.
Working with Compiler Types
Now that we've seen how compilation is initiated, let's look at how the NetBeans IDE defines the rules for
how compilation is carried out. Compiler types are the IDE's main tool for specifying compilation options.
To view and configure compiler types, go to the Options window and expand Building > Compiler Types.
Internal Compilation compiles files within the same virtual machine as the IDE using the javac compiler of
the IDE's default SDK. External Compilation spawns a new VM for compilation. While Internal Compilation
is faster, External Compilation offers you greater configuration options.
All other compiler types shipped with the IDE are basically copies of External Compilation that have been
configured for different compiler executables. Additional IDE modules may insert their own compiler types,
such as the RMI Stub Compiler from the RMI module.
Specifying the Compiler Type for Files and Projects
You can specify which compiler type is used for compilation
at two levels:
●
The project-wide default compiler type. Open the
Options window, select Editing > Java Sources, and
set the Default Compiler property.
●
The compiler type for an individual file. Right-click the
file in the Filesystems window, choose Properties, and
set the Compiler property.
Creating Custom Compiler Types
Each compiler type contains properties that affect how the compiler generates code, such as whether to
generate debugging information and which libraries to use. You can configure compiler types in the Options
window under Building > Compiler Types.
Remember that when you change a compiler type's property, that property is changed for all files that use
that compiler type. If you need to set different options for only some files in your project, you should make
a copy of the compiler type with the desired configuration changes, then set the appropriate files to use
this new compiler type.
You can create a new compiler type with default settings by right-clicking the Compiler Types node in the
Options window and choosing from the New menu. To copy an existing compiler type with all of its
settings, right-click the compiler type and choose Copy. Then right-click the Compiler Types node and
choose Paste > Copy.
Note: You can also change compiler options from any Java source file node's property sheet. Just right-
click any Java source file node in the Filesystems window, choose Properties, and click the ellipsis (...) in
the Compiler property. Remember, though, that the properties you change in the Compiler dialog box are
applied to all files that use this compiler type.
Setting the Target Directory for .class Files
By default, the IDE generates your compiled .class files to the same
directory as the Java source files you are compiling. If you want to keep
your .class files in a separate directory, first mount the target directory in
the IDE. Because the compiler generates classes into subfolders of the
class package, you only need to direct the output to the root of the
filesystem.
For example, in the figure on the right, Digits.java is in the package com.mycompany. If you
redirect the compiler output to the build directory, the
compiler automatically generates the com and mycompany directories to
house Digits.class.
Once you have mounted the output directory, select the compiler type's
node in the Options window. The Target property for the node contains a
combo box with all mounted filesystems in your project. Select the output directory in the combo box.
For more information on mounting complex project structures, see Advanced Project Setup.
Cross-Compiling Between Java Platforms
By default, the IDE compiles sources against the JDK on which it is running. You may, however, want to
compile an application to optimize it for a specific version of the Java platform. In this case, you will want
to compile the sources against a specific Java platform's system libraries and possibly using a specific
compiler version.
For example, you might be developing an application that is designed to run on JDK 1.3 while running the
IDE on JDK 1.4. In this case, you want to configure the compiler type for your source files to use the JDK 1.3
compiler. To do so, select the compiler type used by your source files (for example, External
Compilation) in the Options window. Then click the ellipsis button in the External Compiler property. The
External Compiler dialog box, shown below, opens.
This dialog defines how the IDE makes calls to the compiler executable. The Process field points to the
executor that is used. In this case, the Process field is using the {jdk.home} variable to point to your
computer's default SDK. The Arguments field uses variables to insert the various compilation options that
are defined for the compiler type, such as Debug or Optimize.
To switch this compiler type to use a different Java platform's compiler executable, click the ellipsis button
and browse to the executable, or type the absolute path to the executable in the field. Also, since you are
not using the JDK 1.4 compiler, make sure to uncheck the Enable JDK 1.4 Source property.
However, you might need to compile an application against an older JDK version without using the older
JDK's compiler. For example, you might need to compile applets against JDK 1.1, but not want to use the
JDK 1.1 compiler because of performance reasons. In this case, set the compiler type's Boot Class Path
property to the desired Java platform libraries. Again, make sure the Enable JDK 1.4 Source property is
unchecked.
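For reference (this example is not part of the original guide), the same setup corresponds roughly to explicit javac options on the command line; the paths below are placeholders:

javac -source 1.3 -target 1.3 -bootclasspath /path/to/jdk1.3/jre/lib/rt.jar MyApplet.java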
Using JavaMake to Manage Class Dependencies
When you compile Java classes, the compiler performs a basic dependency analysis on the classes you are
compiling. The compiler looks for classes that the class being compiled is dependent on, checks if they are
up-to-date as described above, and compiles any classes that are not up-to-date.
For simple projects this is often enough. For code with complex dependency relationships, however, the
normal Java dependency checking mechanism isn't enough. For examples of what kinds of dependency
relationships are missed by javac, see JavaMake/index.html.
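As an illustration (added here, not taken from the JavaMake documentation), compile-time constants are a classic dependency that a plain timestamp check misses, because javac inlines them into every class that uses them:

// Limits.java
public class Limits {
    // javac copies this compile-time constant into every class that uses it.
    public static final int MAX_USERS = 10;
}

// Report.java
public class Report {
    public static void main(String[] args) {
        // If Limits.MAX_USERS changes, Report.class keeps the old inlined value
        // until Report.java itself is recompiled; comparing timestamps between
        // Report.java and Report.class does not detect this.
        System.out.println("Max users: " + Limits.MAX_USERS);
    }
}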
NetBeans IDE solves this problem by integrating JavaMake, a tool that provides more extensive
dependency management between Java classes. You can enable JavaMake for all of your project's Java
classes by selecting Editing > Java Sources in the Options window and checking the Use JavaMake
property.
The first time you compile a project with JavaMake, the IDE examines all of the classes in a project's
mounted filesystems and records the dependency information in a project database. The IDE only records
dependency information for filesystems which have compilation enabled. The IDE uses this information
during compilation to perform a complete check for any dependent classes that need compilation.
When JavaMake is enabled, the Compile and Build commands behave differently than when using normal
compilation. The behavior of the commands is as follows:
●
Compile/Build. Only compiles or builds the selected files without checking the status of dependent
classes.
●
Compile All/Build All. Compiles the selected file and checks all dependent classes. If any dependent
classes are not up-to-date, the IDE compiles them. These commands effectively build or compile the
entire project, regardless of which class they are run on.
Debugging Java Programs
Using NetBeans IDE
Example Code:
●
arrayFill.java
●
sampleBean.java
NetBeans' debugging features
expand on the capabilities
provided by the JPDA debugger.
You can visually step through
source code and monitor the state
of watches, variables, strings, and
other elements of your code's
execution.
In this section you will learn about:
●
Basic debugging - Starting a debugging session, using the
Debugger windows, and stepping through your code.
●
Working with breakpoints - Adding and removing a breakpoint,
different types of breakpoints, setting breakpoint conditions, and
customizing the output of a breakpoint.
●
Setting watches - Adding a watch or fixed watch to an object.
Basic Debugging
In this section, we will use a simple example to demonstrate how to start a
debugging session, step through your code manually, and monitor variables
and method calls in the Debugging workspace. We will leave more advanced
functions like setting breakpoints and watches for the following sections.
Our example for this section is the arrayFill program. This program is
very simple. It creates an array of sampleBeans, each one of which has two
properties, firstName and lastName. It then assigns values to the
properties of each bean and prints out the values.
The first thing you want to do is run the program to see if it throws any
exceptions. Open arrayFill.java and press F6 to execute it. The following
output should appear in the Output window:
java.lang.NullPointerException
at arrayFill.loadNames(arrayFill.java:27)
at arrayFill.main(arrayFill.java:34)
Exception in thread "main"
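The example sources ship with the IDE and are not reproduced in this guide. A minimal sketch that is consistent with the output above might look like the following; the field names, helper methods, and exact line positions are assumptions:

// sampleBean.java (hypothetical reconstruction)
public class sampleBean {
    private String firstName;
    private String lastName;
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}

// arrayFill.java (hypothetical reconstruction)
public class arrayFill {
    static String[] fnames = { "Jane", "John" };
    static String[] lnames = { "Doe", "Smith" };

    static void loadNames(sampleBean[] names) {
        for (int i = 0; i < names.length; i++) {
            // names[i] = new sampleBean();   // the missing initialization discussed below
            names[i].setFirstName(fnames[i]); // NullPointerException: names[i] is still null
            names[i].setLastName(lnames[i]);
        }
    }

    public static void main(String[] args) {
        sampleBean[] myNames = new sampleBean[fnames.length];
        loadNames(myNames);
        for (int i = 0; i < myNames.length; i++) {
            System.out.println(myNames[i].getFirstName() + " " + myNames[i].getLastName());
        }
    }
}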
Starting a Debugging Session
You can start a debugging session with any of the following commands in the Debug menu:
●
Start > Run in Debugger (Alt-F5). Runs the program until the first
breakpoint is encountered.
●
Step Into (F7). Starts running the program and stops at the first
executable statement.
●
Run to Cursor (F4). Starts a debugging session, runs the program
to the cursor location in the Source Editor, and pauses the program.
Since you did not set
any breakpoints in the
example program, just
select arrayFill in the
Filesystems window and
press F7. The IDE opens
the file in the Source
Editor, displays the
Output window and
Debugger windows, and
stops just inside the
main method.
Debugger Windows
Let's take a minute to look at the Debugger windows. The Debugger
windows automatically open whenever you start a debugging session and
close when you finish the session. By default, the IDE opens three Debugger
windows: the Local Variables window, Threads window, and Call Stack
window.
You can open other Debugger windows by choosing from the Window >
Debugger menu. When you open a Debugger window during a debugging
session, it closes automatically when you finish the session. If you open a
Debugger window when no debugging session is open, it stays open until
you close it manually. You can arrange Debugger windows by dragging
them to the desired location.
The following table lists the Debugger windows.
Name             Shortcut     Description
Local Variables  Ctrl-Alt-1   Lists the local variables that are within the current call.
Watches          Ctrl-Alt-2   Lists all variables and expressions that you elected to watch while debugging your program.
Call Stack       Ctrl-Alt-3   Lists the sequence of calls made during execution of the current thread.
Classes          Ctrl-Alt-4   Displays the hierarchy of all classes that have been loaded by the process being debugged.
Breakpoints      Ctrl-Alt-5   Lists the breakpoints in the current project.
Sessions         Ctrl-Alt-6   Lists the debugging sessions currently running in the IDE.
Threads          Ctrl-Alt-7   Lists the thread groups in the current session.
All in One       Ctrl-Alt-8   Provides session, threads, calls, and local variables in a single view.
Stepping Through Your Code
You can use the following commands in the Debug menu to control how
your code is executed in the debugger:
●
Step Over (F8). Executes one source line. If the source line contains
a call, executes the entire routine without stepping through the
individual instructions.
●
Step Into (F7). Executes one source line. If the source line contains
a call, stops just before executing the first statement of the routine.
●
Step Out (Alt-Shift-F7). Executes one source line. If the source line
is part of a routine, executes the remaining lines of the routine and
returns control to the caller of the routine.
●
Pause. Pauses program execution.
●
Continue (Ctrl-F5). Continues program execution. The program will
stop at the next breakpoint.
●
Run to Cursor (F4). Runs the current session to the cursor location
in the Source Editor and pauses the program.
In our example, use the
F7 key to step through the
code one line at a time.
The first time you press
F7, you are presented
with a dialog saying that
the IDE couldn't find java.lang.ClassLoader.loadClassInternal in the mounted
filesystems. If you want to be able to step through methods in the JDK as well,
mount the JDK sources in the Filesystems window. Otherwise, use the Step Out
option in this dialog to have the debugger execute the process without trying
to open the file in the debugger.
The problem here is that while the line
sampleBean[] myNames=new sampleBean[fnames.length];
initializes the array that holds the beans, it does not initialize the beans
themselves. The individual beans have to be initialized in the loadNames
method by adding the following code at line 28:
names[i]=new sampleBean();
Working With Breakpoints
Most programs are far too big to examine one line at a time. More likely,
you set a breakpoint at the location where you think a problem is occurring
and then run the program to that location. You can also set more
specialized breakpoints, such as conditional breakpoints that only stop
execution if the specified condition is true or breakpoints for certain threads
or methods.
In this section, we will use the arrayFill program from the last example,
so you will have to recreate the bug by commenting out the code you added
above.
Setting a Breakpoint
If you just want to set a
simple line breakpoint,
you can click the left
margin of the desired line.
A line breakpoint icon
appears in the margin.
You can remove the line
breakpoint by clicking it
again.
For more complex breakpoints, use the New Breakpoint (Ctrl-Shift-F8)
command in the Debug menu. The New Breakpoint dialog box lets you
choose the type of breakpoint you want to create and set breakpoint options
such as conditions for breaking or the information that the breakpoint prints
to the Output window.
Setting Conditions for a Breakpoint
For our example, create a breakpoint that breaks only when a condition is met:
enter myNames=null in the Condition field and click OK. The conditional
breakpoint icon appears in the margin before the method call. Then press
Alt-F5 to start debugging the program. The execution should break at the
loadNames method call.
Customizing the Output for a Breakpoint.
Breakpoint Types
The following table lists the different breakpoint types that are available.
Type        Description
Line        You can break execution when the line is reached, or when elements in the line match certain conditions.
Method      When you set a breakpoint on a method name, program execution stops each time the method is entered.
Exception   You can break whenever a specific exception is thrown, whether the program handles the error or not.
Variable    You can stop execution of your program whenever a variable in a specific class and field is accessed (for example, the method was called with the variable as an argument) or modified.
Thread      You can break program execution whenever a thread starts, stops, or both.
Class       When you set a breakpoint on a class, you can stop the debugger when the class is loaded into the virtual machine, unloaded from the virtual machine, or both.
Setting Watches
A watch enables you to track the changes in the value of a variable or
expression during program execution. To set a watch, select the variable or
expression you want to set a watch on in the Source Editor, then right-click
and choose New Watch.
Packaging and Deploying Your Applications
Using NetBeans IDE 3.6
Example Code:
●
NetBeans example code
(ZIP)
The standard tool used for packaging and deploying Java applications is the Java
Archive (JAR) file format. JAR files are packaged with the ZIP file format. You can use
JAR files for simple compression and archiving of your application class files, or you can
specify more advanced options like signing and verifying your JAR files or making them
runnable. The IDE provides several features that help you to easily create and work
with JAR files.
This section covers the following topics:
●
Creating a JAR File - Using JAR recipes to specify JAR file contents and
properties, creating a manifest, creating and mounting the JAR file.
●
Modifying a JAR File - Adding and removing files to an existing JAR file, making changes to the manifest, and setting
custom file filters.
●
Executing a JAR File - Specifying the main method in the manifest and executing your application.
Creating a JAR File
To create a JAR file in the IDE, you first create a JAR recipe that specifies the contents and properties of the JAR file. You then
create the JAR file itself by running the Compile command on the JAR recipe file. In this example we will create a JAR file using
the example sources that are automatically mounted in the IDE when you first start the IDE. If you have lost or deleted the
example sources, you can download them using the link above.
Creating a JAR Recipe
To create a JAR recipe, choose File > New from the Main window. In the wizard, expand the JAR Archives node, choose JAR
Recipe, and click Next. In the second page of the wizard, specify the name and location of the JAR file you are going to produce.
The third page of the wizard is where things start getting interesting. This is where you specify the contents of the JAR file.
Select any directory or file from the panel on the left of the wizard and use the Add button to schedule it for inclusion in the panel
on the right. The panel on the left shows all of your mounted filesystems. At this point you can click Finish to create the JAR
recipe, or click Next to specify more detailed options.
The fourth page of the wizard lets you set special options for the JAR file contents. The most important part of this page is the
Target Directory column, which shows the directory structure of the JAR file's contents. For Java sources, the directory structure
must correctly match the Java package structure of the Java classes. If the filesystem from which you added the contents was
correctly mounted at the Java package root, this should automatically be configured correctly. For example, the target directory
for our examples.colorpicker.ColorPicker class is correctly set at examples/colorpicker.
Finally, the fifth page of the wizard lets you generate the manifest file for the JAR file. The JAR manifest file contains meta
information for handling the files contained in the JAR file, such as the location of the main method or signing information. For
more information about the JAR manifest file, see the JAR file specification.
In the JAR Manifest page of the wizard, you can generate basic manifest information automatically, enter information by hand, or
use an existing file as the manifest file. For now, let's just enter the basic manifest information by clicking the Generate button.
Then click Finish to create the JAR Recipe.
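The generated manifest is an ordinary text file of Name: Value pairs. A minimal manifest looks roughly like this (the exact attributes written by the wizard depend on your project, so treat these values as an illustration):

Manifest-Version: 1.0
Created-By: NetBeans IDE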
The JAR recipe node appears in the Filesystems window, as shown in the
figure on the right. The JAR recipe node includes a subnode for the JAR file
it creates and a Contents subnode listing all of the JAR recipe contents. You
can use the property sheet to modify the contents and properties of the JAR
recipe, such as the compression level and file filter used to produce the JAR
file. To open the property sheet, right-click the JAR contents node and
choose Properties.
Compiling and Creating the JAR File
Once you have created a JAR recipe, you can compile its contents and create the JAR file by right-clicking the JAR recipe and
choosing Compile. The contents of the JAR file are compiled using whatever compiler types and settings you assigned to them in
the IDE. See Compiling Java Programs for more information on compilation settings.
Mounting and Checking the JAR File
If you want to check your JAR file to make sure that the directory structure and manifest file are correct, you can mount the JAR
file in the Filesystems window. To mount the file, right-click the JAR recipe node or the JAR file node under it and choose Mount
JAR. You can then expand the JAR file to view its contents and execute any executable classes it contains. Mounting a JAR also
adds it to the project's classpath.
Modifying a JAR File
Once you have created a JAR recipe, you can modify all aspects of the JAR file that it produces. Your main tool for modifying a
JAR recipe is its property sheet. You can open the property sheet by right-clicking the JAR recipe node and choosing Properties.
Whenever you modify the JAR recipe, you can update its corresponding JAR file with your changes by recompiling the JAR recipe.
Note: If you make any changes to the JAR recipe and recompile the JAR file, these changes are not reflected in the mounted JAR
file. You have to unmount the JAR file and mount it again to view the changes.
Adding Files to a JAR Recipe
You can add files to a JAR recipe using the Contents property in the JAR recipe's property sheet. The Contents property editor
lets you add files and directories to the JAR recipe like you do in the JAR Recipe wizard. The Chosen Content pane on the right
also contains the Target Directory and Target Name info, so you can check that the JAR file's directory structure is correct.
In our case, we want to add the entire colorpicker directory to the JAR recipe, since the ColorPicker application will not work
without the other classes in the directory. To do so, select the directory and click Add. Then click OK and recompile the JAR
recipe to update the JAR file.
Modifying the Manifest File
To modify the manifest file, open the property sheet for the JAR recipe node and click the ellipsis button for the Manifest
property. The Manifest property editor is basically the same as the Manifest page in the JAR Recipe wizard. You can enter the
manifest information by hand, generate the basic information using the Generate button, or use an existing manifest file using
the Load From File button.
Setting the JAR Content Filter
Use a JAR recipe's File Filter property to specify which types of files should be included in your JAR file. When you create a JAR
file, you usually want to include just the compiled .class files and any other resource files located in your source directory, such
as resource bundles or XML documents. The default JAR filter does this for you by excluding all .java, .jar, and .form files from
your JAR file.
Regular Expression     Description
\.html$                Include all HTML files
\.java$                Include all Java files
(\.html$)|(\.java$)    Include all HTML and Java files
(Key)|(\.gif$)         Include all GIF files and any files with Key in their name
In the File Filter property editor, you can also set the filter
using a POSIX-style regular expression. The Regular
Expression field checks your expression's syntax and displays
any invalid expressions in red text. The custom filter is stored
in the JAR recipe file, so you can use or edit the filter if you
later modify the JAR.
The table above provides some examples of regular expressions you can write.
For a guide to regular expression syntax, consult a regular expression reference.
Executing a JAR File
In order to execute a JAR file, you must first specify the JAR file's main class in the manifest. If the sources in your JAR file
depend on sources located in other JAR files, the manifest must also contain the classpath to those JAR files. It is not enough to
have the JAR files mounted in the IDE, since the IDE classpath is ignored when running a JAR file.
To make the example JAR file runnable, open the Manifest property editor for its JAR recipe and add the following line:
Main-Class: examples/colorpicker/ColorPicker
Then compile the JAR recipe to produce the new JAR file. You can then run the JAR file by right-clicking the JAR recipe node and
choosing Execute. The ColorPicker application should open in a new window.
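Outside the IDE, a runnable JAR built this way can also be started with the java launcher (the JAR file name below is assumed). Note that the standard launcher expects the Main-Class value in dot notation, for example examples.colorpicker.ColorPicker:

java -jar colorpicker.jar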
Using Javadoc
Using NetBeans IDE 3.6
Example Code:
●
NetBeans example code
(ZIP)
Javadoc is the Java programming language's tool for generating API
documentation. Java API documentation describes important elements
of your code, such as methods, parameters, classes, fields, and so
forth. You can insert special Javadoc comments into your code so that
they will be automatically included in the generated documentation.
Describing your code within the code itself rather than in a separate
document helps to keep your documentation current, since you can
regenerate your documentation as you modify it.
In this section, you will learn about the following:
●
Integrating Java API documentation into the IDE - Searching for and displaying Javadoc, mounting
and configuring Javadoc filesystems, configuring the IDE's Web browser to display Javadoc files, and
integrating Javadoc with code completion.
●
Adding Javadoc comments to your code - Rules and special tags for Javadoc comments, tools for
automatically commenting your code, and correcting errors in comments.
●
Generating Javadoc documentation - Using the standard Javadoc doclet, initializing generation, and
specifying the output directory for the generated files.
Integrating Java API Documentation into the IDE
The IDE lets you integrate API documentation for the code you are working on into the IDE itself. You can then
quickly bring up the documentation for any class in your code, or view it while you're looking for a particular
class or method in the code completion box. The referenced API documentation can be stored in an archive file,
regular directory, or on the Internet.
Making Javadoc Documentation Available in the IDE
In order to make Javadoc documentation available in the IDE, you must mount the documentation as a Javadoc
filesystem. A Javadoc filesystem is any directory, archive file, or location on the Internet that contains API
documentation.
You mount Javadoc filesystems by choosing Tools > Javadoc Manager from the main window. Use the Add
buttons to add the appropriate type of Javadoc filesystem. You must mount each filesystem at the directory
that contains the Javadoc index, which is located in a file called index.html or a directory called
index-files. (Sometimes both an index file and an index directory are present.) The directory that contains
the Javadoc index is usually called api or apidocs.
For example, if you want to make the NetBeans Execution API documentation available directly from the
netbeans.org portal, click Add HTTP and enter
ExecutionAPI/.
For each filesystem, you can specify the following:
●
Hidden. Specifies whether this filesystem is visible in the Filesystems window. You should set this
property to False if you want to browse through the documentation tree in the Filesystems window.
●
Search Engine. Specifies the default Javadoc search engine. The Japanese version of the search engine
lets you search internationalized Javadoc documentation.
●
Root Offset. If your Javadoc documentation is inside a JAR or zip file, the Javadoc index is sometimes
buried in the file's hierarchy. Since you can only mount the JAR or zip file as a whole, you have to set the
Root Offset for these filesystems to the directory that contains the Javadoc index. (For HTTP and local
filesystems, you just mount the filesystem directly at the directory that contains the Javadoc index.)
Searching and Displaying Javadoc Documentation
The easiest way to search for Javadoc documentation for any element of Java code is to select any occurrence
of the element in the Source Editor and press Shift-F1. Doing so opens the Javadoc Index Search in a Source
Editor tab. The Javadoc Index Search tab displays all matching entries in your mounted Javadoc filesystems.
Select any search result to view the Javadoc in the bottom panel of the dialog box, or double-click a search
result to open it in the IDE's external browser.
If you prefer to browse through your Javadoc filesystem hierarchy, choose the Javadoc filesystem from the
View > Documentation Indices menu. The filesystem's index page is opened in your external web browser.
Configuring the External Browser to Display Javadoc Files
Javadoc files are displayed in the IDE's designated web browser. To set the IDE's designated web browser,
choose Tools > Setup Wizard and choose a browser from the Web Browsers combo box. The Setup Wizard lists
all of the web browsers installed on your system.
If you select a web browser and it does not open correctly, it is possible that the IDE does not have the correct
location for the browser executable. You can configure the web browser by opening the Options window,
expanding IDE Configuration > Servers and External Tool Settings > Web Browsers and selecting the web
browser. Open the property editor for the Browser Executable property, then click the ellipsis button for the
Process field to locate your browser executable. Then click OK to exit the dialog box.
If your Web browser uses a proxy to access the Internet from behind a firewall, you must also configure the
browser to bypass the proxy for local files. If this option is not set, you could get a 404 File Not Found error
when you try to display Javadoc files that reside on your local machine.
Adding Javadoc Comments to Your Code
Javadoc comments are special comments (marked by a /**, as opposed to a /* for regular comments) that
describe your code. When you generate Javadoc documentation for a source file, all of the Javadoc comments
in the file are automatically included in the documentation. You can put special tags describing elements of your
code in Javadoc comments and format your comments with XHTML tags.
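For illustration (this example is not part of the original guide), a typical Javadoc comment combines a description, XHTML markup, and tags such as @param and @return:

/**
 * Computes the average of two values.
 * <p>
 * The result is truncated toward zero.
 *
 * @param a the first value
 * @param b the second value
 * @return the truncated average of <code>a</code> and <code>b</code>
 */
public int average(int a, int b) {
    return (a + b) / 2;
}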
The IDE provides an Auto Comment tool that analyzes your code for any elements that have incomplete or
incorrect documentation and lets you enter the documentation right in the tool. To see how the Auto Comment
tool works, let's use it on one of the example files that comes with the IDE. In the IDE's default project, go to
the examples/colorpicker directory and double-click the ColorPreview Java file node to open the file in the
Source Editor.
The ColorPreview class is a simple bean that sets the background color for a visual component to various
colors. The code is already completely documented, so to see how the Auto Comment tool works let's first put
some errors in the documentation. In the comment above the addPropertyChangeListener method, remove
one of the stars (*) to change it from a Javadoc comment to a regular comment.
Now right-click anywhere inside the Source Editor and choose Tools > Auto Comment. The Auto Comment tool
shows all of the methods in the file that should be commented in the top left of the tool. You can use the
buttons above this field to choose which methods are processed by the tool.
As you can see, all of the methods in the file have the green "correct Javadoc" icon except for
addPropertyChangeListener, which has a red "missing Javadoc" icon. Select addPropertyChangeListener to
see what problem the tool found with the method's comment. Use the View Source button to jump to the line in
the Source Editor where the method first appears and the Refresh button to rescan the file for incorrect
comments. You can add Javadoc comment text and tags in the right side of the tab.
Generating Javadoc Documentation
Once you have entered Javadoc comments into your code, you can generate the HTML Javadoc files for your
source files. The Java language uses a program called a doclet to generate and format the API documentation
files. Although there are numerous doclets that produce documentation in a wide variety of formats, the
standard doclet used by the IDE generates HTML documentation pages.
To generate documentation, right-click any file or folder and choose Tools > Generate Javadoc. By default, the
doclet generates the documentation files to the javadoc directory in your user directory. The doclet generates
the Javadoc index files (including frame and non-frame versions, package lists, help pages explaining how the
documentation is organized, and so forth) into the javadoc directory. Individual files describing each class are
generated into subdirectories that match the directory structure in your source tree. For example, if you run the
Generate Javadoc command on the sampledir filesystem, the javadoc directory contains the Javadoc index
for the filesystem and a directory called examples with all of the individual documentation files.
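For comparison (not part of the original guide), the standard doclet can also be driven directly from the command line; a rough equivalent for the example package would be the following, with the source path given as a placeholder:

javadoc -d javadoc -sourcepath /path/to/sampledir examples.colorpicker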
Specifying an Output Directory for Javadoc Files
You can specify any mounted filesystem as the output directory for generated Javadoc files. For example, if you
want to create a docs directory to house API documentation for sources in the sampledir filesystem, create the
docs directory somewhere on your system and mount it in the IDE. Then go to the Options window, select Code
Documentation > Doclets > Standard Doclet, and choose the docs directory in the Destination property.
Team Development With CVS
Using NetBeans IDE 3.6
Version control software (VCS) programs track the changes to a set of files
and manage how users access and change those files. A VCS is an
essential tool for any team of developers that works on a common code
base. It lets you roll back any unwanted changes, avoid conflicts when two
developers alter the same file, and establish different development branches on the same codeline.
NetBeans IDE integrates VCS functionality right into the IDE itself, letting you view versioning status and run VCS
commands on files in the Filesystems window. The IDE uses VCS profiles to pass commands and arguments to the
VCS executable on your machine. Profiles for Concurrent Versioning System (CVS), Visual SourceSafe (VSS) and
Polytron Version Control System (PVCS) are included in the IDE, and you can download experimental profiles for
other VCS programs from the netbeans.org website. The IDE also includes a built-in CVS executable that you can
use without having CVS installed on your computer.
In this section, we will cover the basics of using CVS in the IDE. Although command usage differs between VCS
applications, many of the concepts discussed here are common to all VCS applications.
This section covers:
●
Checking Out Sources - Mounting CVS filesystems, selecting which sources to check out, and running CVS
commands.
●
Configuring a CVS Filesystem - Configuring CVS filesystems and changing the display of file status
information.
●
Working With CVS Files - Generating diffs and patches, applying patches, and resolving merge conflicts.
●
Making Safe Commits - Finding all modified files in your working directory, checking for mistakes,
committing your changes.
Checking Out Sources
As with all source files, you have to mount version-controlled sources in the Filesystems window to be able to work with
them. You mount version controlled sources in a VCS filesystem. A VCS filesystem is just like a regular IDE
filesystem, except that it is directly linked to the VCS repository so you can use it to call VCS commands right in the
Filesystems window. You can mount a source directory that is already under version control, or mount an empty
directory and check out source files from the CVS repository.
To illustrate how to check out sources in the IDE, let's check out the Beans module from the NetBeans CVS
repository. We will use the anoncvs guest account, so you do not have to worry about registering with netbeans.org
to complete this example.
Mounting a CVS Filesystem
To get started, choose Versioning > Mount Version Control from the main window. In the wizard, select CVS in the
Version Control System Profile combo box.
Now you can start filling in the CVS repository information. First, you need to create a directory to house the
sources. Click the Browse button in the Working Directory field and create a directory called beans somewhere on
your system. Then fill in the following CVS server information:
●
CVS Server Type - pserver
●
CVS Server Name - cvs.netbeans.org
●
CVS Server User Name - the anonymous login name, anoncvs
●
CVS Repository - the location of the sources on the repository server, /cvs
●
Use Built-In CVS Client - sets which CVS client the IDE uses
Before you finish mounting the CVS filesystem, you have to log in to the server. Click the Login button without
entering a password. (No password is necessary for the anoncvs account.) If the command succeeds, the text
beneath the Password field changes to You are already logged in. If the command fails, check your connection
to the Internet and your firewall settings.
Note: To use CVS, you must be able to access the Internet on the CVS port (2401 by default).
Click Finish to close the wizard and mount the filesystem. A new CVS filesystem node appears in the Filesystems
window.
Now that you have mounted the
filesystem, you can get the sources
from the repository. Right-click the
filesystem node and select the CVS
submenu. This submenu contains
CVS commands that you can run
on your files. Hold down the Ctrl
key and choose Checkout from the
CVS menu.
The CVS Checkout dialog box lets
you set advanced options for the
CVS Checkout command. The "." in
the Module(s) field indicates that
you want to check out the entire
CVS repository. Since we only want
to check out the Beans module,
enter beans in this field and click
OK. Alternatively, you can click the
Select button to view a list of all
modules in the repository and then
select from the list.
Once you run the command, the
VCS Output window opens listing
the CVS command status and the
CVS output. You can kill the
command by clicking the Stop
button. When the command
finishes, you can expand the
filesystem and begin working with
the files.
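For readers who prefer the command line, the checkout performed above corresponds roughly to the following CVS commands (shown only for orientation; inside the IDE the built-in client runs them for you):

cvs -d :pserver:anoncvs@cvs.netbeans.org:/cvs login
cvs -d :pserver:anoncvs@cvs.netbeans.org:/cvs checkout beans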
Specifying Advanced Command Options
The IDE's CVS support lets you set all of the command options that are available on the command line. To see a
command dialog in which you can specify advanced command options, hold down the Ctrl key when choosing a
command from the CVS commands menu. You can also configure a VCS filesystem to always display the advanced
options dialog of CVS commands. Right-click the filesystem's node in the Filesystems window and choose Properties,
then set the filesystem's Advanced Command Options property to True.
Configuring a CVS Filesystem
Once you have checked out your files, you can usually start working with them immediately. You may, however,
need to further configure the filesystem to correctly build the Java classpath or display VCS status information. The
two main tools for configuring a CVS filesystem are the VCS filesystem customizer, in which you can change the
server and user information that you entered when mounting the filesystem, and the filesystem's property sheet.
Right-click the filesystem and choose Customize to view the filesystem's customizer, or choose Properties to view
the filesystem's property sheet.
Setting the Relative Mount Point
Like all Java filesystems in the IDE, CVS filesystems must be mounted at the directory that contains the default
package. In order for all your VCS commands to function correctly, however, the filesystem must be mounted at the
working directory root. You can resolve this problem by mounting the filesystem at the working directory root, then
setting the relative mount point at the default package root.
For example, the default package for the sources in the Beans filesystem is in the src directory. To set the relative
mount point, right-click the filesystem node and choose Customize. Click the Select button in the Relative Mount
Point field, expand beans, and select src. Then click OK to apply your changes.
If you want to mount more than one directory as a default package root, hold down the Ctrl key, select all of the
relative mount points, and click OK. Each relative mount point is mounted as its own filesystem.
Shortening and Hiding File Status
Expand the Beans filesystem to take a look at how
NetBeans IDE displays CVS status information. The
name of each file is followed by the file's status, its
revision number, and if you are working on a branch
also the branch tag name. If you double-click a file to
open it in the Source Editor, you will notice that the
CVS status information is also printed on the file's
Source Editor tab.
This status information can actually be problematic,
because it makes the Source Editor tabs take up a lot
of room. One solution is to set Shorten File Statuses to
True in the filesystem's property sheet. This option
saves some space, but not very much.
As you can see, the IDE also displays a badge for each
file that expresses its CVS status. The badges and their
meanings are shown in the table below.
Badge Description
Locally modified
Locally removed
Merge conflict
Needs checkout
Up-to-date
For the most part, you probably just want to know whether a file is up-to-date or not.
You can therefore hide all file status information and just use the badges. If you need to
see more versioning information, like the file's version number or sticky tag information,
you can always run the CVS Status command on the file.
To hide all status information, open the filesystem's property sheet and click the ellipsis
in the Annotation Pattern property. The Annotation Pattern property editor shows a node
for each type of CVS status information, plus subnodes that govern how the information
is displayed. Simply delete each node (except the Variable:filename node, of course)
and click Apply Changes. If you find that you miss the status info, you can always
display it again by opening this dialog box and clicking Restore Defaults.
Working With CVS Files
One of the main advantages of CVS is that it lets you see how your files evolve as you and members of your
development team make changes to them. NetBeans IDE expands on this functionality by making it easier to view
changes to files and resolve conflicts between file revisions. It also makes it easier to prepare your checkins and
check for mistakes before you commit.
Using the Graphical Diff Tool
When you diff version-controlled files on the command line, the diff command compares two file revisions and prints
out the differences between the two revisions along with the line numbers where they occurred. Although many
command-line users are adept at reading long diff print outs, new users (and even some command-line veterans)
often find this format confusing.
NetBeans IDE uses a graphical diff viewer to display both file revisions side-by-side with the differences highlighted.
The repository version is shown in the left pane, and your working file is shown in the right pane. You can use the
buttons in the top left-hand corner of the viewer to navigate through the revision differences.
To run a graphical diff on a file, right-click it in the Filesystems window and choose CVS > Diff Graphical. If you run
the command without any advanced options specified, it compares your working directory version with the head
revision in the repository. If you want to specify which revisions to diff by tag name or date, hold down the Ctrl key
while choosing the command and enter the information in the advanced command dialog.
Unlike the Diff Textual command, you cannot run the Diff Graphical command on a directory. You can, however,
select several files in the Filesystems window and run the Diff Graphical command on them. The diff for each file
appears in its own Source Editor tab. As we will see in the Making Safe Commits section, you can also use the
Search Results window as a powerful tool to find and diff all Locally Modified files in a filesystem.
Creating and Applying Patches
A patch file lets you take a snapshot of the changes between two revisions and send them to another developer
without checking in the changes or even requiring the other developers to have a CVS connection to the repository.
Patch files are often used when a development team wants to evaluate a proposed change before checking it into
the repository.
To create a patch, right-click the directory or file you want to create the patch from and choose CVS > Diff Textual.
Click OK without specifying any options if you want to diff your working directory against the current head revision
in the repository, or enter any additional options like specific tags or dates.
The patch is displayed in the VCS Output window. Right-click inside the VCS Output window and choose Save to
File, then specify a location and name for the file.
To apply a patch file, right-click the exact file or directory from which the patch was created and choose Tools >
Apply Patch. Then browse to the location of the patch file and click OK. If you want to undo the changes made by
the patch, you can delete the modified files and then run the Update command to get a clean version from the
repository.
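The command-line equivalent (illustrative only; the file names are placeholders and the -p level depends on how the patch was created) is to generate a unified diff and apply it with the patch utility:

cvs diff -u ColorPreview.java > colorpreview.patch
patch -p0 < colorpreview.patch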
Resolving Merge Conflicts Graphically
When you update a locally modified file, CVS merges changes from the repository with the changes you have made
to your local file. If someone else on your development team has committed changes to the same lines that you
have changed in your working directory, a merge conflict occurs.
CVS marks merge conflicts by bracketing the offending lines with error markers (<<<<<<< and >>>>>>>). In the
following example, the changes to the field name have generated the merge error. The first line of the error shows
what you have in the working directory, and the second line shows what is in the repository.
<<<<<<< ColorPreview.java
private int yellow;
=======
private int orange;
>>>>>>> 1.2
Normally, you would open the file in a text editor, delete the error markers and the version you do not want to
keep, and commit the file. (You do not necessarily have to choose one of the two versions - you could write
something completely new.) As long as the error markers are present in the file, CVS will not let you check it into
the repository.
The IDE provides a Merge Conflicts Resolver that makes resolving merge conflicts easier. To open the tool, right-
click any file whose status is Merge Conflict and choose Resolve Conflicts. The Merge Conflicts Resolver looks very
similar to the Graphical Diff tool - it displays the repository revision on the right, working revision on the left, and
the final version in the bottom.
Use the Accept buttons to choose which of the two versions you want to accept. You can use the Accept & Next
buttons to resolve a conflict and jump to the next conflict. You cannot write into the bottom panel of the Merge
Conflicts Resolver, so if you want to write something new rather than accepting either of the two versions, close the
dialog and resolve the conflict manually in the Source Editor.
Making Safe Commits
CVS lets you roll back most changes to the repository, so for the most part you do not have to worry about your
commits doing permanent damage to your project. Still, introducing code that breaks your project's build process or
introduces critical bugs is embarrassing and time-consuming to fix. This can often happen not because your code
contained bugs, but because you forgot to check something in or checked something in that you did not mean | https://www.techylib.com/el/view/kaputmaltworm/using_netbeans_ide_3.6 | CC-MAIN-2021-25 | refinedweb | 13,695 | 61.46 |
Rumor has it that Ertl, John may have mentioned these words: >All, > >I hate to ask this but I have just installed 2.4 and I need to get some info >from a subprocess (I think that is correct term). > >At the Linux command line if I input dtg I get back a string representing a >date time group. How do I do this in Python? I would think Popen but I >just don't see it. It could, but there's also a better (IMHO), 'pythonic' way, something like this: def gettoday(): import time today = time.strftime('%Y%m%d%H',time.localtime(time.time())) return (today) >$ dtg >2004122212 If you wanted to use popen, it would look rather like this: import os dtg_s = os.popen("/path/to/dtg").readlines()[0] But this may use more system resources (spawning child shells & whatnot) than doing everything internally with the time module in Python. HTH, Roger "Merch" Merchberger -- Roger "Merch" Merchberger | A new truth in advertising slogan SysAdmin, Iceberg Computers | for MicroSoft: "We're not the oxy... zmerch at 30below.com | ...in oxymoron!" | https://mail.python.org/pipermail/tutor/2004-December/034300.html | CC-MAIN-2017-30 | refinedweb | 182 | 71.95 |
iCelPath Struct Reference
Interface for CEL Path. More...
#include <tools/celgraph.h>
Inheritance diagram for iCelPath:
Detailed Description
Interface for CEL Path.
Definition at line 169 of file celgraph.h.
Member Function Documentation
Adds a new node at the end of the path.
Clears path.
Get current node in path.
Get the current node's position.
Get the current node's sector.
Get First node in path.
Get last node in path.
Get number of nodes in the path.
Checks if there are more nodes ahead in the path.
Checks if there are more nodes back in the path.
Adds a new node in position pos.
Invert nodes in the path.
Get next node in path.
Get previous node in path.
Restarts path.
The documentation for this struct was generated from the following file:
- tools/celgraph.h
Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
I need to write a function that normalizes a vector (finds the unit vector). A vector can be normalized by dividing each individual component of the vector by its magnitude.
The input for this function will be a vector, i.e. a 1-dimensional list containing 3 integers.
The code follows:
def my_norml(my_list):
    tot_sum = 0
    for item in my_list:
        tot_sum = tot_sum + item**2
    magng = tot_sum**(1/2)
    norml1 = my_list[0]/magng  # here i want to use a for loop
    norml2 = my_list[1]/magng
    norml3 = my_list[2]/magng
    return [norml1, norml2, norml3]
There's a couple of things you could do here.
Initially, let me just point out that
tot_sum = tot_sum + item**2 can be written more concisely as
tot_sum += item**2. To answer your question, you could use a loop to achieve what you want with:
ret_list = []
for i in my_list:
    ret_list.append(i / magng)
return ret_list
But this isn't the best approach. It is way better to utilize comprehensions to achieve what you need; also, the
sum built-in function can do summing for you instead of you needing to manually perform it with a
for-loop:
magng can easily be computed in one line by passing a comprehension to
sum. Inside the comprehension you raise each item to ** 2, and then you raise the summation that sum returns to ** (1/2), i.e. take its square root:
magng = sum(item**2 for item in my_list) ** (1/2)
After this, you can create your new list by again utilizing a comprehension:
return [item/magng for item in my_list]
Which creates a list out of every item in my_list after dividing it by magng.
Finally, your full function could be reduced to two lines (or even one, but that would hamper readability):
def my_norml(my_list):
    magng = sum(item**2 for item in my_list) ** (1/2)
    return [item/magng for item in my_list]
This is more concise and idiomatic and pretty intuitive too after you've learned comprehensions.
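Not part of the original answer, but worth noting: if NumPy is available, the same normalization is usually written with numpy.linalg.norm:

import numpy as np

def my_norml(my_list):
    v = np.asarray(my_list, dtype=float)
    return v / np.linalg.norm(v)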
/*
 * Copyright (c) 2000 David Flanagan. All rights reserved.
 * This code is from the book Java Examples in a Nutshell, 2nd Edition.
 * It is provided AS-IS, WITHOUT ANY WARRANTY either expressed or implied.
 * You may study, use, and modify it for any non-commercial purpose.
 * You may distribute it non-commercially as long as you retain this notice.
 * For a commercial use license, or to purchase the book (recommended),
 * visit.
 */

/**
 * This program computes prime numbers using the Sieve of Eratosthenes
 * algorithm: rule out multiples of all lower prime numbers, and anything
 * remaining is a prime. It prints out prime numbers up to the supplied
 * command-line argument.
 */
public class Sieve {
    public static void main(String[] args) {
        // We will compute all primes less than the value specified on the
        // command line, or, if no argument, all primes less than 100.
        int max = 100; // Assign a default value
        try { max = Integer.parseInt(args[0]); } // Parse user-supplied arg
        catch (Exception e) { } // Silently ignore exceptions.

        // Create an array that specifies whether each number is prime or not.
        boolean[] isprime = new boolean[max + 1];

        // Assume that all numbers are primes, until proven otherwise.
        for (int i = 0; i <= max; i++) isprime[i] = true;

        // However, we know that 0 and 1 are not primes. Make a note of it.
        isprime[0] = isprime[1] = false;

        // To compute all primes less than max, we need to rule out
        // multiples of all integers less than the square root of max.
        int n = (int) Math.ceil(Math.sqrt(max)); // See java.lang.Math class

        // Now, for each integer i from 0 to n:
        // If i is a prime, then none of its multiples are primes,
        // so indicate this in the array. If i is not a prime, then
        // its multiples have already been ruled out by one of the
        // prime factors of i, so we can skip this case.
        for (int i = 0; i <= n; i++) {
            if (isprime[i]) // If i is a prime,
                for (int j = 2 * i; j <= max; j = j + i) // loop through multiples
                    isprime[j] = false; // they are not prime.
        }

        // Now output the results:
        for (int i = 0; i <= max; i++) {
            if (isprime[i]) System.out.println(i);
        }
    }
}
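Assuming the listing is saved as Sieve.java, it can be compiled and run from the command line like this (the argument is optional and defaults to 100):

javac Sieve.java
java Sieve 50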
In the example code block below, if the Promise is in the resolved state, the callback function passed as the
first parameter of the .then() method will print the resolved value. Otherwise, an alert will be shown.
const promise = new Promise((resolve, reject) => {
  const res = true;
  // An asynchronous operation.
  if (res) {
    resolve('Resolved!');
  } else {
    reject(Error('Error'));
  }
});

promise.then((res) => console.log(res), (err) => alert(err));
.catch() method for handling rejection
The function passed as the second argument to a
.then() method of a promise object is used when the promise is rejected. An alternative to this approach is to use the JavaScript
.catch() method of the promise object. The information for the rejection is available to the handler supplied in the
.catch() method.
const promise = new Promise((resolve, reject) => {setTimeout(() => {reject(Error('Promise Rejected Unconditionally.'));}, 1000);});promise.then((res) => {console.log(value);});promise.catch((err) => {alert(err);});
The JavaScript
Promise.all() method can be used to execute multiple promises in parallel. The function accepts an array of promises as an argument. If all of the promises in the argument are resolved, the promise returned from
Promise.all() will resolve to an array containing the resolved values of all the promises in the order of the initial array. Any rejection from the list of promises will cause the greater promise to be rejected.
In the code block,
3 and
2 will be printed respectively even though
promise1 will be resolved after
promise2.
const promise1 = new Promise((resolve, reject) => {setTimeout(() => {resolve(3);}, 300);});const promise2 = new Promise((resolve, reject) => {setTimeout(() => {resolve(2);}, 200);});Promise.all([promise1, promise2]).then((res) => {console.log(res[0]);console.log(res[1]);});
A JavaScript promise’s executor function takes two functions as its arguments. The first parameter represents the function that should be called to resolve the promise and the other one is used when the promise should be rejected. A
Promise object may use any one or both of them inside its executor function.
In the given example, the promise is always resolved unconditionally by the
resolve function. The
reject function could be used for a rejection.
const executorFn = (resolve, reject) => {resolve('Resolved!');};const promise = new Promise(executorFn);
The
.then() method of a JavaScript
Promise object can be used to get the eventual result (or error) of the asynchronous operation.
.then() accepts two function arguments. The first handler supplied to it will be called if the promise is resolved. The second one will be called if the promise is rejected.
const promise = new Promise((resolve, reject) => {setTimeout(() => {resolve('Result');}, 200);});promise.then((res) => {console.log(res);}, (err) => {alert(err);});
setTimeout()
setTimeout() is an asynchronous JavaScript function that executes a code block or evaluates an expression through a callback function after a delay set in milliseconds.
const loginAlert = () =>{alert('Login');};setTimeout(loginAlert, 6000);
Promiseand
.then()
In JavaScript, when performing multiple asynchronous operations in a sequence, promises should be composed by chaining multiple
.then() methods. This is better practice than nesting.
Chaining helps streamline the development process because it makes the code more readable and easier to debug.
const promise = new Promise((resolve, reject) => {setTimeout(() => {resolve('*');}, 1000);});const twoStars = (star) => {return (star + star);};const oneDot = (star) => {return (star + '.');};const print = (val) => {console.log(val);};// Chaining them all togetherpromise.then(twoStars).then(oneDot).then(print);
An instance of a JavaScript
Promise object is created using the
new keyword.
The constructor of the
Promise object takes a function, known as the executor function, as the argument. This function is responsible for resolving or rejecting the promise.
const executorFn = (resolve, reject) => {console.log('The executor function of the promise!');};const promise = new Promise(executorFn);
PromiseObject
A
Promise is an object that can be used to get the outcome of an asynchronous operation when that result is not instantly available.
Since JavaScript code runs in a non-blocking manner, promises become essential when we have to wait for some asynchronous operation without holding back the execution of the rest of the code.
The
.then() method returns a Promise, even if one or both of the handler functions are absent. Because of this, multiple
.then() methods can be chained together. This is known as composition.
In the code block, a couple of
.then() methods are chained together. Each method deals with the resolved value of their respective promises.
const promise = new Promise(resolve => setTimeout(() => resolve('dAlan'), 100));promise.then(res => {return res === 'Alan' ? Promise.resolve('Hey Alan!') : Promise.reject('Who are you?')}).then((res) => {console.log(res)}, (err) => {alert(err)}); | https://www.codecademy.com/learn/becp-22-async-javascript-and-http-requests/modules/wdcp-22-learn-javascript-syntax-promises/cheatsheet | CC-MAIN-2022-40 | refinedweb | 744 | 51.55 |
koolaid
Restful model framework for Express based on es7 decorators that drank all the babel koolaid
Warning: I have no idea if this is a good idea.
Motivation
Frameworks and examples built around Express tend to closely couple business logic to routes. This coupling often doesn't scale well (for access control) and makes it difficult for the business logic in one route to interact with the business logic on another route (unless you did a great job separating your models and controllers)
Also, decorators sounded cool.
Usage
Instead of putting controllers in one folder tree and models in another a la rails, koolaid provides a light DSL (yes, I know I just referenced two things with ruby associations, bare with me) the lets you specify your controller logic in the same place as your model logic.
n.b. I'm speaking in MVC terms (well, MC, since the V is pretty much always assumed to be toJSON()), but koolaid is not an MVC framework. It simply helps you mount your models on routes without tightly coupling model interactions to HTTP.
At this time, koolaid serves two main purposes:
- binding methods to routes
- access control plumbing.
- context (provided by continuation-local-storage)
The following examples will be a bit naive, but should provide a basic overview of what koolaid can do. See the documentation for RestModel for a base class that will likely get you started.
(The next few sections explain how to use koolaid's decorators - skip to the end to see how to initialize the library).
Route Binding
Let's say we have a metrics backend that has the concept of a counter and we can increment and decrement counters for different gauges. Finally, assume we have reasonable db implementation available to us.
class Counter { constructor(data) { data = data || {}; this.count = data.count || 0; } async increment() { this.count++; await db.write(this); }, async decrement() { this.count--; await db.write(this); } }
Now, let's expose that Counter over HTTP. We need to indicate the Counter is a @resource and specify routes for
POST /counter/:id/increment and
POST /count/:id/decrement.
@resource({basePath: `/counter`}) class Counter { constructor(data) { data = data || {}; this.count = data.count || 0; } @method({verb: `POST`, path: `/:id/increment`}) async increment() { this.count++; await db.write(this); } @method({verb: `POST`, path: `/:id/decrement`}) async decrement() { this.count--; await db.write(this); } }
Since these methods don't return anything, they'll return a 204 success instead of a 200.
But wait, you say. This looks like it'll expose some things to HTTP, but how does it deal with that
:id routeParam? Well, we need to add one more method:
findById()
findById()will be needed for any class that has non-static methods.
findById()is really a special case of
find(), so if your models inherit from RestModel,
findById()is implemented for you, but you'll need to implement
find().
); } }
Access Control
Now, let's assume we want admins to be able to fetch a particular counter's value (e.g. GET /counter/rpm) or all counters values (e.g. GET /counter) but that regular users shouldn't be able to.
Note: it's up to you figure out who the user is and populate
req.user before koolaid starts.
); } @method({verb: `GET`, path: `/`}) @access((user) => { return user.isAdmin(); }) static async getAll() { const counters = await db.getAll(); return counters; } @method({verb: `GET`, path: `/:id`}) @access((user) => { return user.isAdmin(); }) getOne() { return this; } }
Now, we've added both a static and a non-static GET method for retrieving counter data. Note that since the model gets loaded automatically for routes with the
:id route parameter, the non-static GET doesn't need to do much of anything - all the work is done internally.
Context
Context (for lack of a better term) is a way to pass arbitrary data throughout a resource's methods. Every method (static and non-static alike) of a class decorated with @resource gets an extra argument added to its parameter list,
ctx. ctx is a continuation-local-storage namespace with several things bound to it already; more can be bound by passing a function to koolaid's initializer (described later).
The built-in properties are
req: Express's HttpRequest
res: Express's HttpResponse
user: extraced from
req.user
Model: The constructor use for the current resource (useful when the method being invoked is defined in a parent class of the resource)
model: The model auto-loaded via the
:idroute parameter (mostly used internally since your logic is probably in a non-static method where
this === ctx.get('model')
logger: By default,
ctx.get('logger')is simply
console, but you can override it with your own.
For this section, let's use a slightly more abstract example. In this case, when we call our create method, we'll proxy to a third party service. If the third-party service call fails, we want to log that error, but send back a 502.
@resource({basePath: `/my-resource`}) class MyResource { static async create(ctx) { try { const data = await thirdParty.create() return new MyResource(data); } catch (e) { ctx.get(`logger`).error(e); throw new BadGateway(`It looks like we're having a problem with one of our vendors. Please try again later`); } } }
Now, let's get fancy. If we initialize koolaid with a custom context function, we can provide a more robust logging implementation.
function context(ctx) { const req = ctx.get(`req`); const user = ctx.get(`user`); const logger = Object.keys(console).reduce((logger, key) => { logger[key] = function(...args) { console[key]({ user: user.id, requestId: req.headers[`x-request-id`] }, ...args); }; return logger; }, {}); ctx.set(`logger`, logger); }
Now, we'll have some extra metadata for every log statement. Note that
context() gets called pretty early in koolaid's request handling process, so only
req,
res, and
user will be available. If you haven't already populated
req.user, you could instead inject user directly into
ctx here.
Initialization
This is all well and good, you say, but, how do we turn it on? Well, it initializes like most any other Express middleware.
import express from 'express'; import koolaid from 'koolaid'; const app = express(); app.use(koolaid({ models: path.join(__dirname, `models`) context(ctx) { ctx.set(`logger`, myFancyLogger) } }));
The only required property is
models which is a path to a directory containing your model definitions.
path.join is probably the easiest way to make sure the path resolves, but as longs as it's a path that can be found by requireDir, it'll work.
Optionally, you can also pass in a
context() function to add extra data to each invocation. | https://www.npmtrends.com/@ianwremmel/koolaid | CC-MAIN-2021-31 | refinedweb | 1,101 | 57.87 |
If you’re familiar with functions in Python, then you know that it’s quite common for one function to call another. In Python, it’s also possible for a function to call itself! A function that calls itself is said to be recursive, and the technique of employing a recursive function is called recursion.
It may seem peculiar for a function to call itself, but many types of programming problems are best expressed recursively. When you bump up against such a problem, recursion is an indispensable tool for you to have in your toolkit.
By the end of this tutorial, you’ll understand:
- What it means for a function to call itself recursively
- How the design of Python functions supports recursion
- What factors to consider when choosing whether or not to solve a problem recursively
- How to implement a recursive function in Python
Then you’ll study several Python programming problems that use recursion and contrast the recursive solution with a comparable non-recursive one.
Free Bonus: Get a sample chapter from Python Basics: A Practical Introduction to Python 3 to see how you can go from beginner to intermediate in Python with a complete curriculum, up to date for Python 3.9.
What Is Recursion?
The word recursion comes from the Latin word recurrere, meaning to run or hasten back, return, revert, or recur. Here are some online definitions of recursion:
- Dictionary.com: The act or process of returning or running back
- Wiktionary: The act of defining an object (usually a function) in terms of that object itself
- The Free Dictionary: A method of defining a sequence of objects, such as an expression, function, or set, where some number of initial objects are given and each successive object is defined in terms of the preceding objects
A recursive definition is one in which the defined term appears in the definition itself. Self-referential situations often crop up in real life, even if they aren’t immediately recognizable as such. For example, suppose you wanted to describe the set of people that make up your ancestors. You could describe them this way:
Notice how the concept that is being defined, ancestors, shows up in its own definition. This is a recursive definition.
In programming, recursion has a very precise meaning. It refers to a coding technique in which a function calls itself.
Why Use Recursion?
Most programming problems are solvable without recursion. So, strictly speaking, recursion usually isn’t necessary.
However, some situations particularly lend themselves to a self-referential definition—for example, the definition of ancestors shown above. If you were devising an algorithm to handle such a case programmatically, a recursive solution would likely be cleaner and more concise.
Traversal of tree-like data structures is another good example. Because these are nested structures, they readily fit a recursive definition. A non-recursive algorithm to walk through a nested structure is likely to be somewhat clunky, while a recursive solution will be relatively elegant. An example of this appears later in this tutorial.
On the other hand, recursion isn’t for every situation. Here are some other factors to consider:
- For some problems, a recursive solution, though possible, will be awkward rather than elegant.
- Recursive implementations often consume more memory than non-recursive ones.
- In some cases, using recursion may result in slower execution time.
Typically, the readability of the code will be the biggest determining factor. But it depends on the circumstances. The examples presented below should help you get a feel for when you should choose recursion.
Recursion in Python
When you call a function in Python, the interpreter creates a new local namespace so that names defined within that function don’t collide with identical names defined elsewhere. One function can call another, and even if they both define objects with the same name, it all works out fine because those objects exist in separate namespaces.
The same holds true if multiple instances of the same function are running concurrently. For example, consider the following definition:
def function(): x = 10 function()
When
function() executes the first time, Python creates a namespace and assigns
x the value
10 in that namespace. Then
function() calls itself recursively. The second time
function() runs, the interpreter creates a second namespace and assigns
10 to
x there as well. These two instances of the name
x are distinct from each another and can coexist without clashing because they are in separate namespaces.
Unfortunately, running
function() as it stands produces a result that is less than inspiring, as the following traceback shows:
>>> function() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in function File "<stdin>", line 3, in function File "<stdin>", line 3, in function [Previous line repeated 996 more times] RecursionError: maximum recursion depth exceeded
As written,
function() would in theory go on forever, calling itself over and over without any of the calls ever returning. In practice, of course, nothing is truly forever. Your computer only has so much memory, and it would run out eventually.
Python doesn’t allow that to happen. The interpreter limits the maximum number of times a function can call itself recursively, and when it reaches that limit, it raises a
RecursionError exception, as you see above.
Technical note: You can find out what Python’s recursion limit is with a function from the
sys module called
getrecursionlimit():
>>> from sys import getrecursionlimit >>> getrecursionlimit() 1000
You can change it, too, with
setrecursionlimit():
>>> from sys import setrecursionlimit >>> setrecursionlimit(2000) >>> getrecursionlimit() 2000
You can set it to be pretty large, but you can’t make it infinite.
There isn’t much use for a function to indiscriminately call itself recursively without end. It’s reminiscent of the instructions that you sometimes find on shampoo bottles: “Lather, rinse, repeat.” If you were to follow these instructions literally, you’d shampoo your hair forever!
This logical flaw has evidently occurred to some shampoo manufacturers, because some shampoo bottles instead say “Lather, rinse, repeat as necessary.” That provides a termination condition to the instructions. Presumably, you’ll eventually feel your hair is sufficiently clean to consider additional repetitions unnecessary. Shampooing can then stop.
Similarly, a function that calls itself recursively must have a plan to eventually stop. Recursive functions typically follow this pattern:
- There are one or more base cases that are directly solvable without the need for further recursion.
- Each recursive call moves the solution progressively closer to a base case.
You’re now ready to see how this works with some examples.
The first example is a function called
countdown(), which takes a positive number as an argument and prints the numbers from the specified argument down to zero:
>>> def countdown(n): ... print(n) ... if n == 0: ... return # Terminate recursion ... else: ... countdown(n - 1) # Recursive call ... >>> countdown(5) 5 4 3 2 1 0
Notice how
countdown() fits the paradigm for a recursive algorithm described above:
- The base case occurs when
nis zero, at which point recursion stops.
- In the recursive call, the argument is one less than the current value of
n, so each recursion moves closer to the base case.
Note: For simplicity,
countdown() doesn’t check its argument for validity. If
n is either a non-integer or negative, you’ll get a
RecursionError exception because the base case is never reached.
The version of
countdown() shown above clearly highlights the base case and the recursive call, but there’s a more concise way to express it:
def countdown(n): print(n) if n > 0: countdown(n - 1)
Here’s one possible non-recursive implementation for comparison:
>>> def countdown(n): ... while n >= 0: ... print(n) ... n -= 1 ... >>> countdown(5) 5 4 3 2 1 0
This is a case where the non-recursive solution is at least as clear and intuitive as the recursive one, and probably more so.
Calculate Factorial
The next example involves the mathematical concept of factorial. The factorial of a positive integer n, denoted as n!, is defined as follows:
In other words, n! is the product of all integers from 1 to n, inclusive.
Factorial so lends itself to recursive definition that programming texts nearly always include it as one of the first examples. You can express the definition of n! recursively like this:
As with the example shown above, there are base cases that are solvable without recursion. The more complicated cases are reductive, meaning that they reduce to one of the base cases:
- The base cases (n = 0 or n = 1) are solvable without recursion.
- For values of n greater than 1, n! is defined in terms of (n - 1)!, so the recursive solution progressively approaches the base case.
For example, recursive computation of 4! looks like this:
The calculations of 4!, 3!, and 2! suspend until the algorithm reaches the base case where n = 1. At that point, 1! is computable without further recursion, and the deferred calculations run to completion.
Define a Python Factorial Function
Here’s a recursive Python function to calculate factorial. Note how concise it is and how well it mirrors the definition shown above:
>>> def factorial(n): ... return 1 if n <= 1 else n * factorial(n - 1) ... >>> factorial(4) 24
A little embellishment of this function with some
print() statements gives a clearer idea of the call and return sequence:
>>> def factorial(n): ... print(f"factorial() called with n = {n}") ... return_value = 1 if n <= 1 else n * factorial(n -1) ... print(f"-> factorial({n}) returns {return_value}") ... return return_value ... >>> factorial(4) factorial() called with n = 4 factorial() called with n = 3 factorial() called with n = 2 factorial() called with n = 1 -> factorial(1) returns 1 -> factorial(2) returns 2 -> factorial(3) returns 6 -> factorial(4) returns 24 24
Notice how all the recursive calls stack up. The function gets called with
n =
4,
3,
2, and
1 in succession before any of the calls return. Finally, when
n is
1, the problem can be solved without any more recursion. Then each of the stacked-up recursive calls unwinds back out, returning
1,
2,
6, and finally
24 from the outermost call.
Recursion isn’t necessary here. You could implement
factorial() iteratively using a
for loop:
>>> def factorial(n): ... return_value = 1 ... for i in range(2, n + 1): ... return_value *= i ... return return_value ... >>> factorial(4) 24
You can also implement factorial using Python’s
reduce(), which you can import from the
functools module:
>>> from functools import reduce >>> def factorial(n): ... return reduce(lambda x, y: x * y, range(1, n + 1) or [1]) ... >>> factorial(4) 24
Again, this shows that if a problem is solvable with recursion, there will also likely be several viable non-recursive solutions as well. You’ll typically choose based on which one results in the most readable and intuitive code.
Another factor to take into consideration is execution speed. There can be significant performance differences between recursive and non-recursive solutions. In the next section, you’ll explore these differences a little further.
Speed Comparison of Factorial Implementations
To evaluate execution time, you can use a function called
timeit() from a module that is also called
timeit. This function supports a number of different formats, but you’ll use the following format in this tutorial:
timeit(<command>, setup=<setup_string>, number=<iterations>)
timeit() first executes the commands contained in the specified
<setup_string>. Then it executes
<command> the given number of
<iterations> and reports the cumulative execution time in seconds:
>>> from timeit import timeit >>> timeit("print(string)", setup="string='foobar'", number=100) foobar foobar foobar . . [100 repetitions] . foobar 0.03347089999988384
Here, the
setup parameter assigns
string the value
'foobar'. Then
timeit() prints
string one hundred times. The total execution time is just over 3/100 of a second.
The examples shown below use
timeit() to compare the recursive, iterative, and
reduce() implementations of factorial from above. In each case,
setup_string contains a setup string that defines the relevant
factorial() function.
timeit() then executes
factorial(4) a total of ten million times and reports the aggregate execution.
First, here’s the recursive version:
>>>>> from timeit import timeit >>> timeit("factorial(4)", setup=setup_string, number=10000000) Recursive: 4.957105500000125
Next up is the iterative implementation:
>>>>> from timeit import timeit >>> timeit("factorial(4)", setup=setup_string, number=10000000) Iterative: 3.733752099999947
Last, here’s the version that uses
reduce():
>>>>> from timeit import timeit >>> timeit("factorial(4)", setup=setup_string, number=10000000) reduce(): 8.101526299999932
In this case, the iterative implementation is the fastest, although the recursive solution isn’t far behind. The method using
reduce() is the slowest. Your mileage will probably vary if you try these examples on your own machine. You certainly won’t get the same times, and you may not even get the same ranking.
Does it matter? There’s a difference of almost four seconds in execution time between the iterative implementation and the one that uses
reduce(), but it took ten million calls to see it.
If you’ll be calling a function many times, you might need to take execution speed into account when choosing an implementation. On the other hand, if the function will run relatively infrequently, then the difference in execution times will probably be negligible. In that case, you’d be better off choosing the implementation that seems to express the solution to the problem most clearly.
For factorial, the timings recorded above suggest a recursive implementation is a reasonable choice.
Frankly, if you’re coding in Python, you don’t need to implement a factorial function at all. It’s already available in the standard
math module:
>>> from math import factorial >>> factorial(4) 24
Perhaps it might interest you to know how this performs in the timing test:
>>>>> from timeit import timeit >>> timeit("factorial(4)", setup=setup_string, number=10000000) 0.3724050999999946
Wow!
math.factorial() performs better than the best of the other three implementations shown above by roughly a factor of 10.
Technical note: The fact that
math.factorial() is so much speedier probably has nothing to do with whether it’s implemented recursively. More likely it’s because the function is implemented in C rather than Python. For more reading on Python and C, see these resources:
A function implemented in C will virtually always be faster than a corresponding function implemented in pure Python.
Traverse a Nested List
The next example involves visiting each item in a nested list structure. Consider the following Python list:
names = [ "Adam", [ "Bob", [ "Chet", "Cat", ], "Barb", "Bert" ], "Alex", [ "Bea", "Bill" ], "Ann" ]
As the following diagram shows,
names contains two sublists. The first of these sublists itself contains another sublist:
Suppose you wanted to count the number of leaf elements in this list—the lowest-level
str objects—as though you’d flattened out the list. The leaf elements are
"Adam",
"Bob",
"Chet",
"Cat",
"Barb",
"Bert",
"Alex",
"Bea",
"Bill", and
"Ann", so the answer should be
10.
Just calling
len() on the list doesn’t give the correct answer:
>>> len(names) 5
len() counts the objects at the top level of
names, which are the three leaf elements
"Adam",
"Alex", and
"Ann" and two sublists
["Bob", ["Chet", "Cat"], "Barb", "Bert"] and
["Bea", "Bill"]:
>>> for index, item in enumerate(names): ... print(index, item) ... 0 Adam 1 ['Bob', ['Chet', 'Cat'], 'Barb', 'Bert'] 2 Alex 3 ['Bea', 'Bill'] 4 Ann
What you need here is a function that traverses the entire list structure, sublists included. The algorithm goes something like this:
- Walk through the list, examining each item in turn.
- If you find a leaf element, then add it to the accumulated count.
- If you encounter a sublist, then do the following:
- Drop down into that sublist and similarly walk through it.
- Once you’ve exhausted the sublist, go back up, add the elements from the sublist to the accumulated count, and resume the walk through the parent list where you left off.
Note the self-referential nature of this description: Walk through the list. If you encounter a sublist, then similarly walk through that list. This situation begs for recursion!
Traverse a Nested List Recursively
Recursion fits this problem very nicely. To solve it, you need to be able to determine whether a given list item is leaf item or not. For that, you can use the built-in Python function
isinstance().
In the case of the
names list, if an item is an instance of type
list, then it’s a sublist. Otherwise, it’s a leaf item:
>>> names ['Adam', ['Bob', ['Chet', 'Cat'], 'Barb', 'Bert'], 'Alex', ['Bea', 'Bill'], 'Ann'] >>> names[0] 'Adam' >>> isinstance(names[0], list) False >>> names[1] ['Bob', ['Chet', 'Cat'], 'Barb', 'Bert'] >>> isinstance(names[1], list) True >>> names[1][1] ['Chet', 'Cat'] >>> isinstance(names[1][1], list) True >>> names[1][1][0] 'Chet' >>> isinstance(names[1][1][0], list) False
Now you have the tools in place to implement a function that counts leaf elements in a list, accounting for sublists recursively:
def count_leaf_items(item_list): """Recursively counts and returns the number of leaf items in a (potentially nested) list. """ count = 0 for item in item_list: if isinstance(item, list): count += count_leaf_items(item) else: count += 1 return count
If you run
count_leaf_items() on several lists, including the
names list defined above, you get this:
>>> count_leaf_items([1, 2, 3, 4]) 4 >>> count_leaf_items([1, [2.1, 2.2], 3]) 4 >>> count_leaf_items([]) 0 >>> count_leaf_items(names) 10 >>> # Success!
As with the factorial example, adding some
print() statements helps to demonstrate the sequence of recursive calls and return values:
1def count_leaf_items(item_list): 2 """Recursively counts and returns the 3 number of leaf items in a (potentially 4 nested) list. 5 """ 6 print(f"List: {item_list}") 7 count = 0 8 for item in item_list: 9 if isinstance(item, list): 10 print("Encountered sublist") 11 count += count_leaf_items(item) 12 else: 13 print(f"Counted leaf item \"{item}\"") 14 count += 1 15 16 print(f"-> Returning count {count}") 17 return count
Here’s a synopsis of what’s happening in the example above:
- Line 9:
isinstance(item, list)is
True, so
count_leaf_items()has found a sublist.
- Line 11: The function calls itself recursively to count the items in the sublist, then adds the result to the accumulating total.
- Line 12:
isinstance(item, list)is
False, so
count_leaf_items()has encountered a leaf item.
- Line 14: The function increments the accumulating total by one to account for the leaf item.
Note: To keep things simple, this implementation assumes the list passed to
count_leaf_items() contains only leaf items or sublists, not any other type of composite object like a dictionary or tuple.
The output from
count_leaf_items() when it’s executed on the
names list now looks like this:
>>> count_leaf_items(names) List: ['Adam', ['Bob', ['Chet', 'Cat'], 'Barb', 'Bert'], 'Alex', ['Bea', 'Bill'], 'Ann'] Counted leaf item "Adam" Encountered sublist List: ['Bob', ['Chet', 'Cat'], 'Barb', 'Bert'] Counted leaf item "Bob" Encountered sublist List: ['Chet', 'Cat'] Counted leaf item "Chet" Counted leaf item "Cat" -> Returning count 2 Counted leaf item "Barb" Counted leaf item "Bert" -> Returning count 5 Counted leaf item "Alex" Encountered sublist List: ['Bea', 'Bill'] Counted leaf item "Bea" Counted leaf item "Bill" -> Returning count 2 Counted leaf item "Ann" -> Returning count 10 10
Each time a call to
count_leaf_items() terminates, it returns the count of leaf elements it tallied in the list passed to it. The top-level call returns
10, as it should.
Traverse a Nested List Non-Recursively
Like the other examples shown so far, this list traversal doesn’t require recursion. You can also accomplish it iteratively. Here’s one possibility:
def count_leaf_items(item_list): """Non-recursively counts and returns the number of leaf items in a (potentially nested) list. """ count = 0 stack = [] current_list = item_list i = 0 while True: if i == len(current_list): if current_list == item_list: return count else: current_list, i = stack.pop() i += 1 continue if isinstance(current_list[i], list): stack.append([current_list, i]) current_list = current_list[i] i = 0 else: count += 1 i += 1
If you run this non-recursive version of
count_leaf_items() on the same lists as shown previously, you get the same results:
>>> count_leaf_items([1, 2, 3, 4]) 4 >>> count_leaf_items([1, [2.1, 2.2], 3]) 4 >>> count_leaf_items([]) 0 >>> count_leaf_items(names) 10 >>> # Success!
The strategy employed here uses a stack to handle the nested sublists. When this version of
count_leaf_items() encounters a sublist, it pushes the list that is currently in progress and the current index in that list onto a stack. Once it has counted the sublist, the function pops the parent list and index from the stack so it can resume counting where it left off.
In fact, essentially the same thing happens in the recursive implementation as well. When you call a function recursively, Python saves the state of the executing instance on a stack so the recursive call can run. When the recursive call finishes, the state is popped from the stack so that the interrupted instance can resume. It’s the same concept, but with the recursive solution, Python is doing the state-saving work for you.
Notice how concise and readable the recursive code is when compared to the non-recursive version:
This is a case where using recursion is definitely an advantage.
Detect Palindromes
The choice of whether to use recursion to solve a problem depends in large part on the nature of the problem. Factorial, for example, naturally translates to a recursive implementation, but the iterative solution is quite straightforward as well. In that case, it’s arguably a toss-up.
The list traversal problem is a different story. In that case, the recursive solution is very elegant, while the non-recursive one is cumbersome at best.
For the next problem, using recursion is arguably silly.
A palindrome is a word that reads the same backward as it does forward. Examples include the following words:
- Racecar
- Level
- Kayak
- Reviver
- Civic
If asked to devise an algorithm to determine whether a string is palindromic, you would probably come up with something like “Reverse the string and see if it’s the same as the original.” You can’t get much plainer than that.
Even more helpfully, Python’s
[::-1] slicing syntax for reversing a string provides a convenient way to code it:
>>> def is_palindrome(word): ... """Return True if word is a palindrome, False if not.""" ... return word == word[::-1] ... >>> is_palindrome("foo") False >>> is_palindrome("racecar") True >>> is_palindrome("troglodyte") False >>> is_palindrome("civic") True
This is clear and concise. There’s hardly any need to look for an alternative. But just for fun, consider this recursive definition of a palindrome:
- Base cases: An empty string and a string consisting of a single character are inherently palindromic.
- Reductive recursion: A string of length two or greater is a palindrome if it satisfies both of these criteria:
- The first and last characters are the same.
- The substring between the first and last characters is a palindrome.
Slicing is your friend here as well. For a string
word, indexing and slicing give the following substrings:
- The first character is
word[0].
- The last character is
word[-1].
- The substring between the first and last characters is
word[1:-1].
So you can define
is_palindrome() recursively like this:
>>> def is_palindrome(word): ... """Return True if word is a palindrome, False if not.""" ... if len(word) <= 1: ... return True ... else: ... return word[0] == word[-1] and is_palindrome(word[1:-1]) ... >>> # Base cases >>> is_palindrome("") True >>> is_palindrome("a") True >>> # Recursive cases >>> is_palindrome("foo") False >>> is_palindrome("racecar") True >>> is_palindrome("troglodyte") False >>> is_palindrome("civic") True
It’s an interesting exercise to think recursively, even when it isn’t especially necessary.
Sort With Quicksort
The final example presented, like the nested list traversal, is a good example of a problem that very naturally suggests a recursive approach. The Quicksort algorithm is an efficient sorting algorithm developed by British computer scientist Tony Hoare in 1959.
Quicksort is a divide-and-conquer algorithm. Suppose you have a list of objects to sort. You start by choosing an item in the list, called the pivot item. This can be any item in the list. You then partition the list into two sublists based on the pivot item and recursively sort the sublists.
The steps of the algorithm are as follows:
- Choose the pivot item.
- Partition the list into two sublists:
- Those items that are less than the pivot item
- Those items that are greater than the pivot item
- Quicksort the sublists recursively.
Each partitioning produces smaller sublists, so the algorithm is reductive. The base cases occur when the sublists are either empty or have one element, as these are inherently sorted.
Choosing the Pivot Item
The Quicksort algorithm will work no matter what item in the list is the pivot item. But some choices are better than others. Remember that when partitioning, two sublists that are created: one with items that are less than the pivot item and one with items that are greater than the pivot item. Ideally, the two sublists are of roughly equal length.
Imagine that your initial list to sort contains eight items. If each partitioning results in sublists of roughly equal length, then you can reach the base cases in three steps:
At the other end of the spectrum, if your choice of pivot item is especially unlucky, each partition results in one sublist that contains all the original items except the pivot item and another sublist that is empty. In that case, it takes seven steps to reduce the list to the base cases:
The Quicksort algorithm will be more efficient in the first case. But you’d need to know something in advance about the nature of the data you’re sorting in order to systematically choose optimal pivot items. In any case, there isn’t any one choice that will be the best for all cases. So if you’re writing a Quicksort function to handle the general case, the choice of pivot item is somewhat arbitrary.
The first item in the list is a common choice, as is the last item. These will work fine if the data in the list is fairly randomly distributed. However, if the data is already sorted, or even nearly so, then these will result in suboptimal partitioning like that shown above. To avoid this, some Quicksort algorithms choose the middle item in the list as the pivot item.
Another option is to find the median of the first, last, and middle items in the list and use that as the pivot item. This is the strategy used in the sample code below.
Implementing the Partitioning
Once you’ve chosen the pivot item, the next step is to partition the list. Again, the goal is to create two sublists, one containing the items that are less than the pivot item and the other containing those that are greater.
You could accomplish this directly in place. In other words, by swapping items, you could shuffle the items in the list around until the pivot item is in the middle, all the lesser items are to its left, and all the greater items are to its right. Then, when you Quicksort the sublists recursively, you’d pass the slices of the list to the left and right of the pivot item.
Alternately, you can use Python’s list manipulation capability to create new lists instead of operating on the original list in place. This is the approach taken in the code below. The algorithm is as follows:
- Choose the pivot item using the median-of-three method described above.
- Using the pivot item, create three sublists:
- The items in the original list that are less than the pivot item
- The pivot item itself
- The items in the original list that are greater than the pivot item
- Recursively Quicksort lists 1 and 3.
- Concatenate all three lists back together.
Note that this involves creating a third sublist that contains the pivot item itself. One advantage to this approach is that it smoothly handles the case where the pivot item appears in the list more than once. In that case, list 2 will have more than one element.
Using the Quicksort Implementation
Now that the groundwork is in place, you are ready to move on to the Quicksort algorithm. Here’s the Python code:
1import statistics 2 3def quicksort(numbers): 4 if len(numbers) <= 1: 5 return numbers 6 else: 7 pivot = statistics.median( 8 [ 9 numbers[0], 10 numbers[len(numbers) // 2], 11 numbers[-1] 12 ] 13 ) 14 items_less, pivot_items, items_greater = ( 15 [n for n in numbers if n < pivot], 16 [n for n in numbers if n == pivot], 17 [n for n in numbers if n > pivot] 18 ) 19 20 return ( 21 quicksort(items_less) + 22 pivot_items + 23 quicksort(items_greater) 24 )
This is what each section of
quicksort() is doing:
- Line 4: The base cases where the list is either empty or has only a single element
- Lines 7 to 13: Calculation of the pivot item by the median-of-three method
- Lines 14 to 18: Creation of the three partition lists
- Lines 20 to 24: Recursive sorting and reassembly of the partition lists
Note: This example has the advantage of being succinct and relatively readable. However, it isn’t the most efficient implementation. In particular, the creation of the partition lists on lines 14 to 18 involves iterating through the list three separate times, which isn’t optimal from the standpoint of execution time.
Here are some examples of
quicksort() in action:
>>> # Base cases >>> quicksort([]) [] >>> quicksort([42]) [42] >>> # Recursive cases >>> quicksort([5, 2, 6, 3]) [2, 3, 5, 6] >>> quicksort([10, -3, 21, 6, -8]) [-8, -3, 6, 10, 21]
For testing purposes, you can define a short function that generates a list of random numbers between
1 and
100:
import random def get_random_numbers(length, minimum=1, maximum=100): numbers = [] for _ in range(length): numbers.append(random.randint(minimum, maximum)) return numbers
Now you can use
get_random_numbers() to test
quicksort():
>>> numbers = get_random_numbers(20) >>> numbers [24, 4, 67, 71, 84, 63, 100, 94, 53, 64, 19, 89, 48, 7, 31, 3, 32, 76, 91, 78] >>> quicksort(numbers) [3, 4, 7, 19, 24, 31, 32, 48, 53, 63, 64, 67, 71, 76, 78, 84, 89, 91, 94, 100] >>> numbers = get_random_numbers(15, -50, 50) >>> numbers [-2, 14, 48, 42, -48, 38, 44, -25, 14, -14, 41, -30, -35, 36, -5] >>> quicksort(numbers) [-48, -35, -30, -25, -14, -5, -2, 14, 14, 36, 38, 41, 42, 44, 48] >>> quicksort(get_random_numbers(10, maximum=500)) [49, 94, 99, 124, 235, 287, 292, 333, 455, 464] >>> quicksort(get_random_numbers(10, 1000, 2000)) [1038, 1321, 1530, 1630, 1835, 1873, 1900, 1931, 1936, 1943]
To further understand how
quicksort() works, see the diagram below. This shows the recursion sequence when sorting a twelve-element list:
In the first step, the first, middle, and last list values are
31,
92, and
28, respectively. The median is
31, so that becomes the pivot item. The first partition then consists of the following sublists:
Each sublist is subsequently partitioned recursively in the same manner until all the sublists either contain a single element or are empty. As the recursive calls return, the lists are reassembled in sorted order. Note that in the second-to-last step on the left, the pivot item
18 appears in the list twice, so the pivot item list has two elements.
Conclusion
That concludes your journey through recursion, a programming technique in which a function calls itself. Recursion isn’t by any means appropriate for every task. But some programming problems virtually cry out for it. In those situations, it’s a great technique to have at your disposal.
In this tutorial, you learned:
- What it means for a function to call itself recursively
- How the design of Python functions supports recursion
- What factors to consider when choosing whether or not to solve a problem recursively
- How to implement a recursive function in Python
You also saw several examples of recursive algorithms and compared them to corresponding non-recursive solutions.
You should now be in a good position to recognize when recursion is called for and be ready to use it confidently when it’s needed! If you want to explore more about recursion in Python, then check out Thinking Recursively in Python. | https://realpython.com/python-recursion/ | CC-MAIN-2021-49 | refinedweb | 5,348 | 51.89 |
See PR 45819 for additional information (such as how to reproduce the bug, problem analysis etc).
We noticed that when building an executable with ThinLTO (cache enabled), from time to time unexpected cache misses happening and new cache entries are generated when not needed. It doesn't happen in a predictable manner (i.e. a cache miss might or might not happen).
‘ExportList’(‘DenseSet’ of ValueInfo's) is used for the calculation of a hash value (LTO Cache Key).
Though the elements of this DenseSet are the same all the time, the order in which the iterator walks through the elements of this set might (or might not) be different the next time when we relink with ThinLTO. If the order happens to be different, we will generate a different hash value and a cache miss will happen.
Looking at the implementation of:
template <> struct DenseMapInfo<ValueInfo> (see ModuleSummaryIndex.h)
we notice that the hash value that is being used by a DenseMap is a pointer:
static unsigned getHashValue(ValueInfo I) { return (uintptr_t)I.getRef(); }
We cannot guarantee that the compiler will allocate memory at the same
address for the object when we run the executable for the second time.
The same problem is applicable to ImportList as well.
A proposed solution here is:
(a) Create a vector of Export List's GUID (int64 integers) and sort this vector. Iterate through this sorted vector and use its elements for cache entry value calculations.
(b) Create a vector of ImportList's ModuleIDs (StringRefs) and sort this vector in lexicographical order. Iterate through this sorted vector. Use ModuleID as a key for obtaining an iterator to corresponding element of the ImportList, which has StringMap type. Use ModuleID (first element of the pair) and the list of GUIDs of all functions to import for this module (second element of the pair) for cache entry value calculations.
Note: it looks like I just spotted another potential problem in here. If confirmed, I will do a separate fix for it.
We use unordered_set container for storing GUIDs of all the functions to import for a source module.
using FunctionsToImportTy = std::unordered_set<GlobalValue::GUID>;
That means that in theory, when when we iterate through unordered_set for the second time, we could get a different order of the elements in this set, computing a different hash value and causing a potential cache miss. | https://reviews.llvm.org/D79772 | CC-MAIN-2020-24 | refinedweb | 396 | 53 |
to our www pages. Here you'll find some information about our dogs, our
breeding and our ideas. Enjoy !
Our first OES came in 1985. Happy's pedigree was a combination of German and
English bloodlines, Happy became multi champion, our first BIS winner and
sire of many pups. We were lucky enough to have a dog of Happy's quality as
our first OES. With Happy we entered the magnificent world of showing
bobtails. At that time we used to breed Afghan hounds and American Cockers
where we also bred champions and BIS winners.
Our first OES bitch was Pearly, we had three litters with her but we never
achieved the quality that we wished for.
Then came Monti, our first UK import and a great show dog. Monti died early
and we were completely devastated. Our next dog was Winsti again an UK
import and again A great show dog as well as a superb stud dog. Winsti is
still here with us.
It was time for a bitch, the foundation bitch and that was the most
important moment for our future breeding. Now we can say that we took
correct decision when we bought Bubi. With Bubi we were able to achieve
top results on the highest World level in the first step. If we can give an
advice that would be: foundation bitch is the key factor to success.
So you have to put all the efforts while purchasing a foundation bitch. The
rest is a combination of knowledge, talent and luck. From Bubi's first
litter we kept Berti, then from Berti's first litter we kept Kimi. Kimi has
her first litter now but unfortunately without bitches in it, only boys !!!
During the years our dogs have been winning around the world, from the
local shows to the World's biggest shows and specialties, under the judges
from all parts of the World. We are trying to breed dogs that can win under
breed specialist as well as under allrounders because we think that
a good dog is a good dog, here , there and anywhere.
Living and showing in the different parts of the world gave us many
opportunities to see dogs and to meet breeders. We consider that
another very important factor in successful breeding. You will find many
information on our www pages but if you have any questions just drop us a
line. We deliberately didn't put pedigrees here but all the pedigrees are
available, just let us know, we would like to have personal contact with
you.
All our show pups are sold with world's unique guaranty:
"guaranty of championship"
ask for a details. And please keep on mind that we are not just selling you
the puppy. We'll try to help you on every step and to keep the contact with
you so you'll get benefits from our experience and knowledge. You'll always
get our support . We give you much more then just a cute puppy !
We are proud that we have bred or owned bobtails that have multiply obtained
the following titles:
International Champion
World Junior Champion
European Champion
European Junior Champion
Argentinean Champion
Austrian Champion
Austrian Junior Champion
Austrian Bundessieger
Austrian Junior Bundessieger
Chilean Champion
Chilean Grand Champion
Croatian Champion
Croatian Junior Champion
Danish Champion
German-VDH Champion
German-Club Champion
Gibraltar
Champion
Greek Champion
Hungarian Champion
Italian Champion
Italian Junior Champion
Luxemburg Champion
Luxemburg Junior Champion
San Marino Champion
Slovenian Champion
Swedish Champion
Uruguay Champion
Yugoslav Champion
Club Winners in many countries
All Breeds and Specialty BIS winners ..... | http://www.reata-oes.com/docs/introduction-body.htm | crawl-001 | refinedweb | 597 | 61.36 |
0
My code is:
import re import urllib import urllib2 webURL="" #the website is connect=urllib.urlopen(webURL) #connect to this website htmlDoc=connect.read()#get the html document from this website patternIN="Permanent Address" # Where to begin to keep the text patternOUT="</tr>" # Where to end to keep the text (after the begining) keepText=False # Do we keep the text ? address="" # We init the address # Now, we read the file to keep the text for line in htmlDoc: if keepText: address+=line.strip() # We store the line, stripping the \n if patternOUT in line: # Next line won't be kept any more keepText=False if patternIN in line: # Starting from next line, we keep the text keepText=True # Now, it's time to clean all this rTags=re.compile("<.*?>") # the regexp to recognise any tag address=rTags.sub(":", address) # we replace the tags with ":" (I could have chosen anything else, # especially if there is some ":" in the address rSep=re.compile(":+") # Now, we replace any number of ":" with a \n address=rSep.sub("\n", address) print address
For line 15...whats wrong there?why i cannot do the for loop in the html file? | https://www.daniweb.com/programming/software-development/threads/225759/help-why-i-cannot-read-the-html-file | CC-MAIN-2016-40 | refinedweb | 195 | 72.76 |
GenericSetup for CPS, CMF and Zope
GenericSetup is a framework to describe the configuration of a Zope site as a set of XML files (and sometimes other associated files). It can import profiles, which may create objects or change their configuration, and export profiles, which makes a snapshot of the configuration and writes it to a set of XML files.
GenericSetup provides a tool that can store snapshots of a configuration in the ZODB itself, where it can be examined and even modified. It can also do diffs between two snapshots, which is very useful to find out what changed in a configuration (it's a good idea to take a full snapshot anytime some significant changes are made to the configuration).
GenericSetup differentiates between Base and Extension profiles. A Base profile is a profile that describes "everything". When it is imported, it removes and overwrites any previous configuration. An Extension profile is a profile designed to be incrementally added on top of a previously existing configuration. It may of course overwrite some settings, or even in some case remove objects, but its goal is generally to add optional configuration on top of a main one
GenericSetup is based on a small number of concepts: the toolset, and some import and export steps. GenericSetup provides a framework where import and export steps can be written simply using Zope 3 adapters.
GenericSetup was born as "CMFSetup" but was later made generic and can be used by any Zope 2 application. It is now available as a separate product. A subclass with additional features is used in CPS.
Setup tool
The setup tool is called setup_tool by default, and portal_setup in CPS (if missing, it can be instantiated by selecting "CPS Tools" from the ZMI Add menu).
At a given time, the setup tool knows about a few things:
- the currently selected profile, which may point to an area in the filesystem where a profile has been registered, or to a path in the ZODB where a snapshot was taken,
- the current toolset,
- the current import steps,
- the current export steps.
When a new profile it selected, its toolset, import steps and export steps XML files are loaded and merged with the tool's current ones.
Any import or export is based on the full toolset or import/export steps (even if the currently selected profile is an extension profile), but the source of XML configuration files depends solely on the currently selected profile.
Toolset
The file toolset.xml describes the Toolset, which is the set of tools needed in a given configuration. (While the name "tool" would suggest CMF, it's just a set of objects that can be instantiated at the root of the configured site.)
Beyond being based on data that is updated when an extension profile is selected, the toolset is a normal import/export step.
When the toolset is imported, all the objects in the toolset are instantiated if they're missing or if their class doesn't match with what it's supposed to be.
An excerpt of toolset.xml for the CPS Tree Tool would be:
<?xml version="1.0"?>
<tool-setup>
...
<required tool_id="portal_trees"
class="Products.CPSCore.TreeTool.TreeTool"/>
...
</tool-setup>
Import steps
Import steps (import_steps.xml) describe a set of configuration steps available for import, and their dependencies. A step is just the dotted name of a function, and the dependencies are simply the steps that have to be run before this one can be done (most steps depend on the toolset, because they need a base tool to be instantiated before it can be configured).
While an import step can do anything it likes, GenericSetup provides a framework based on Zope 3 adapters to simply describe how a single object is imported from XML, and to recurse among objects to create or configure them one by one.
Purge
During import, there are two possible behaviors, corresponding to the two kinds of profiles. For a Base profile, the import happens in "purge" mode, while for an extension profile the import doesn't purge. The functions and adapters doing the import have to take that into account when they read a profile.
In purge mode, every previous configuration has to be removed, so that the result is indistinguishable from an install from scratch. In non-purge mode, care must be taken not to overwrite or remove settings or objects which are not explicitly specified in the imported XML file.
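Import code can ask the context which of the two modes applies through its shouldPurge() method. A minimal sketch of a step handler honoring the flag (the tool id, file name and property used here are invented for the example) could be:

from Products.CMFCore.utils import getToolByName

def importSomeSettings(context):
    """Import step that behaves differently in purge mode."""
    site = context.getSite()
    tool = getToolByName(site, 'portal_sometool')  # hypothetical tool

    if context.shouldPurge():
        # Base profile: wipe the previous configuration first.
        tool.manage_changeProperties(title='')

    body = context.readDataFile('sometool.xml')
    if body is None:
        # Extension profile that doesn't ship this file: leave the
        # existing configuration untouched.
        return
    # ... parse body and apply only what it explicitly specifies ...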
An excerpt of import_steps.xml for the CPS Tree Tool would be:
<?xml version="1.0"?>
<import-steps>
...
<import-step
<dependency step="toolset"/>
Import tree tool and tree caches.
</import-step>
...
</import-steps>
Export steps
Export steps (export_steps.xml) describe the list of steps available for export. An export step is used in exactly the same way as an import step, except for the fact that there are no dependencies between export steps, and that instead of being read, XML files are written.
One important thing to note is that export steps cannot do the "incremental exports" that many people expect. When an extension profile is read by the import steps, only the available XML files for that profile are read. However, when writing, there's no way to choose which XML files are relevant, so the whole profile for that step is written (and recursion is done in all subobjects).
It's possible that future versions of GenericSetup will have some capabilities to do incremental exports, but this is not possible for now.
An excerpt of export_steps.xml for the CPS Tree Tool would be:
<?xml version="1.0"?>
<export-steps>
...
<export-step id="trees"
             handler="...exportimport.exportTreeTool"
             title="Tree Tool">
 Export tree tool and tree caches.
</export-step>
...
</export-steps>
Adapters
The standard work done in an import or export step is to find the base tool and call importObjects or exportObjects on it; these are recursive functions that take each object, find an adapter for it describing how the XML import or export is done, and call that adapter.
For the CPS Tree Tool, the import steps and export steps call the following functions:
from Products.CMFCore.utils import getToolByName
from Products.GenericSetup.utils import exportObjects, importObjects

def exportTreeTool(context):
    """Export Tree tool and tree caches as a set of XML files.
    """
    site = context.getSite()
    tool = getToolByName(site, 'portal_trees', None)
    if tool is None:
        logger = context.getLogger('trees')
        logger.info("Nothing to export.")
        return
    exportObjects(tool, '', context)

def importTreeTool(context):
    """Import Tree tool and tree caches as a set of XML files.
    """
    site = context.getSite()
    tool = getToolByName(site, 'portal_trees')
    importObjects(tool, '', context)
This is pretty much boilerplate and could even be simplified in the future through ZCML declarations. Above, '' simply refers to the root of the profile directory.
The adapters are multi-adapters, adapting both an object and an import context (called "environ" in GenericSetup), to the IBody interface that basically describes a file body. They can be registered through ZCML using the standard statement:
<adapter
factory=".exportimport.TreeToolXMLAdapter"
provides="Products.GenericSetup.interfaces.IBody"
for=".interfaces.ITreeTool
Products.GenericSetup.interfaces.ISetupEnviron"
/>
This assumes of course that the exported object is described through an interface, for instance here declared as:
from zope.interface import Interface
class ITreeTool(Interface):
"""Tree Tool.
"""
The interface is implemented by the object's class with:
from zope.interface import implements
class TreeTool(UniqueObject, Folder):
"""Tree Tool that caches information about the site's hierarchies.
"""
implements(ITreeTool)
id = 'portal_trees'
meta_type = 'CPS Tree Tool'
...
When doing an export, the adapters build a DOM tree for the configuration. When doing an import, they read the DOM tree and create or modify properties as needed. The standard adapters don't create subobjects or recurse in them, this is left to the importObjects function.
These adapters can be written easily because GenericSetup provides helpers for the common cases of objects configured only through standard Zope 2 properties (PropertyManager), or having subobjects (ObjectManager).
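As a rough sketch (not CPS's actual code) of what such an adapter looks like when it uses those helpers, with the class name taken from the ZCML registration above:
from Products.GenericSetup.utils import XMLAdapterBase
from Products.GenericSetup.utils import ObjectManagerHelpers
from Products.GenericSetup.utils import PropertyManagerHelpers

class TreeToolXMLAdapter(XMLAdapterBase, ObjectManagerHelpers, PropertyManagerHelpers):
    """Sketch of an XML importer/exporter for the tree tool."""

    _LOGGER_ID = 'trees'

    def _exportNode(self):
        # Build a DOM node holding the tool's properties and subobjects.
        node = self._getObjectNode('object')
        node.appendChild(self._extractProperties())
        node.appendChild(self._extractObjects())
        return node

    def _importNode(self, node):
        # Purging only happens for Base profiles (see the Purge section above).
        if self.environ.shouldPurge():
            self._purgeProperties()
            self._purgeObjects()
        self._initProperties(node)
        self._initObjects(node)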
Of course all this can be changed for specific cases. Many older CMF tools are configured through things that are not standard properties, and for instance CPS needs to do recursion into more than simple ObjectManager subobjects. It is also possible to read and write files that are not XML files, CPS does this for the images included in portlet objects, where it writes a real image file.
Additional CPS feature: upgrades
CPS has extended the setup tool to provide a basic upgrade feature, which is related to the configuration of the site but cannot be expressed by the standard GenericSetup profiles.
An upgrade step is registered through ZCML with something like:
<cps:upgradeStep ... />
This declaration describes between which CPS versions the upgrade is needed, how to perform it, and how to check whether it has already been done.
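For illustration only, a full registration might look like the following; the attribute names follow the pattern just described, and the values are invented:
<cps:upgradeStep
    title="Upgrade tree caches"
    source="3.4.0"
    destination="3.4.1"
    handler=".upgrades.upgrade_tree_caches"
    checker=".upgrades.check_tree_caches"
    />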
The setup tool lists which steps have not yet been done, and provides a way to run them one by one or all at once. At any given time, a CPS site "knows" (through a site-global property last_upgraded_version) up to which version it has been upgraded.
Category: Product & Development | https://www.nuxeo.com/blog/genericsetup-for-cps-cmf/ | CC-MAIN-2017-22 | refinedweb | 1,481 | 51.99 |
Summary
Creates a raster object by unpacking the bits of the input pixel and mapping them to specified bits in the output pixel. The purpose of this function is to manipulate bits from a couple of inputs, such as the Landsat 8 quality band products.
Discussion
For more information about how this function works, see the Transpose Bits raster function.
The referenced raster dataset for the raster object is temporary. To make it permanent, you can call the raster object's save method.
Syntax
TransposeBits (raster, {input_bit_positions}, {output_bit_positions}, {constant_fill_value}, {fill_raster})
Code sample
Remaps the bits from the input raster to the Landsat 8 Water bit pattern.
import arcpy

transpose_raster = arcpy.sa.TransposeBits("Landsat_8.tif", [4, 5], [0, 1], 0, None)
# The referenced raster is temporary; call transpose_raster.save(<output path>) to make it permanent (see Discussion above).
| https://pro.arcgis.com/en/pro-app/latest/arcpy/spatial-analyst/transposebits.htm | CC-MAIN-2021-49 | refinedweb | 120 | 57.16
How long can the same publisher ID be used to update an app, by tedalde2, Jan 4, 2010 7:38 PM
I recently signed an update with ADL 1.5.3 using the original publisherID in the app descriptor and 1.5.3 in the namespace. It currently works fine, updating the original app as expected.
The original app was signed using ADL 1.5.1 (or 1.5.2... can't remember). Now let's say 6 months (the grace period) elapses and the client wants to update the app again. I'll need to sign the app WITHOUT a migration signature since the grace period for the original certificate has passed. Will users of the original app be able to update without un-installing if publisherID is specified in the namespace?
I think the answer is no, since that would mean that anyone could spoof anyone's app just by knowing the publisherID. However it's bad user experience to require the un/re-install, and also a pain to develop around (user settings migration from ELS, etc). Can't ADL have some kind of chainmigrate command, where in order to migrate a signature, one could line up certs back to the one that produced the original publisherID?
1. Re: How long can the same publisher ID be used to update an app, by Joe ... Ward, Jan 5, 2010 12:34 PM (in response to tedalde2)
Once someone has installed the app update signed with the new certificate + the migration certificate, any AIR package signed by the new certificate alone will update the existing installation.
Users who did not install an update signed with the migration signature will have to install such an update first (or use the uninstall/reinstall method). The update with migration signature will remain valid indefinitely unless you disabled the timestamp feature when you applied either signature.
2. Re: How long can the same publisher ID be used to update an app, by tedalde2, Jan 5, 2010 5:25 PM (in response to Joe ... Ward)
OK, that's clear enough, but it still means the user must have a mandatory update to the application to continue with application updates over the remaining certificate lifetime + grace period, which would almost always be at maximum 18 months, and at minimum, well... a day.
It means that developers who sign their apps close to certificate expiration time only have just over 6 months to produce another update. After that their users will be stuck uninstalling and re-installing an update, which is a lousy option. Or, perhaps as you imply, another option is that the user could install some "intermediate" app whose sole function is to carry the certificate forward, but that implies foresight by the developer to produce such an app before grace period expiration.
I just think we need better options than that. If we had the option to chain the app back to the original certificate that would be best.
3. Re: How long can the same publisher ID be used to update an app, by tzeng, Jan 8, 2010 10:54 AM (in response to tedalde2)
We did research on how developers renew their certificates and came to the current implementation as a good compromise.
4. Re: How long can the same publisher ID be used to update an app, by stefanyotov, Feb 24, 2010 11:11 AM (in response to tzeng)
HI Guys,
Thanks for holding this discussion.
I need your help with regards to signing, certificates renewal and publisher id.
Our team is working on a serious AIR app out there with a considerable install base. We've now renewed our certificate for the second time, but this time the compiler generates a different publisher ID. How could that possibly happen? Does anyone have any idea?
Please note that we've already done this before; we're completely aware that the publisher ID is generated based on details in the certificate. We also had to renew the previous certificate a couple of times, because different details were causing a different publisher ID. It is totally confirmed that all the details in the current certificate are exactly the same, all of them. But still, the publisher ID is different and that is a nightmare. I can share other more complicated explorations further on in the discussion, but this is the first question that needs to be answered.
Please reply.
Kind Regards
Stefan Yotov
5. Re: How long can the same publisher ID be used to update an app, by Joe ... Ward, Feb 24, 2010 12:53 PM (in response to stefanyotov)
Stefan,
I can't tell from here why the pub ID would be different. One possibility is that the CA changed the certificate used to sign your certificate.
However, because of issues like these, as of version 1.5.3 (released in Dec '09), AIR no longer bases the publisher ID on the certificate. For situations like yours where you have an existing app, you set the original publisher ID in the application descriptor, sign the updates with a migration signature, and updates will work properly. (New apps should not define a pub ID at all.)
The down side to this is that users who don't update for a long time might have to update to an intermediate version before updating to the latest version. This would happen, for example, if a user had version A installed, but the latest version C, was produced more than 6 months after the cert used to sign A expired. The user would have to first install version B and then C (or do an uninstall/reinstall). You could anticipate this problem with custom update logic, but that does take some forethought.
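For concreteness, the two pieces described above look roughly like this; the file names, application details and the ID value are placeholders, not values from this thread. In the AIR 1.5.3 application descriptor, the original ID is carried forward with the publisherID element:
<application xmlns="http://ns.adobe.com/air/application/1.5.3">
    ...
    <publisherID>ORIGINAL-PUBLISHER-ID</publisherID>
    ...
</application>
The update is then packaged with the new certificate and the migration signature is applied with the old one, along the lines of:
adt -package -storetype pkcs12 -keystore new_cert.p12 MyApp.air MyApp-app.xml MyApp.swf
adt -migrate -storetype pkcs12 -keystore old_cert.p12 MyApp.air MyApp_migrated.air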
6. Re: How long can the same publisher ID be used to update an app, by stefanyotov, Feb 25, 2010 12:07 PM (in response to Joe ... Ward)
Thanks Joe, really appreciate your answer and I'd kindly ask you follow this conversation with me.
We're already in the process of checking with Thawte if our certificate has been signed with a different certificate.
We also have tried the 1.5.3 migration approach as suggested: "you set the original publisher ID in the application descriptor, sign the updates with a migration signature, and updates will work properly". Unfortunately it doesn't work. Followed that exactly as described in the Certificate Renewals section of the 1.5.3 Release Notes document
Previous certificate has expired on Jan 18, 2010.
Compiling the build with the migration signature goes well.
Unfortunately, when you launch (double click) the .air file, the AIR runtime attempts a brand new installation, instead of updating the existing one (that is signed with the old certificate).
Please advise!
Thank you
Stefan
7. Re: How long can the same publisher ID be used to update an app, by Joe ... Ward, Feb 25, 2010 1:41 PM (in response to stefanyotov)
First, please file a bug against AIR at
Second, to rule out some obvious errors:
A. You have set the namespace to reflect AIR 1.5.3 in the application descriptor.
B. When you signed the update you did:
- Created the AIR package using the NEW certificate
- Used the ADT -migration command with the OLD certificate
C. The existing installation you are testing has the same publisher ID as specified in the new application descriptor.
D. Can you post the part of your app descriptor where you specify the publisher ID (or PM me)
E. You used the ADT program that came with the 1.5.3 SDK (not an earlier version)
F. The application IDs of the update and the installed version are the same.
8. Re: How long can the same publisher ID be used to update an app, by stefanyotov, Mar 1, 2010 5:05 AM (in response to Joe ... Ward)
Thanks Joe,
We've rulled out possible errors. Actually the test was repeated from the beginning. All your checkpoints are passed.
However, there is some new information that would hopefully help. When the new .air file is built, with the migration signature and everything, it actually doesn't get detected as a brand new application, but it fails to unpack:
Starting app install of. air
UI SWF load is complete
UI initialized
Unpackaging to /private/var/folders/DE/DEMJRl6mHJK+f9uulQA75k+++TM/TemporaryItems/FlashTmp0
unpackaging is complete
application is bound to side-by-side version 1.0
application is bound to this version of the runtime
app id xxxxxxxx
pub id xxxxxxxx
failed while unpackaging: [ErrorEvent type="error" bubbles=false cancelable=false eventPhase=2 text="Error #3003" errorID=3003]
starting cleanup of temporary files
application installer exiting
It's totally amazing that the exact same simulation works great when we use two active certificates. But with the old build (AIR 1.5.0 namespace), we get that error.
Any thoughts would be really appreciated.
Thanks again
Stefan
9. Re: How long can the same publisher ID be used to update an app, by Joe ... Ward, Mar 1, 2010 10:22 AM (in response to stefanyotov)
When you say, "But with the old build (AIR 1.5.0 namespace), it would get that error..." Do you mean the update was built using the 1.5.0 namespace (which wouldn't be expected to work), or that the old, installed version was built using the AIR 1.5.0 namespace?
10. Re: How long can the same publisher ID be used to update an app, by tzeng, Mar 1, 2010 5:34 PM (in response to stefanyotov)
Hi Stefan,
We would like to help you to figure out what the problem could be. I have sent you an email to ask for more info.
Could you take a look at my email?
Thanks,
-ted
11. Re: How long can the same publisher ID be used to update an app, by stefanyotov, Mar 15, 2010 7:49 AM (in response to Joe ... Ward)
Hi Joe,
I apologize for the delay.
Update is always built with the 1.5.3 namespace, it doesn't compile with 1.5.0.
Apparently, the unpacking error that we get is only present on a few computers, for no obvious reason. On most machines it works fine, as well as the migration signature works exactly as expected. That works for us and we are considering the problem resolved.
Thanks very much for your time, it is appreciated!
Stefan | https://forums.adobe.com/thread/549866 | CC-MAIN-2018-34 | refinedweb | 1,719 | 62.98 |
On Sat, Jun 12, 2010 at 12:45:29AM +0200, Stefano Sabatini wrote:
> On date Wednesday 2010-06-09 00:17:27 +0200, Stefano Sabatini encoded:
> > This is required, since all the frames in the filterchain are supposed
> > to use a time base of AV_TIME_BASE.
> > ---
> >  ffplay.c |    3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> >
> > diff --git a/ffplay.c b/ffplay.c
> > index 129cd28..dd3cba0 100644
> > --- a/ffplay.c
> > +++ b/ffplay.c
> > @@ -1678,7 +1678,7 @@ static int input_request_frame(AVFilterLink *link)
> >          }
> >          av_free_packet(&pkt);
> >
> > -        picref->pts = pts;
> > +        picref->pts = av_rescale_q(pkt.pts, priv->is->video_st->time_base, AV_TIME_BASE_Q);
> >          picref->pos = pkt.pos;
> >          picref->pixel_aspect = priv->is->video_st->codec->sample_aspect_ratio;
> >          avfilter_start_frame(link, picref);
> > @@ -1838,6 +1838,7 @@ static int video_thread(void *arg)
> >          SDL_Delay(10);
> >  #if CONFIG_AVFILTER
> >          ret = get_filtered_video_frame(filt_out, frame, &pts_int, &pos);
> > +        pts_int = av_rescale_q(pts_int, AV_TIME_BASE_Q, is->video_st->time_base);
> >  #else
> >          ret = get_video_frame(is, frame, &pts_int, &pkt);
> >  #endif
>
> Ping? (That's required by the setpts patch).

we need the timebase for muxing, a lazy filter could always set that to
AV_TIME_BASE_Q if it likes but we should support keeping track of it.
If you disagree then which timebase should the muxer store?
some containers dont like it if the timebase is 1000000 times smaller than
1/average fps

[...]
--
| http://ffmpeg.org/pipermail/ffmpeg-devel/2010-June/091665.html | CC-MAIN-2016-26 | refinedweb | 205 | 56.76
Tracking Visitors Who are Logged Into Social Media Accounts Using Google Analytics

In February of 2012, Tom Anthony from Distilled contributed an excellent post to SEOMoz on how to track visitors who are logged into a social media account on your website, using some custom JavaScript code and advanced segments in Google Analytics. I have seen it in action over the past 12 months and I wanted to share the step by step here on BHW.

While the configuration may seem novel, it can actually provide some great insight into the visitors that are coming to your site. For example, maybe you are thinking of providing interactive plugins on your website but aren't sure if it will be worth it, because users will need to be logged into a social network for the plugins to be effective. This setup can give you the answer. Maybe you want to know which network you should be more active on. Or perhaps you need to know if you are engaging your current network enough.

Note that for any of the following to work, you should have Google Analytics installed on your website.

With the following script you can track whether visitors are logged into Facebook, Twitter, Google Plus, or logged into a Google account in general. This snippet must be added to the <head> section of your website. If you have a template, it only needs to be added once. If not, it should go in the head section of all pages on your site.

Code:
<script type="text/javascript">
function record_login_status(slot, network, status) {
    // This code is for the async version of Google Analytics; if you're still on
    // the old code then you need to adjust it accordingly.
    if (status) {
        _gaq.push(["_setCustomVar", slot, network + "_State", "LoggedIn", 1]);
        // You may prefer to record this data with _trackEvent
        // _gaq.push(['_trackEvent', network + '_State', 'status', "LoggedIn"]);
    } else {
        _gaq.push(["_setCustomVar", slot, network + "_State", "NotLoggedIn", 1]);
        // You may prefer to record this data with _trackEvent
        //_gaq.push(['_trackEvent', network + '_State', 'status', "NotLoggedIn"]);
    }
}
</script>

For Google Plus, Twitter and Google, place the following code snippets before the closing </body> tag on your website. If you have a template, you should only have to place this code in the template file. If not, you will have to place it on every page where you want to track whether users are logged in or out of these accounts.

Code:
<!-- Each src must point to an image on the network that is only served when the
     visitor is logged in (the original post's URLs are not included in this copy).
     Slot 1 = Google, slot 2 = Google+, slot 3 = Twitter, matching the slot list below. -->
<img style="display:none;" src=""
     onload="record_login_status(1, 'Google', true)"
     onerror="record_login_status(1, 'Google', false)" />
<img style="display:none;" src=""
     onload="record_login_status(2, 'GooglePlus', true)"
     onerror="record_login_status(2, 'GooglePlus', false)" />
<img style="display:none;" src=""
     onload="record_login_status(3, 'Twitter', true)"
     onerror="record_login_status(3, 'Twitter', false)" />

For the Facebook login status tracking, the procedure is a bit trickier. You have to set up a Facebook application in order to grab an app ID. This is essential for the API code to work on your domain. The app just has to be the basic setup and nothing more. Follow these steps to set up an app:

1. Log in to Facebook
2. Visit the Facebook developers' apps page
3. Click the create new app button in the top right corner of the page
4. A dialogue box will appear asking for the app name, app namespace, and if you require web hosting.
5. Give your app a name (it doesn't matter what it is), the namespace is optional, and if you need hosting from Facebook, click the box. If you plan to host your app on your existing domain, leave the box blank.
6. If you choose to go with hosting through Facebook, it is provided by Heroku and you can learn more about it there.
7. Click continue, fill out the captcha, click continue again
8. In the final screen, your app has been created
9. Grab the long number next to app ID at the top of the screen next to the default thumbnail image. You will paste this into the code snippet below.

Here is the snippet:

Code:
<script>
window.fbAsyncInit = function() {
    FB.init({appId: 'YOUR-APP-ID-HERE', status: true, cookie: true, xfbml: true});
    FB.getLoginStatus(function(response) {
        if (response.status != "unknown") {
            record_login_status(4, "Facebook", true);
        } else {
            record_login_status(4, "Facebook", false);
        }
    });
};
// Load the SDK Asynchronously
(function(d){
    var js, id = 'facebook-jssdk'; if (d.getElementById(id)) {return;}
    js = d.createElement('script'); js.id = id; js.async = true;
    js.src = "//connect.facebook.net/en_US/all.js";
    d.getElementsByTagName('head')[0].appendChild(js);
}(document));
</script>

Alright, you have the code installed; now it's time to configure the Analytics account. You don't have to set up advanced segments to view the data from this code. You can see data under Audience -> Custom -> Custom Variables. Your variables will be laid out as keys under the graph in the report and you can select among them. Visitors are identified by the variable in the code (i.e. Google_State, Twitter_State, Facebook_State, etc.)

By setting up advanced segments, not only is the data easier to look at, you can also compare it to other metrics on your site. Follow these steps to set up custom segments:

1. Log in to your Google Analytics account
2. Click Advanced Segments in the grey bar at the top of any report
3. Click "New Custom Segment" in the bottom right corner of the new drop down that appears
4. Enter a name for your segment and choose which facets to base it on. For this, use the Custom Variable slots that the JavaScript tracking code uses. Analytics allows 5 Custom Variable slots, and the code above uses 4 of these (1 = Google, 2 = Google+, 3 = Twitter, and 4 = Facebook). Name your segments appropriately, for example Twitter Users, Facebook Users, Google Plus Users, etc. This will make them easy to identify in reports.
5. The menus in the segment should be set to "include", "Custom Variable" (NOT Custom Key), "exactly matching", and "LoggedIn".
6. Save the segment and that's it. (Repeat for each network you are tracking.)

To view the data just click on advanced segments again, and you should see check boxes with the names of your new segments in the list of available segments. | https://www.blackhatworld.com/seo/track-visitors-who-are-logged-into-social-media-accounts-with-analytics.540029/ | CC-MAIN-2017-04 | refinedweb | 1,170 | 72.66
The Caché Perl Binding
The Caché Perl binding provides a simple, direct way to manipulate Caché objects from within a Perl application. It allows Perl programs to establish a connection to a database on Caché, create and open objects in the database, manipulate object properties, save objects, run methods on objects, and run queries. All Caché datatypes are supported.
The Perl binding offers complete support for object database persistence, including concurrency and transaction control. In addition, there is a sophisticated data caching scheme to minimize network traffic when the Caché server and the Perl applications are located on separate machines.
This document assumes a prior understanding of Perl and the standard Perl modules. Caché does not include a Perl interpreter or development environment.
Perl Binding Architecture
The Caché Perl binding gives Perl applications a way to interoperate with objects contained within a Caché server. The Perl binding consists of the following components:
The Intersys::PERLBIND module — a Perl C extension that provides your Perl application with transparent connectivity to the objects stored in the Caché database.
The Caché Object Server — a high-performance server process that manages communication between Perl clients and a Caché database server. It communicates using standard networking protocols (TCP/IP), and can run on any platform supported by Caché. The Caché Object Server is used by all Caché language bindings, including Perl, Python, C++, Java, JDBC, and ODBC.
The basic mechanism works as follows:
You define one or more classes within Caché. These classes can represent persistent objects stored within the Caché database or transient objects that run within a Caché server.
At runtime, your Perl application connects to a Caché server. It can then access instances of objects within the Caché server. Caché automatically manages all communications as well as client-side data caching. The runtime architecture consists of the following:
A Caché database server (or servers).
The Perl interpreter (see Perl Client Requirements).
A Perl application. At runtime, the Perl application connects to Caché using either an object connection interface or a standard ODBC interface. All communications between the Perl application and the Caché server use the TCP/IP protocol.
Quick Start
Here are examples of a few basic functions that make up the core of the Perl binding:
Create a connection and get a database
$url = "localhost[1972]:Samples" $conn = Intersys::PERLBIND::Connection->new($url,"_SYSTEM","SYS",0); $database = Intersys::PERLBIND::Database->new($conn);
$database is your logical connection to the namespace specified in $url.
Open an existing object
$person = $database->openid("Sample.Person","1", -1, -1);
$person is your logical connection to a Sample.Person object on the Caché server.
Create a new object
$person = $database->create_new("Sample.Person",undef);
Set or get a property
$person->set("Name","Adler, Mortimer"); $name = $person->get("Name");
Run a method
$answer = $person->Addition(17,20);
Save an object
$person->run_obj_method("%Save")
Get the id of a saved object
$id = $person->run_obj_method("%Id")
Run a query
$query = $database->alloc_query();
$query->prepare("SELECT * FROM SAMPLE.PERSON WHERE ID=?", $sqlcode);
$query->set_par(1, 2);
$query->execute($sqlcode);
while (@data_row = $query->fetch($sqlcode)) {
    $colnum = 1;
    foreach $col (@data_row) {
        $col_name = $query->col_name($colnum);
        print "column name = $col_name, data=$col\n";
        $colnum++;
    }
}
Installation and Configuration
The standard Caché installation places all files required for Caché Perl binding in <cachesys>/dev/perl. (For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide). You should be able to run any of the Perl sample programs after performing the following installation procedures.
Perl Client Requirements
Caché provides client-side Perl support through the Intersys::PERLBIND module, which implements the connection and caching mechanisms required to communicate with a Caché server.
This module requires the following environment:
Perl version 5.10 — InterSystems supports only the ActiveState distribution.
A C++ compiler — On Windows, use Visual Studio 2008 or 2010 (for more information, see Windows Compilers for Perl Modules on the ActiveState site). On UNIX®, use GCC.
Your PATH must include the <cachesys>\bin directory. (For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide).
Your environment variables must be set to support C compilation and linking (for example, on Windows call VSVARS32.BAT). Otherwise, linker errors will be reported on the make step.
UNIX® Installation
Make sure <cachesys>/bin is on your PATH and in your LD_LIBRARY_PATH (or DYLD_LIBRARY_PATH on OSX). For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide.
For example:
export PATH=/usr/cachesys/bin:$PATH export LD_LIBRARY_PATH=/usr/cachesys/bin:$LD_LIBRARY_PATH
Run Makefile.PL (located in <cachesys>/dev/perl). You can supply the location of <cachesys>as an argument. For example:
perl Makefile.PL /usr/cachesys
If you run it without an argument, it will prompt you for the location of <cachesys>.
After making sure that <cachesys>/bin is on your path, run:
make
If make runs with no errors, test it with:
make test
If make test is successful, run:
make install
Windows Installation
Make sure <cachesys>\bin is on your PATH. (For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide).
Run Makefile.PL (located in <cachesys>\dev\perl). You can supply the location of <cachesys> as an argument. For example:
perl Makefile.PL C:\Intersystems\Cache
If you run it without an argument, it will prompt you for the location of <cachesys>.
After making sure that <cachesys>\bin is on your path, run:
nmake
If nmake runs with no errors, test it with:
nmake test
If nmake test is successful, run:
nmake install
Caché Server Configuration
Very little configuration is required to use a Perl client with a Caché server. The Perl sample programs provided with Caché should work with no change following a default Caché installation. This section describes the server settings that are relevant to Perl and how to change them.
Every Perl client that wishes to connect to a Caché server needs the following information:
A URL that provides the server IP address, port number, and Caché namespace.
A username and password.
By default, the Perl sample programs use the following connection information:
connection string: "localhost[1972]:Samples"
username: "_SYSTEM"
password: "SYS"
Check the following points if you have any problems:
Make sure that the Caché server is installed and running.
Make sure that you know the IP address of the machine on which the Caché server is running. The Perl sample programs use "localhost". If you want a sample program to default to a different system you will need to change the connection string in the code.
Make sure that you know the TCP/IP port number on which the Caché server is listening. The Perl sample programs use "1972". If you want a sample program to default to a different port, you will need change the number in the sample code.
Make sure that you have a valid username and password to use to establish a connection. (You can manage usernames and passwords using the Management Portal). The Perl sample programs use the administrator username "_SYSTEM" and the default password "SYS" or "sys". Typically, you will change the default password after installing the server. If you want a sample program to default to a different username and password, you will need to change the sample code.
Make sure that your connection URL includes a valid Caché namespace. This should be the namespace containing the classes and data your program uses. The Perl samples connect to the SAMPLES namespace, which is pre-installed with Caché.
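Putting those defaults together, a minimal connection check might look like this (the values are the documented defaults; adjust them for your server):
use Intersys::PERLBIND;

my $conn     = Intersys::PERLBIND::Connection->new("localhost[1972]:Samples", "_SYSTEM", "SYS", 0);
my $database = Intersys::PERLBIND::Database->new($conn);
print "Connected to the Samples namespace\n" if defined $database;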
Sample Programs
Caché comes with a set of sample programs that demonstrate the use of the Caché Perl binding. These samples are located in the <cachesys>/dev/perl/samples/ subdirectory of the Caché installation. (For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide). They are named, numbered, and implemented to correspond to the Java binding samples.
The sample programs include:
CPTest2.pl — Get and set properties of an instance of Sample.Person.
CPTest3.pl — Get properties of embedded object Sample.Person.Home.
CPTest4.pl — Update embedded object Sample.Person.Home.
CPTest5.pl — Process datatype collections.
CPTest6.pl — Process the result set of a ByName query.
CPTest7.pl — Process the result set of a dynamic SQL query.
CPTest8.pl — Process employee subclass and company/employee relationship.
All of these applications use classes from the Sample package in the SAMPLES namespace (accessible in Atelier). If you have not used the Sample package before, you should open it in Atelier and make sure it is properly compiled.
The sample programs are controlled by various switches that can be entered as arguments to the program on the command line. The sample program supplies a default value if you do not enter an argument.
For example, CPTest2.pl:
perl CPTest2.pl -user _MYUSERNAME
The CPTest7.pl sample accepts a -query argument that is passed to an SQL query:
perl CPTest7.pl -query A
This query will list all Sample.Person records containing names that start with the letter A. | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GBPL_INTRO | CC-MAIN-2021-43 | refinedweb | 1,515 | 56.66 |
When a service and a client are both .NET applications and reside on different machines, the preconfigured netTcpBinding is a natural choice for publishing a WCF service to the client. netTcpBinding is best suited when both the client and the service are .NET and communicate over an intranet or the Internet. Because both ends are .NET, the binding is optimized for performance and carries no interoperability overhead: it uses the TCP protocol and binary encoding. A few of netTcpBinding's default property values are summarized in a diagram in the original article. Now let us create a WCF service using netTcpBinding and consume it from a .NET console client; a sketch of the service side follows below.
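The service side is sketched here; the contract is inferred from the client code that follows, while names such as Service1, IService1, the namespace and the port are assumptions rather than the article's exact values.
[ServiceContract]
public interface IService1
{
    [OperationContract]
    string GetDataUsingnetTcpBinding(int value);
}

public class Service1 : IService1
{
    public string GetDataUsingnetTcpBinding(int value)
    {
        return string.Format("You entered: {0}", value);
    }
}
A matching netTcpBinding endpoint in the service's configuration file could look like:
<system.serviceModel>
  <services>
    <service name="WcfNetTcpSample.Service1">
      <endpoint address="net.tcp://localhost:8523/Service1"
                binding="netTcpBinding"
                contract="WcfNetTcpSample.IService1" />
    </service>
  </services>
</system.serviceModel>
The console client below then consumes this endpoint through a generated service reference (ServiceReference1).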
using System;

namespace ConsoleApplication10
{
    class Program
    {
        static void Main(string[] args)
        {
            // Proxy generated by adding a service reference to the netTcpBinding endpoint.
            ServiceReference1.Service1Client proxy = new ServiceReference1.Service1Client();
            string result = proxy.GetDataUsingnetTcpBinding(99999);
            Console.WriteLine(result);
            Console.ReadKey(true);
        }
    }
}
On running, the client prints the string returned by the service. In this article we saw how we can use netTcpBinding for communication between two .NET applications residing on different machines.
| http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/cross-machine-communication-between-net-application-using-wcf/ | CC-MAIN-2017-09 | refinedweb | 177 | 52.97