Pragmatic Studio’s Advanced Ruby training is coming up next month
(early registration ends soon!), and folks often ask what to expect in
our advanced classes. You might also be wondering how Rails developers
will benefit from attending this public class. So I asked Dave T.
and Chad F. (the instructors) to share a few of their thoughts on
the Advanced Ruby class in this short interview:
We also recently announced the next Rails course to be held in Long
Beach, CA on December 1-3. If you’re just getting into Rails, or
you’ve started building a Rails application but need help putting all
the pieces together, this course is for you. It’s also taught by Dave
and Chad.
For more information and registration details, check out
Thanks!
Mike
In this article, we will discuss how to call an Invoke Process activity by customizing the process template in Visual Studio 2010. This article follows on from the earlier Build Customization articles: Custom Activity with code, Custom Activity with XAML, and Build Customization by Changing the Process Template. I would suggest reading those articles first to understand the flow. This article uses a virtual machine, which can be downloaded over here.

Step 1. Log in as Abu Obeida Bakhach (Dev) with the password P2ssw0rd.

Step 2. Create a workflow activity library named CustomActivity. Start Visual Studio 2010 from Microsoft Visual Studio 2010. Select File > New Project. Ensure that the language selected is C#.
Step 3. Select the Activity Library project template, enter the name CustomActivity and click the OK button. Ensure that the Add to Source Control option is checked, and make sure to specify the location in Iteration 2 in the Development branch.

Step 4. Add a Code Activity to the project.

Step 5. Right-click on the project in Solution Explorer and select Properties. Change the target framework from the Client Profile to the full framework, and click the Yes button.

Step 6. Add references to the following assemblies:
Step 7. Add the attribute [BuildActivity(HostEnvironmentOption.All)] to the code activity class, after including a reference to the following namespace:

using Microsoft.TeamFoundation.Build.Client;

Step 8. Create a new build definition with the name "Tailspin Toys Invoke Process".

Step 9. Keep the default trigger as Manual and specify the workspace as follows. If the custom activity does not appear, you will have to check it in.

Step 10. Specify the output folder in the Build Defaults tab as \\WIN-GS9GMUJITS8\drops as shown below.

Step 11. Ensure that the Items to Build dialog box contains both solutions, the custom activity and Tailspin Toys, as shown below.

Step 12. Remove any automated tests to be executed on build. In the Process tab, select the automated test and click on the Remove button. Click OK.

Step 13. Create a copy of the default template and name it "InvokeProcessBuildProcessTemplate.xaml". Make sure that the Items to Build list has both the selected custom activity and the Tailspin Toys solution. Also make sure that no automated tests are to be executed as a BVT. Save the build definition.

Step 14. Create a branch for InvokeProcessBuildProcessTemplate.xaml using Source Control Explorer, as shown below. Ensure the target folder is pointing to CustomActivity.

Step 15. Let us now create a console application which will be invoked using the Invoke Process activity. Right-click on the solution in Solution Explorer and select Add New Project. Select the project type Console Application and provide the name ConsoleAppCreateFile.

Step 16. The code will be similar to the following:

static void Main(string[] args)
{
    using (System.IO.StreamWriter sw = new System.IO.StreamWriter("c:\\Test.txt"))
    {
        sw.WriteLine("Doing Code Analysis with Special tool");
    }
}

Build the console application.

Step 17. Right-click on the project name (CustomActivity) in Solution Explorer, select Add New Item and choose an item of type Activity. Specify the name InvokeProcess as shown.

Step 18. Drag and drop the InvokeProcess activity from the Toolbox onto the designer.
Drag and drop WriteBuildMessage and WriteBuildWarning as shown below, then configure the properties for InvokeProcess, WriteBuildMessage and WriteBuildWarning.

Step 19. Specify a constant value for the Message property of WriteBuildMessage and WriteBuildWarning. For InvokeProcess, specify the file name as "c:\users\abuobe\documents\visual studio 2010\Projects\CustomActivity\ConsoleAppCreateFile\bin\Debug\ConsoleAppCreateFile.exe".
This is the path for the console application. Save the XAML file. For the custom activity, add references to System.Activities.Presentation, PresentationFramework and WindowsBase. Close the InvokeProcess.xaml file and build the custom activity project. In case you get an error for the .dll file, right-click on the file and select Check Out for Edit.

Step 20. Right-click on CustomActivity, select Add Existing Item and add the branched XAML file. Set the Build Action to None for this file.

Step 21. Open the file, collapse the sequence and add the InvokeProcess activity as follows. Save and close the file.

Step 22. Right-click on the file in Solution Explorer and select View Code. Change local to ssgs. Also change local to ssgs where InvokeProcess appears.

Step 23. Close and save the file. Check in the branch, merge to its original and check in.

Step 24. Ensure that the CustomAssembly.dll is also checked in. Right-click on Builds in Team Explorer and specify the custom assembly path as follows.

Step 25. Right-click on the build and select Queue New Build. After successful execution of the build, the txt file gets created on the C drive and the log shows the execution of the invoked process as follows.

That's it! These were the steps to call an Invoke Process activity by customizing the process template in Visual Studio 2010.
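As a reference point, the InvokeProcess wiring described in Steps 18-19 corresponds to workflow XAML roughly like the following sketch. This is illustrative only: the mtbwa and x prefixes come from the namespace declarations at the top of the default template, the handler layout reflects the usual stdout/stderr drop zones of the InvokeProcess designer, and the message strings are placeholders for the constant values you set in Step 19.

```xml
<!-- Illustrative sketch, not the literal template content.
     mtbwa maps to Microsoft.TeamFoundation.Build.Workflow.Activities. -->
<mtbwa:InvokeProcess DisplayName="Run ConsoleAppCreateFile"
    FileName="c:\users\abuobe\documents\visual studio 2010\Projects\CustomActivity\ConsoleAppCreateFile\bin\Debug\ConsoleAppCreateFile.exe">
  <mtbwa:InvokeProcess.OutputDataReceived>
    <ActivityAction x:TypeArguments="x:String">
      <ActivityAction.Argument>
        <DelegateInArgument x:TypeArguments="x:String" Name="stdOutput" />
      </ActivityAction.Argument>
      <!-- WriteBuildMessage dropped into the stdout handler (Step 18) -->
      <mtbwa:WriteBuildMessage Message="Invoke Process executed" />
    </ActivityAction>
  </mtbwa:InvokeProcess.OutputDataReceived>
  <mtbwa:InvokeProcess.ErrorDataReceived>
    <ActivityAction x:TypeArguments="x:String">
      <ActivityAction.Argument>
        <DelegateInArgument x:TypeArguments="x:String" Name="errOutput" />
      </ActivityAction.Argument>
      <!-- WriteBuildWarning dropped into the stderr handler (Step 18) -->
      <mtbwa:WriteBuildWarning Message="Invoke Process reported an error" />
    </ActivityAction>
  </mtbwa:InvokeProcess.ErrorDataReceived>
</mtbwa:InvokeProcess>
```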
[TOPBLOCKER] file corruption on touch images in rw portions of the filesystem
Bug Description
Symptoms are that cache files in /var/cache/apparmor and profiles in /var/lib/
Workaround: remove the affected profile and then run 'sudo aa-clickhook'. This obviously is not viable on an end-user device.
The investigation is ongoing and this may not be a problem with the kernel at all, so this bug may be retargeted to another project.
The security team and the kernel team have discussed this a lot and Colin King is currently looking at this. This bug is just so it can be tracked. Here is an excerpt from my latest email to Colin:
"I believe I have conclusively ruled out apparmor_parser and aa-clickhook by creating a new 'home/bug/
http://
Specifically, home/bug/
1. wait for unity8 to start (this ensures the apparmor upstart job is finished)
2. restore apparmor_parser and aa-clickhook, if needed
3. if /home/bug/
/var/
and aa-clickhook were /bin/true during boot so they could not have changed
/var/
4. verify the profiles, exit with error if they do not
5. alternately upgrade/downgrade the packages
6. verify the profiles, exit with error if they do not
7. copy the known good profiles in the previous step to /home/bug/
8. have apparmor_parser and aa-clickhook point to /bin/true
9. reboot
10. go to step 1
In the paste you'll notice that in step 6 the profiles were successfully created by the installation of the packages, then verified, then copied aside, then apparmor_parser and aa-clickhook diverted, then rebooted, only to have the profiles in /var/lib/
IMPORTANT: you will want to update the reproducer and re-follow all of these steps (i.e., I updated the scripts, the debs, the sudoers file, etc.):
$ wget http://
$ tar -zxvf ./aa-corruption
...
$ adb push ./aa-corruption
$ adb shell
phablet@
phablet@
phablet@
phablet@
/etc/sudoers.d/
phablet@
phablet@
phablet@
$ cd ./aa-corruption
$ ./test-from-host.sh
...
The old script is still in place. Simply adjust ./test-from-host.sh to have:
testscript=
#testscript=
The kernel team has verified the above reproducer and symptoms.
Related bugs:
* bug 1371771
* bug 1371765
* bug 1377338
Made some progress today.
On the phone, I am seeing:
/var/lib/
I copied the entire partition /dev/mmcblk0p23 over adb back to my laptop, mounted it and then mounted /mnt/ubuntu.img and the same file is sane and not corrupted. So the underlying data is OK.
corrupted data contains /usr/share/
Cannot find any symlinks that would relate to this.
Next step, I'm adding debug into the symlink name to see if this appears in the corrupt data to verify it is a symlink.
The corruption to /var/lib/
After several reboots, the data still appears corrupted on the phone, but copying the underlying raw device /dev/mmcblk0p23 to my laptop and loop mounting it and then loop mounting ubuntu.img shows an uncorrupted var/lib/
Ruled out the bind mount of /var/lib/
Sanity checked the raw data from /dev/mmcblk0p23:
1. copied raw data off the phone to may laptop
2. using sshfs, mounted the directory containing the raw data snapshot back on the phone
3. loop mounted it
4. loop mounted ubuntu.img from this
5. /ubuntu/
On the phone:
debugfs /userdata/
cat /var/lib/
# vim:syntax=apparmor
#include <tunables/global>
# Define vars with unconfined since autopilot rules may reference them
# Specified profile variables
@{APP_APPNAME}
@{APP_ID_
@{APP_PKGNAME_
@{APP_PKGNAME}
@{APP_VERSION}
@{CLICK_
..
etc
so the underlying file system is verified as sane
I've searched the entire block device for the string /usr/share/
So the underlying file system is sane. The in-memory view of the file seems borked.
After a lot of deep digging into the bind mount, loop driver, and buffer cache and tracking the corrupt pages back down the layers of the stack we've sanity checked this down to the image. The smoking gun was the kernel message:
Nov 6 12:15:16 ubuntu-phablet kernel: [ 3.940485] do_mount: /dev/loop0 -> /root [<null>]
Nov 6 12:15:16 ubuntu-phablet kernel: [ 3.941095] EXT2-fs (loop0): warning: mounting unchecked fs, running e2fsck is recommended
Nov 6 12:15:16 ubuntu-phablet kernel: [ 3.941431] do_mount return -> 0
(apologies for my extra debug).
So it appears that /dev/loop0 is being mounted and it is corrupted. I ran fsck on /userdata/
fsck /userdata/
fsck from util-linux 2.25
e2fsck 1.42.10 (18-May-2014)
/userdata/
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Unattached inode 3225
Connect to /lost+found<y>? yes
Inode 3225 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 3709
Connect to /lost+found<y>? yes
Inode 3709 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 3808
Connect to /lost+found<y>? yes
Inode 3808 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 4427
Connect to /lost+found<y>? yes
Inode 4427 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 4485
Connect to /lost+found<y>? yes
Inode 4485 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 5889
Connect to /lost+found<y>? yes
Inode 5889 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 5943
Connect to /lost+found<y>? yes
Inode 5943 ref count is 2, should be 1. Fix<y>? yes
Unattached inode 7853
Connect to /lost+found<y>? yes
Inode 7853 ref count is 2, should be 1. Fix<y>? yes
Pass 5: Checking group summary information
Block bitmap differences: -70903 -71144 -71201 -(71674--71675) -71727 -71852 -72689 -72757 -(74519--74520) -74869 -74961 +(92082--92087) +(92089--92092) -92102 +92104 +92114 +92119 +(92121--92131)
Fix<y>? yes
Free blocks count wrong for group #13 (8813, counted=8820).
Fix<y>? yes
Free blocks count wrong (133222, counted=133229).
Fix<y>? yes
Inode bitmap differences: +(19989--20010) +(20013--20014) -(20545--20549) -(20551--20569)
Fix<y>? yes
Free inodes count wrong for group #13 (3225, counted=3232).
Fix<y>? yes
Directories count wrong for group #13 (761, counted=760).
Fix<y>? yes
Free inodes count wrong (81946, counted=81953).
Fix<y>? yes
/userdata/
/userdata/
So, there are two big issues outstanding, most probably in the user space shutdown and initrd stages:
1. The file system is not being flushed and unmounted properly.
2. The file system is not being fsck'd before mounting - this is a cardinal sin IMHO
The end result is mounting a corrupt file system that is causing the garbage in the apparmor files.
A suggested fix for this would be:
initrd:
- At every boot, do a minimal fsck check for problems, if problem, write "fsck <partition>" to /cache/
recovery:
- Add support for the fsck stanza, when getting it, run fsck in fix mode, if possible, supporting the usual yes/no questions using pixelflinger and button input.
rootfs:
- If /cache/
system-
- Add a fsck run to the standard ubuntu_command prior to flashing
Adding an initramfs-
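As a sketch of the initrd piece of the suggestion above (names and paths here are illustrative, not the actual implementation, and the real flag-file path under /cache/ is truncated in the report): run a read-only fsck pass at boot and, on error, record a request for the recovery environment instead of mounting a dirty filesystem.

```shell
#!/bin/sh
# Illustrative sketch of the "minimal fsck check, write flag on error"
# step; partition and flag file are placeholders.
check_and_flag() {
    partition="$1"
    flagfile="$2"
    # -n = open read-only and only report problems, never fix them
    if e2fsck -n "$partition" >/dev/null 2>&1; then
        return 0                           # clean: boot normally
    fi
    echo "fsck $partition" >> "$flagfile"  # ask recovery to repair it
    return 1
}
```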
Huge thanks to Colin (and apw) for this find.
I want to state that this is a very real problem and not theoretical. I've seen it many times, it is triggerable with enough reboots, and it has been seen on krillin by non-developers (ie, just through normal reboots and system-image updates). stgraber outlined the start of a fix here: http://
IMO, we must solve #2 from Colin's comment #9 (perform an fsck using something like stgraber's approach) before the golden image. If we have that, we can solve #1 in OTA (though in an ideal world that would be fixed prior to golden image since we don't want people dropping into recovery mode if we can at all help it).
If you want a separate fsck command in ubuntu_command, please file a separate bug on upstream system-image project. Other than that, it doesn't look like this particular bug affects system-image client.
FYI, unconfirmed report via ubuntu-phone@: "Glad it isn't only me having this issue -- I have come across this issue twice and it's been files in /home/ getting corrupted."
See bug 1387214 (logrotate failing, resulting in high logfile growth) for another example of FS corruption that is seemingly caused by this issue.
I posted the comment linked in #15.
The partition I have had files corrupted on does have an e2fsck run on it before mounting. It's a LUKS-encrypted partition; I added the e2fsck check after it became corrupted, and I have also had files corrupted since.
I hadn't done anything to ensure that this partition was unmounted when the device is shutdown but I have now amended /etc/init.
I've had this issue multiple times now. For me it has always been like this: the content of a file gets overwritten and therefore the system fails to do something.
Last time my system settings broke completely. When doing a "cat ~/.config/
"/opt/click.
And after a reboot it now shows this:
"/opt/click.
Deleting "~/.config/
Some other time system updates didn't work and doing a "cat /cache/
"/opt/click.
your dconf issues are caused by illegal use of dconf by an app (apps should not be able to use dconf according to the security policy as i understand it, to avoid exactly this), please file a fresh bug for this.
@ogra: I don't think it's a dconf issue. Why would the same happen to /cache/
Doing a short grep, I also found that line in a lot of files, e.g. in the following one: "~/.cache/
The file started with "/opt/click.
"error: b/opt/click.
Having seen that error message, I checked the file it mentioned. Guess what its content is:
"/opt/click.
There actually seem to be a lot of corrupt files like these ones and it's the second Ubuntu (Touch) installation which gives me those results.
However, if you still think, it's a different bug, I will be happy to hide all my comments here and file a new bug. ;)
As far as I know, there is no design spec for the startup process, and no designer assigned to it. In the meantime...
An automatic filesystem check being interactive makes less and less sense over time. (As the sheer number of files on a device increases, and the proportion of apps that are document-based decreases, you're less and less likely to recognize a file by its pathname.) And it makes *much* less sense on a phone, where you barely see the filesystem at all, so the probability that you can make an informed decision about fixing an individual file is pretty much zero.
So, in the "minimal fsck reveals an error" case, I suggest just triggering a fsck -y. Don't make it interactive. Ideally, add text something like "Repairing…" to the startup screen, or (during string freeze) vary the visual design of the startup screen in *some* tasteful way, to minimize the proportion of people who will think that the phone has got stuck while starting up and try to fix it by holding down the power button.
For the "immediately pull the latest full image and reboot for flashing" case, you might find useful the design for a failed system update. You'd need to change the text a bit. <https:/
adding an android-tools task, since "adb reboot" forces a hard reset without unmounting the filesystems ... adding something along:
<ogra_> root@ubuntu-
<ogra_> root@ubuntu-
<ogra_> touch: cannot touch ‘/userdata/foo’: Read-only file system
to core/adbd/
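The emergency-remount idea described above boils down to writing single sysrq command characters into /proc/sysrq-trigger before rebooting. Below is a minimal sketch of that logic, not the actual adbd patch: the function name and path parameter are illustrative, and each character goes out in its own write(2) because the trigger file acts on the first byte of each write.

```c
#include <stdio.h>

/* Illustrative sketch: ask the kernel to sync ('s') and then remount
 * everything read-only ('u'), the same effect as
 * "echo s > /proc/sysrq-trigger; echo u > /proc/sysrq-trigger".
 * The path is a parameter so the logic can be exercised against an
 * ordinary file instead of the real trigger. */
static int emergency_sync_and_remount_ro(const char *sysrq_path)
{
    FILE *f = fopen(sysrq_path, "w");
    if (f == NULL)
        return -1;   /* e.g. EACCES once adbd has dropped privileges */
    fputc('s', f);
    fflush(f);       /* first write(2): emergency sync */
    fputc('u', f);
    fflush(f);       /* second write(2): emergency remount read-only */
    fclose(f);
    return 0;
}
```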
This bug was fixed in the package android - 20141117-
---------------
android (20141117-
* New upstream release:
- Checking partitions and filesystems when updating (LP: #1387214)
* Dropping changing_
fixing-
-- Ricardo Salveti de Araujo <email address hidden> Mon, 17 Nov 2014 11:21:45 -0200
So this is not a normal fs corruption issue. I was able to run the aa-corruption test quite a few times with the extra e2fsck logic we added, and was easily able to get the corrupted files on krillin, but what is even more interesting is that the files are not actually corrupted as far as e2fsck is concerned (it does not point out a single fs error, even though the files are incorrect).
There's clearly a pattern with click paths under /opt/click.
So in the end it looks like a quite bizarre error under rw filesystems (ext4) that are bind-mounted. After migrating /opt/click.
Will let it running for a few more iterations and report back, but one bad thing is that running e2fsck is really not going to help much here.
Just migrated /opt/click.
It fails again at the moment I move it back under /data (bind-mounted at /opt/click.
Used links instead of bind-mounts (for /userdata) and was still able to reproduce the issue. So not necessarily related with bind-mounts itself.
It works fine if I move the writable paths under /, and it also works fine after creating another image file (ext4) that gets stored under /userdata, to be consumed by the writable paths (using bind-mounts and everything).
But if I use bind mounts from /userdata/
The only peculiarity with /userdata is that it's shared between both containers. Ubuntu uses /userdata/
I'm also unable to reproduce this issue when mounting /userdata (and also it's bind mounts) with data=journal instead of data=ordered. This might be a consequence of it being slower, not yet sure.
Decided to try that to see if I'd get any error when running e2fsck, but still nothing. Only side effect is that I can't really reproduce the problem.
Running for almost an hour now, will let it for a few more.
after researching the adbd part for two days it seems that adbd already tries to call "echo u > /proc/sysrq-
/proc/sysrq-trigger is owned by root:system and writable for both; the only solution i see (beyond making /proc/sysrq-trigger owned by phablet or its group, which would rip a giant security hole) is to make adbd start with "setgid system" and have it drop this group membership right before any adb shell call (so that the logged-in phablet user is not a member of system by default)
i'm trying to implement this but am constantly running into smaller issues.
Colin King is investigating the issue on the filesystem level. He got an easier way to reproduce the problem, and is currently investigating possible patch sets from upstream.
This bug was fixed in the package android - 20141117-
---------------
android (20141117-
* No change rebuild to pick up initramfs changes.
-- Ricardo Salveti de Araujo <email address hidden> Tue, 18 Nov 2014 22:21:39 -0200
This bug was fixed in the package initramfs-
---------------
initramfs-
* scripts/touch: mounting userdata with data=journal as a workaround
for bug LP: #1387214
-- Ricardo Salveti de Araujo <email address hidden> Tue, 18 Nov 2014 15:33:16 -0200
We've been exercising the rtm images that include the latest workaround now for a considerable amount of time repeating the test that originally triggered the issue.
On mako, 2 devices running:
device #1, 329 iterations - no corruption observed
device #2, 251 iterations - no corruption observed
And also on another spare device using rtm image,
device #3, 176 iterations, - no corruption observed
I can continue running these for another few days if need be, but I think it is fairly conclusive that the data=journal change prevents the apparmor metadata from ending up on the /var/lib/
@Ricardo,
I've added copious amounts of debug into the kernel and a shim on umount too, and I can't see /dev/mmcblk0p23 being unmounted anywhere. Can you tell me where this umount is expected to be actioned on shutdown? I just can't see it.
With full journaling on I'd expect this to cope with a missing umount, as it can replay the journaled data and metadata and so we don't get errors; however, with the data=ordered option I could expect to see the metadata being perhaps sane whereas the file data is not written back, and hence we see old data, possibly the reason for this corruption.
So, I'd really like to have some idea where the umount actually occurs.
The debug shows me:
boot --> initrd --> mount /dev/mmcblk0p23 onto /tmpmnt
and then
[ 5.142499] mount: /root/userdata
[ 5.142591] do_mount: /tmpmnt -> /root/userdata [<null>]
[ 5.143811] do_mount return -> 0
But for the life of me, I can't see this being unmounted. So that's my concern: it may not be unmounted, and hence be the root cause of the data corruption.
So just to re-iterate:
1. I believe the file system is not being cleanly umounted.
2. data=journal saved us from disaster because it works so well.
My recommendation is to keep data=journal, because a user can power off the device (battery death, pulling out the battery, random kernel reboot, etc.) and data=journal has conclusively been shown to be the only RELIABLE way to ensure data and metadata are sane.
I really would like to warn against fixing this bug by doing the umount and changing back to the faster but definitely less rugged data=ordered option. I think that would really be too risky IMHO.
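A quick way to confirm which journaling mode a given mount actually ended up with is to look at the active options in /proc/mounts. This is only a sketch; the optional mounts-file argument exists purely so the parsing can be exercised against a canned file.

```shell
# Print the data= journaling mode for a mountpoint, falling back to
# the ext4 default when no explicit option appears in the options field.
data_mode() {
    mountpoint="$1"
    mounts="${2:-/proc/mounts}"
    opts=$(awk -v mp="$mountpoint" '$2 == mp { print $4 }' "$mounts")
    case "$opts" in
        *data=journal*)   echo "data=journal" ;;
        *data=writeback*) echo "data=writeback" ;;
        *)                echo "data=ordered" ;;  # ext4 default
    esac
}
```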
after running with the change for one week on my krillin i must say that i dont really see any performance impact or any other bad behavior from the change ....
I don't think we're seeing many writes to that partition; most of it is probably data pages being flushed out in the background, so it's not going to bite too hard. We do have tools to figure out how much is being written if it needs some analysis.
On Fri, Nov 28, 2014 at 3:29 PM, Oliver Grawert <email address hidden> wrote:
We could cover most of the possible causes for not unmounting it
properly, just wonder if we should still take the risk of moving back
to data=ordered.
Some use cases:
* Hard reboot (power + volume up): the userspace can probably be
notified by that, forcing the ro remount
* Adb: doing the emergency readonly mount as suggested by oliver
* $ sudo reboot: just making sure the halt process indeed umount the
partition successfully
* Low battery: forcing shutdown if battery lower than 3% (?)
* Kernel panic: can we force the kernel to flush things to disk when
we get a crash? This is important as well, as it seems we can still
easily make the kernel crash (especially when using Bluetooth).
i think the "sudo reboot" and "low battery" case are the same, both should use init's shutdown/reboot commands ...
i dont think a kernel panic is a "normal usecase", if we ship panicking devices i think we failed ... developers should simply be aware that panics can cause corruption, this is no different to desktop development
On Tue, Dec 2, 2014 at 7:30 AM, Oliver Grawert <email address hidden> wrote:
> i think the "sudo reboot" and "low battery" case are the same, both
> should use init's shutdown/reboot commands ...
Right.
> i dont think a kernel panic is a "normal usecase", if we ship panicking
> devices i think we failed ... developers should simply be aware that
> panics can cause corruption, this is no different to desktop development
Well, it might not necessarily be a "normal" use case, but we should
expect it to happen. It's better to handle the crash well than to
expect it never to crash at all, as we can't guarantee that with this
kernel.
And we should be thinking about normal users instead of developers :-)
I also think we need to consider that the safety of data is the ultimate goal. Quite frankly, unpredictable things happen: users can do all kinds of things like yank out the battery and trip kernel panics. Nobody has formally verified that the kernel is perfect, so there is always the possibility of the device spontaneously rebooting or dying before data is written back.
There are multiple tasks still open for this bug. Can they be closed or are there still things to do?
adbd can still not remount the filesystem readonly on "adb reboot"; due to the fact that we drop all privileges, writing to /proc/sysrq-trigger isn't possible, and i haven't found a way around this yet ... (this is a corner case and potential fs corruption should be handled by the fsck on next boot anyway, but it would be nice if adbd could work as expected here, which is why the task is still open)
setting the android-tools task to a realistic value, the remaining bit is a corner case for normal users and is already shielded from doing any damage by the fact that we do an fsck on every boot now.
marking resolved for purposes of this project
I might have hit this issue again, I have some files in a git repo on my phone (mako running devel channel) and when I tried to do a `git commit -a` just now I got:
error: corrupt loose object '4e1f9449b61f1b
fatal: loose object 4e1f9449b61f1b1
This is a 6.5K zlib compressed data file, when I moved it out of the way and tried to do a `git commit -a` I was prompted to add things to the repo that were already there, at this point I gave up and checked out a fresh working copy of the repo.
It is of course possible that this corruption was caused by something like a `git pull` failing half way though due to a network connection issue, but I don't remember this happening.
Added application-
confinement and apparmor tags since this bug affects both and it will be easier to find. | https://bugs.launchpad.net/ubuntu-rtm/+source/linux-mako/+bug/1387214 | CC-MAIN-2018-43 | refinedweb | 3,657 | 59.64 |
On 08/29/2017 02:39 AM, Ashish Mittal wrote:
> Signed-off-by: Ashish Mittal <Ashish Mittal veritas com>
> ---
>  tests/virstoragetest.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)

Similarly this probably should be merged with the code that alters qemu_{command|block}...

> diff --git a/tests/virstoragetest.c b/tests/virstoragetest.c
> index d83db78..f8444e1 100644
> --- a/tests/virstoragetest.c
> +++ b/tests/virstoragetest.c
> @@ -1585,6 +1585,16 @@ mymain(void)
>              "<source protocol='sheepdog' name='Alice'>\n"
>              "  <host name='10.10.10.10' port='7000'/>\n"
>              "</source>\n");
> +    TEST_BACKING_PARSE("json:{\"file\":{\"driver\":\"vxhs\","
> +                       "\"vdisk-id\":\"c6718f6b-0401-441d-a8c3-1f0064d75ee0\","
> +                       "\"server\": { \"host\":\"example.com\","
> +                       "\"port\":\"1234\""
> +                       "}"
> +                       "}"
> +                       "}",
> +                       "<source protocol='vxhs' name='c6718f6b-0401-441d-a8c3-1f0064d75ee0'>\n"
> +                       "  <host name='example.com' port='1234'/>\n"
> +                       "</source>\n");

For consistency, I'll modify port to be 9999 and I'll add the "type:tcp," string as well to match the previous change.

John

>  #endif /* WITH_YAJL */
>
>      cleanup:
Hey, I am trying to use an Arduino Ping sensor so a user can control the movement of a bubble in Processing as part of a project. I want the user's movement measured by the Ping sensor and have Processing pick up the distance to control the bubble. I have code from a tutorial site, modified slightly to shrink and grow a ball according to a user's distance, as a sample to try to get the two talking to each other, but they are not communicating for some reason. The normal Arduino Ping example works fine; it is just the Processing side of things that won't cooperate - I keep getting Distance: 0.
import processing.serial.*;

//DisplayItems di; // background color, grid, etc. are controlled in the DisplayItems object

// width and height should be set here
int xWidth = 600;
int yHeight = 600;

// set framerate
int fr = 24;

// set up the display items you want by choosing true
boolean bck = true;
//boolean grid = true;
//boolean g_vert = true;
//boolean g_horiz = true;
//boolean g_values = true;
boolean output = true;

// these variables are for the serial port connection object
Serial port;
String portname = "COM4"; // find the name of your serial port in your system setup!
int baudrate = 9600;      // set baudrate here
int value = 0;            // variable used to store value from serial port
String buf = "";          // String buffer to store serial values
int value1 = 0;           // value1 is the read value

// setup initializes displayItems and serial port objects
void setup() {
  size(xWidth, yHeight);
  frameRate(fr);
  // di = new DisplayItems();
  port = new Serial(this, portname, baudrate);
  println(port);
}

// the serial event function takes the value of the event and stores it in the corresponding variable
void serialEvent(int serial) {
  if (serial != 10) {
    buf += char(serial);
  } else {
    // extract the value from the string 'buf'
    buf = buf.substring(1, buf.length());
    // cast the value to an integer
    value1 = int(buf);
    buf = "";
  }
}

void createCircle() {
  // Adding constraints to keep the circle within the frame size
  if (value1 > width) {
    value1 = width;
  }
  // Draw the circle
  ellipse(width/4, height/4, value1, value1);
}

// draw listens to the serial port, then draws
void draw() {
  while (port.available() > 0) {
    value = port.read();
    serialEvent(value);
  }
  // di.drawBack();
  noStroke();
  fill(5, 60, 255);
  createCircle();
  // di.drawItems();
  if (output) println("Distance: " + value1);
  delay(1000);
}
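One thing worth noting about the Arduino side: serialEvent() above throws away the first character of each line (buf.substring(1, ...)), so whatever sketch feeds it presumably prefixes each reading with one header character. Below is a minimal shape for the sender, assuming the standard Ping))) example; the 'A' prefix is an assumption for illustration, not from the original tutorial.

```cpp
// Pure helper from the stock Ping))) example: the echo pulse covers the
// round trip, and sound travels roughly 29 microseconds per centimeter,
// so divide by 29 and then by 2.
long microsecondsToCentimeters(long microseconds) {
  return microseconds / 29 / 2;
}

// In the Arduino sketch, loop() would then send one prefixed line per
// reading, matching the newline-delimited parsing in serialEvent():
//
//   long cm = microsecondsToCentimeters(pulseIn(pingPin, HIGH));
//   Serial.print('A');      // header byte, stripped by substring(1, ...)
//   Serial.println(cm);     // newline terminates the value
//   delay(100);
```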
I've tried the Firmata and Arduino examples but had no luck. Any help would be much appreciated.
Forum: Other Pololu Products
I would like to implement your S7V8F3 switching step-up/step-down regulator (3.3 V) to run a battery-powered GSM module (Adafruit FONA). This unit has a Li-ion battery charger circuit built in, and a tap for the battery output. I want to run the battery output to your 3.3 V buck/boost regulator to power the microcontroller that will command the FONA board. I was told that the noise from the S7V8F3 switching step-up/step-down regulator could potentially interfere with the cellular operation of the FONA board. Is this something you might have information about? Is this type of circuit inherently noisy?
Yes, all of the battery holders we carry are wired in series.

Even the back-to-back battery holders have all their cells in series?
Hi, I am looking forward to buying a battery holder, but the product specifications don't say what voltage it provides. I mean, I don't know if all the cells are in parallel, or if some are connected in series to obtain a higher supply voltage with the same number of cells.

Thank you, David.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI've just tried my p-star via MPLABX and p-load.
#include <xc.h>
#define _XTAL_FREQ 48000000
#define LED_GREEN(v) { TRISB7 = !(v); }
#define LED_YELLOW(v) { TRISB6 = !(v); }
#define LED_RED(v) { TRISC6 = !(v); }
void main() {
// Set up the LEDs
LATB7 = 1;
LATB6 = 1;
LATC6 = 0;
/* Enable Timer 0 as a 16-bit timer with 1:256 prescaler: since
the instruction speed is 12 MHz, this overflows about every 1.4seconds. */
T0CON = 0b10000111;
while (1) {
TMR0L; // trigger an update of TMR0H
// Blink the green LED with a period of 1.4 s
LED_GREEN(TMR0H >> 7 & 1);
// Blink the yellow LED with a period of 0.7 s
LED_YELLOW(TMR0H >> 6 & 1);
// Blink the red LED with a period of 0.35 s
LED_RED(TMR0H >> 5 & 1);
}
}
Forum: Other Pololu ProductsI read this old thread today - 6/4/2015 and am posting the following warning about spikes upon power connections. This is a cut-n-paste from the product page:
Forum: Other Pololu ProductsThanks! Overvoltage by a bit is fine. It's below 12V that the transmitter really glitches. In fact, it's more stable at 13V, so I may end up going adjustable. Just wish you would use nonlinear pots so the high end of the adjustment range wasn't so sensitive.
Forum: Other Pololu ProductsUnfortunately, the maximum output voltage for the IC used on our U1V11F5 regulator is 5.5V, so we would not be able to make a 9V version. We have not characterized the RF noise of our regulators, so I am not sure which ones might work for you.
Forum: Other Pololu ProductsHi.
Forum: Other Pololu ProductsHi Claire,
Forum: Other Pololu ProductsU3V12F12 is rated as "VIn no more than VOUT:"
Forum: Other Pololu ProductsI ordered an EagleTree rpm sensor and attached it to my motor.
Forum: Other Pololu ProductsHi.
Forum: Other Pololu ProductsHi,
Forum: Other Pololu Products95% of the time I get a proper reading when counting the time passed on both pwm percentage and rpm. This seems to stabilize at higher rpm.
Forum: Other Pololu ProductsThere are no negative spikes shown on the SLO Scope screen; however, that does not mean there are not negative voltages on the line you are measuring. If there are spikes that are going below 0V, they would be truncated in the program since it can only measure between 0V and VCC. I suggest you measure the line with an actual oscilloscope to make sure there are not any negative spikes. If there are, using the AVR programmer to measure the voltage is not advised.
Forum: Other Pololu ProductsWhere do you see the negative spikes? At the bottom of the SLO Scope screen it says Min A 0.019 Volt and B at 0 Volt.
Forum: Other Pololu ProductsIt is neat that you are able to see the motor pulses and measure RPM from it. It is unclear from the screenshot, but it looks like there are voltage spikes that are going below ground (0V). If this is the case, I recommend not using the AVR programmer to measure the pulses since it can get damaged.
Forum: Other Pololu ProductsI tried to measure the motor pulses with a SLO Scope, the power cycle is visible although the 20 kHz of the SLO Scope is too slow to measure the 10 kHz PWM. Surprisingly the output is high and gnd during powered cycle. You can clearly see the 1/3 power and 2/3 waiting period. I expect the pulses to be better than shown on the scope and will give it a try to write some logic for rpm measurements.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI own 3, 2-cell AA size liFeP04 battery packs 6.6 volts, 800 mah. They do not have a balance connector on them. I can't seem to charge them, I keep getting a "Balance connect error" from my charger. Is it possible to charge LiFe batteries without a balance cable? If not is it possible to install a balance cable to a battery pack?
Forum: Other Pololu ProductsSo I just got my first boards in using this part and there's a mistake. I wrongly assumed that the switch pin near the grounds was also a ground when it isn't. Fortunately I figured out a work around. Attached is a script with the fixed part. I also made the holes slightly bigger so standard pin strips would fit better.
Forum: Other Pololu ProductsHere is the design:
valid =0; // next calculation is invalid for percentage, next calculation is rpm input
rise = TCNT1; // collect moment
if (valid == 1) // 2nd or later passage after overflow timer1
{
if ((rise – lastrise)<=500) // short enough after last pulse = pwm
{
percentage = (((lastfall – lastrise)*100)/(rise-lastrise)); // calculate
}
else
{
rpmrise = rise; // first rise after long period without timer1 overflow;
rpminput = (rpmrise – lastrpmrise); // calculate counts between commutations
lastrpmrise = rpmrise; // remember last first rise
}
}
else
{
valid = 1; // only once per timer1 overflow
lastrpmrise = rise; // firstrise after timer1 overflow
}
lastrise = rise;// store last rise
lastfall = TCNT1;
Forum: Other Pololu ProductsJust to follow up. I soldered a header pin to the enable pin and set it to HIGH via my ARDUINO, but I'm still not reading any output voltage...I almost had my hopes up!
Forum: Other Pololu ProductsI was measuring voltage at the terminal blocks. What happens if I bought the regulator from a Pololu vender () and not from Pololu.com directly?
Forum: Other Pololu ProductsWere you measuring the voltage at the terminal blocks? If so, it does sound like your regulator is damaged. If you contact us directly at support@pololu.com with your order information and reference this forum post, we might be able to help you with a discount on a replacement.
Forum: Other Pololu ProductsUsing a Hall sensor for rpm or Amp sensor for load works, the idea of taking one of the motor leads makes installation less complex but will challenge my programming skills.
Forum: Other Pololu ProductsI did notice that the pins on the terminal blocks are much smaller than the through holes provided on the pcb.
Forum: Other Pololu ProductsI am not sure how well measuring a lead on a BLDC motor would work with the our level shifter. I expect there to be large variations and spikes in the voltage that might make it hard to measure the signal without some kind of filter. Using a Zener diode like you suggested might work. However, since you are trying to determine RPM of your motor, it will probably be easier to use the Kv rating of the BLDC motor along with its current draw instead.
Forum: Other Pololu ProductsThank you for posting those pictures. The connections between the terminal blocks and the board look like they might not be making good connections. Could you try retouching the pins with a soldering iron to make sure the pins make good connections with the board? You might find this soldering tutorial helpful.
Forum: Other Pololu ProductsA normal RC motor has three wires. The speed controller alternates a pwm signal on these three wires. So each wire has a pwm-ground-ground-pwm-ground-ground-etc cycle.
Forum: Other Pololu ProductsHi Jeremy,
Forum: Other Pololu ProductsThe 7A version can source enough current for all your devices, so I am not sure what might caused it to stop working. Could you tell me more about your setup? What is the capacity and C rating of your 2S LiPo battery? How were all the devices connected? Could you post pictures of your setup?
Forum: Other Pololu ProductsHi Jeremy,
Forum: Other Pololu ProductsYour diagram and plan seem reasonable. For the charging station, you will probably want to mechanically/physically prevent the robot from connecting power backwards.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI want to determine the motor rpm of my RC motor by using one of the motor leads as input and process the signal through a Baby-O.
Forum: Other Pololu ProductsI'm using the D15V35F5S3 Step-Down Voltage Regulator with a 2S1P / 7.4v / 2Cell Lipo Battery to control two 30mm 5V fans that draw .46 AMPS, an LED chaser that draws 1.3 AMPS, two 24 pixel NeoPixel rings and two Arduinos.
Forum: Other Pololu ProductsThanks for that information! Sounds like the power mux should work great for my case.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsIt sounds like you have the high-speed TM1804s. If you want to be sure, you or whoever ordered the LED strips could log in to pololu.com, click "My Account", and check the order history.
Forum: Other Pololu ProductsI have a Raspberry Pi B+ based robot built using Pololu parts and I want to add a power dock to charge the 5V battery pack (its a 5V USB battery pack) and also power the RPi.
Forum: Other Pololu ProductsWow, i can't believe that I missed that. This page shows the low-speed timings. I have referred to that page dozens of times while messing with my code. It was never once obvious to me that you guys had two different sets of these lights at two different sets of timings. That page and this one are practically indistinguishable from each other. Now I'm not sure whether I have the high-speed or the low-speed set.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI have a few projects where I want to build a bare bones Arduino into the circuit.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsAttached is a Eagle Cad script which will generate the Pololu Pushbutton switch. Hasn't been tested, but I just ordered some boards that use it so I'll know in a week or so.
Forum: Other Pololu ProductsIf I wanted to plug my relay in by pins, looking at the pin assignment table, would the relay be a digital or analog output?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI would like to make a CAD part for you pushbutton power switch so I can attach it to my pcb, but the holes where the push button go aren't on the same .1" grid.
Forum: Other Pololu ProductsHello.
Our tests indicate that the pulse widths do not have to be precise: there is a threshold around 0.6 microseconds that determines whether the pulse is a 0 or a 1, and a wide range of pulse widths on both sides of the threshold will work.
Forum: Other Pololu ProductsWell I did an experiment tonight using the Arduino drivers which I had working. I built a voltage divider and was still able to get the LEDs to work with a 3.3V signal from the divider. So I am forging ahead with trying to get this to work on a BBB. Plus I ordered some shifters from you guys which I can use if need be. The problem I am facing right now is whether or not I can disable interrupts on the board while I send the signal. If I can't disable interrupts, what other strategies can I use?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsI have some of the RGB LED strips (part # 2543) and I have been working on trying to get a BBB (BeagleBone Black) to output a signal quick enough to control these LED strips. It's proving to be time consuming, but I think I may be able to produce a signal if I dig a little deeper into the GPIOs via memory mapping.
Forum: Other Pololu ProductsHello, I am about to purchase one of pololu's "Pololu Basic 2-Channel SPDT Relay Carrier with 5VDC Relays (Assembled)" relay boards. Before I do so I want to understand how to set it up. My micro controller is a orangutan svp-1284....When hooking the relay board pins,EN1..VDD..GND and EN2..VDD..GND , to my svp, which pins do I connect them to? Do the relay board pins go into the servo pins, or do they go in the black pin block in between to the servo pins and the 6 usb/avr pins. Help!
Forum: Other Pololu ProductsHello.
I plan on providing a 5.19VDC up to 8A, input to the module and looking for a 9VDC output.
Forum: Other Pololu ProductsThere is a warning on the product page, about overheating. "Regulator may overheat at lower input currents when VIN is much lower than VOUT. Available output current is a function of VIN, VOUT, and the regulator efficiency".
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsThank you
Forum: Other Pololu ProductsI measured the voltage across VDD and noticed that, when I power down, the voltage falls pretty quickly to .5V but then slows down. If I power up the sensor before the voltage falls to about .3V, I experience the brownout condition. As a workaround, I attached a 10k resistor to VDD and the voltage drops much quicker when I turn the power off. I gather the pulldown resistor is the best solution?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsFor a few days now, we have been struggling with some strange behavior on the compass. We have isolated at least one cause of the problem (hopefully, this is the sole cause).
Forum: Other Pololu ProductsI'm using a D24V5Fx in a project with a raspberry pi. I want to use the SHDN function of the regulator in a low power state.
Forum: Other Pololu ProductsI am glad you got it working again. You can follow the tutorial under the "Programming AVRs Using Atmel Studio 6" section in the AVR programmer user's guide to learn how to set up your system to program AVRs using Atmel Studio and your programmer.
Forum: Other Pololu ProductsAh yes thank you that worked!! It must have been where the command line was broken to the next line - I saved it to a text file before restarting. Do you know if there is any way to deploy code from atmel studio using this device? Or that software is for atmel official programmers only?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsIf you email us at support@pololu.com with your order information, and reference this forum post, we can look into replacement options for you.
Forum: Other Pololu ProductsI was using the programmer I bought fine nearly every day. It's a brilliant product and I love the size of it. However, stupid Windows 7 made me install updates and restart. I postponed restarting by 4 hours about 10 times and then finally thought I would restart when I watched TV because it was annoying me so much.
Forum: Other Pololu ProductsCan anyone tell me the full dimensions of the S18V20F12 regulator. It has the mounting hole dimensions and board outline dimensions but no pin header dimensions. Whats the spacing between the input and output .1" header pins .Also spacing from end of board.
Forum: Other Pololu ProductsManaged to get my hands on another USB to mini USB today. Tried it with my girlfriend's laptop too, no luck. I assume that it only has to be plugged into a computer for any LEDs to come on? Either way, I had it plugged into the 3pi anyway.
Forum: Other Pololu ProductsHi, Martin.
Forum: Other Pololu ProductsOur USB AVR programmer is powered through the USB connector, not the 3pi, so a bad cable, a broken USB port or damaged contacts in the USB connector can leave the programmer unpowered. If the programmer was getting power, the green LED next to the USB connector would light up.
Forum: Other Pololu ProductsI'll try and get my hands on another USB cable, but the latter two solutions don't work. Regardless, isn't it powered by the 3pi anyway? In which case, wouldn't the LEDs come on even without plugging it into a computer?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsJust a minor FYI feedback on the D24V5x voltage regulator.
Forum: Other Pololu ProductsPlugged in the USB AVR Programmer that I bought with my 3pi, turned on the 3pi but no LEDs on the programmer light up. What gives?
Forum: Other Pololu ProductsHello.
Forum: Other Pololu ProductsThanks Nathan!
Forum: Other Pololu ProductsHello Nathan,
Forum: Other Pololu ProductsHello, George.
Forum: Other Pololu ProductsHello, George.
Forum: Other Pololu ProductsHi,
Forum: Other Pololu ProductsHello Nathan,
Forum: Other Pololu ProductsHello Nathan again,
Forum: Other Pololu ProductsHello Nathan,
Forum: Other Pololu ProductsHello, George.
Forum: Other Pololu ProductsDear Pololu Team,
Forum: Other Pololu ProductsHello Nathan,
Forum: Other Pololu ProductsHello Nathan,
Forum: Other Pololu ProductsYou might try searching for automotive relays to find a low voltage high current relay, though these usually can't be driven directly from a microcontroller. We make a carrier board with the circuitry for driving relays that you could add your own relay to if you can find one in the same form factor or with a wiring harness.
Forum: Other Pololu ProductsHello Nathan, | http://forum.pololu.com/atom.php?f=3 | CC-MAIN-2015-27 | refinedweb | 3,062 | 65.01 |
0
Hi
It's been a while i'm learing C++ but still i have some issue with
Move semantics and some other stuff.
I wrote this class but i don't know why the
Copy constructor is getting called
and the output is a little strange to me, shouldn't the
Move Constructor gets called three times only? i don't know what i did wrong.
Also in the
Move Connstructor i could use
Data(other.Data) but in the
Copy Constructor i couldn't
i had to write a
for loop or use
std::copy, why?
I appreciate your answers.
Output
Move constructor is called! Move constructor is called! Copy constructor is called! Move constructor is called! Copy constructor is called! Copy constructor is called!
main.cpp
#include <iostream> #include <string> #include <vector> #include <utility> #include <algorithm> class Foo { private: std::size_t size; int* Data; public: Foo(std::size_t length): size(length), Data(new int [length]) {} //Copy Constructor Foo(const Foo& other): size(other.size), Data(new int[other.size]) { std::cout<<"Copy constructor is called!"<<std::endl; std::copy(other.Data, other.Data + size, Data); } //Move Constructor Foo(Foo&& other): size(other.size), Data(other.Data) { std::cout<<"Move constructor is called!"<<std::endl; other.size=0; other.Data = nullptr; } std::size_t retSize() const { return size; } ~Foo() { delete [] Data; size = 0; } }; int main () { std::vector <Foo> vec; vec.push_back(Foo(15)); vec.push_back(Foo(12)); vec.push_back(Foo(17)); return 0; } | https://www.daniweb.com/programming/software-development/threads/495649/move-semantics-c-11 | CC-MAIN-2017-09 | refinedweb | 244 | 61.22 |
-- posted 23 April 2007 by Duoas
Description
A simple class that produces vectors interpolating between two given vectors in a given amount of time. The vectors may be any dimention, so long as they can be parsed as a sequence. The interpolation defaults to a linear interpolation, but two parameters are provided to modify it into various smooth interpolations.
Background
I had been playing around with the LinearInterpolator and SmoothInterpolator functions here in the cookbook, and after fixing a couple (serious) bugs, I made some other heavy modifications to suit my needs. Finally, I decided to post my updates.
I don't have Numeric, and as it is getting rather old anyway I have puttered along without it. You might find some speed improvements by changing the list comprehensions into Numeric.arrays. The following apply:
Well, you get the idea.
interpolator.py
This is the module. You can run it through pydoc to get nice HTML documentation.
R"""
interpolator.py
Perform a simple interpolation between two points of any dimention, without
the use of Numeric.
2007 Michael Thomas Greer
Released to the Public Domain
"""
class Interpolator( object ):
R"""
The line actor. Returns successive points along a line until completely
traversed. Once traversed this class can do nothing more.
Care has been taken to fix errors due to floating-point arithmetic.
implementation notes
Following the docs, some simple math must be done to get the movement
desired. The first thing to remember is that we cannot change the FPS,
which is to say, we cannot change the speed of the program. So all our
manipulations work by modifying the step size between the start and
stop vectors (which, in other words, is the apparent speed of line
traversal).
For the default linear interpolation, the step size remains constant:
step = dx *( 1.0 /FPS ) /seconds
where dx represents the total distance to traverse for each dimention.
For non-linear interpolations, we must factor in our shape, which is
specified relative to the linear interpolation speed. The shape is a
simple power function which gives us a nice, spikey curve with a
vertical asymptote at the midpoint. The function is:
factor = shape *(closeness_to_asymptote **(shape -1.0))
where closeness_to_asymptote is a number in the range [0.0, 1.0]; 0.0
being at either end of the line and 1.0 being at the asymptote. (The
location of the asymptote, or "middle", is modifiable.)
The final calculation is to modify each vector position by (step *factor).
"""
#---------------------------------------------------------------------------
def __init__(
self,
start = None,
stop = None,
seconds = None,
fps = None,
shape = 1.0,
middle = 0.5
):
R"""
Create a new interpolator to produce timed vectors along a line.
arguments
start - The initial vector. If no vectors are specified the object is
treated as a placeholder object and does absolutely nothing
but return the two-dimentional vector (0, 0).
stop - The final vector. If no final vector is specified the object
is treated as a placeholder object and does absolutely nothing
but return the 'start' vector.
seconds - The number of seconds you wish the interpolation to take.
fps - The current number of frames per second. If you specify
seconds you must specify the FPS, otherwise a ValueError
exception is raised with the message "You must specify both
'seconds' and 'fps'."
shape - Modify the interpolation as non-linear.
First some quick information to explain:
- The total time to traverse the line is constant (in other
words, this function cannot take more or less 'seconds'
worth of vectors than you specify).
- For the same reason, the number of vectors produced by this
function (for a given value of 'seconds') is constant.
- Hence, the -speed- of a vector is defined wholly by the
distance between it the previously produced vector.
- In a linear interpolation, the distance between vectors (or
again, the speed of each vector) is constant; every vector
has the same speed.
- In a shaped interpolation, the speed of individual vectors
is modified: some are faster and some are slower.
The shape is the number of times greater than linear speed the
middle vector travels.
A shape of 1.0 is a linear interpolation. A shape of 2.0 has
slower vectors at the end and a vector in the middle traveling
twice as fast as it would in a linear interpolation. A shape
of 0.5 travels half as fast. There really isn't any upper-
limit, but zero is the lower. You'll get a ValueError if you
try any value less-than or equal-to zero.
middle - The location of the "middle vector" along the line, expressed
as a value from 0.0 (at 'start') to 1.0 (at 'stop').
"""
self._sec = -1
self._length = 0
if start is None: start = (0, 0)
if stop is None: self.stop = start
else:
if (seconds is None) or (fps is None):
raise ValueError( "You must specify both 'seconds' and 'fps'" )
if shape <= 0.0:
raise ValueError( "The 'shape' argument must have value > 0.0" )
if not (0.0 <= middle <= 1.0):
raise ValueError( "The 'middle' argument must be in range [0.0, 1.0]" )
self.stop = stop
self.diff = [ b -a for a, b in zip( start, stop ) ]
self.inc = 1.0 / fps
self.step = [ a *self.inc /seconds for a in self.diff ]
self._pos = start
self._sec = seconds
self.seconds = seconds
self.shape = shape
self.mid = middle
self.maxs = [ max( a, b ) for a, b in zip( start, stop ) ]
self.mins = [ min( a, b ) for a, b in zip( start, stop ) ]
self._length = None
#---------------------------------------------------------------------------
def next( self ):
R"""
Calculate the location of the next vector in the line.
The 'start' vector cannot be a "next" vector. (This is actually rather
convenient if you think about it.) That said, if your interpolation is set
up right, the first "next" vector might actually be in the same location
as the 'start' vector...
Care is taken that the 'stop' vector is always the final vector.
returns
The next vector or None if all done.
"""
def d( a, b, c ):
if b == 0.0: return c
else: return a /b
if self._sec >= 0.0:
if self.shape == 1.0: factor = 1.0
else:
percent = 1.0 -(self._sec /self.seconds) # percent complete
if percent < 0.95:
if percent > self.mid: k = d( (1.0 -percent), (1.0 -self.mid), 1.0 )
else: k = d( percent, self.mid, 0.0 )
if k in [0.0, 1.0]: factor = k *self.shape
else: factor = pow( k, self.shape -1.0 ) *self.shape
else:
# The final 5% of the line is calculated linearly to avoid any
# 'jump' or 'snap' artifacts caused by FPU arithmetic errors.
if self.mid is not None:
self.diff = [ b -a for a, b in zip( self._pos, self.stop ) ]
self.step = [ a *self.inc /self._sec for a in self.diff ]
self.mid = None
factor = 1.0
self._pos = tuple(
[ min( max(
a +(step *factor),
mina ), maxa )
for a, step, mina, maxa in
zip( self._pos, self.step, self.mins, self.maxs )
]
)
self._sec -= self.inc
return self._pos
else:
self._pos = self.stop
return None
#---------------------------------------------------------------------------
def _get_pos( self ):
R"""Return the current vector's location."""
return self._pos
#---------------------------------------------------------------------------
def _get_length( self ):
R"""
Returns the length of the line. The line's length is not calculated
until first required.
"""
from math import sqrt
if self._length is None:
sum = 0
for a in self.diff: sum += a *a
self._length = sqrt( sum )
return self._length
#---------------------------------------------------------------------------
pos = property( _get_pos, doc='The location of the current vector. Read-only.' )
length = property( _get_length, doc='The length of the line. Read-only.' )
# end interpolator
test.py
Here is a simple test program you can use to play around with it.
Click anywhere on the display to cause the colored squares to re-center themselves around the spot where you clicked, using a one-and-a-half-second interpolation.
Enjoy playing with it!
import pygame
from interpolator import *
RESOLUTION = (800, 600) # (adjust for your resolution)
SECONDS = 1.5 # slow enough to see how it goes, but not too slow...
class Sprite( pygame.sprite.Sprite ):
def __init__( self, color, pos ):
pygame.sprite.Sprite.__init__( self )
self.image = pygame.Surface( (30, 30) )
self.image.fill( color )
self.rect = self.image.get_rect()
self.rect.center = pos
self.line = Interpolator( pos )
def update( self, screen ):
screen.fill( (0, 0, 0), self.rect )
self.line.next()
self.rect.center = self.line.pos
screen.blit( self.image, self.rect )
def main():
pygame.init()
screen = pygame.display.set_mode( RESOLUTION )
x, y = RESOLUTION
x //= 2
y //= 2
shapes = [1.0, 2.0, 2.0, 4.0, 0.5]
middles = [0.5, 0.5, 1.0, 0.0, 0.5]
xoffsets = [0, -30, 30, -30, 30]
yoffsets = [0, -30, -30, 30, 30]
colors = [
( 52, 98, 166),
(244, 127, 48),
(176, 84, 175),
(229, 35, 58),
(255, 255, 162)
]
all = pygame.sprite.OrderedUpdates()
for xoffset, yoffset, color in zip( xoffsets, yoffsets, colors ):
all.add( Sprite( color, (x +xoffset, y +yoffset) ) )
clock = pygame.time.Clock()
all.update( screen )
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type in [pygame.QUIT, pygame.KEYDOWN]:
return
elif event.type == pygame.MOUSEBUTTONDOWN:
fps = clock.get_fps()
x, y = event.pos
for sprite, shape, middle, xoffset, yoffset in zip(
all.sprites(), shapes, middles, xoffsets, yoffsets
):
sprite.line = Interpolator(
sprite.rect.center,
(x +xoffset, y +yoffset),
SECONDS,
fps,
shape,
middle
)
all.update( screen )
pygame.display.update()
clock.tick( 200 )
main() | http://www.pygame.org/wiki/Interpolator?parent=CookBook | CC-MAIN-2014-15 | refinedweb | 1,564 | 70.29 |
I need to write wrapper functions for fork and signal functions. These functions will handle errors by fork() and signal() and exit, if an error occurs.
The problem is I am not sure exactly what to check for on either function.
Here is some sample code for both of them. Fork does not work correctly (I'm guessing I may need to use fprintf instead of perror) and I am not sure if I am returning the right thing for signal.
Any help greatly appreciated.
pid_t Fork(void) { pid_t p; if ((p=fork())<0) { perror ("fork error"); exit(12); } return p; } void (*Signal (int sig, void (*disp) (int))) (int) { if ((signal(sig, disp))==SIG_ERR) { perror ("signal error"); exit(13); } return (*disp); } | https://www.daniweb.com/programming/software-development/threads/95272/need-help-writing-a-wrapper-function-for-fork-and-signal-functions | CC-MAIN-2018-05 | refinedweb | 121 | 72.05 |
StatusMessages getting lost after ExceptionGerman Escobar Feb 24, 2010 9:40 PM
I have this simple component:
@Scope(ScopeType.PAGE) @Name("sendMessage") public class SendMessageBean { @In private StatusMessages statusMessages; public String sendMessage() throws Exception { statusMessages.addFromResourceBundle(StatusMessage.Severity.ERROR, "app.error"); throw new UnExpectedException(e); } }
It just adds a message to the StatusMessages and throws an Exception. The relevant pages.xml configuration looks like this:
... <exception class="com.myproject.UnExpectedException"> <redirect view- <message severity="ERROR">An error ocurred</message> </redirect> </exception> ...
When the sendMessage() method is called, I'd expected both messages to get shown on error page (the message added to the StatusMessages and the message from the pages.xml configuration). However, only the message from pages.xml is shown.
AFAIK this should work! The StatusMessages is a conversation scoped component and should survive the redirect.
Thanks.
1. Re: StatusMessages getting lost after ExceptionRohit Choudhary Nov 13, 2012 1:41 PM (in response to German Escobar)
Hi,
Were you able to get an answer or workaround for this issue? In my app the StatusMessages keep disappearing when there is some exception in the app.
I cannot let all the exceptions to be handled by Seam. I have to restart the JBoss server to get the messages to appear again.
Please advise.
thanks
2. Re: StatusMessages getting lost after ExceptionMartin Kouba Nov 14, 2012 4:34 AM (in response to Rohit Choudhary)
Hi,
the example above works as expected. However you have to use long-running conversation (@Begin, conversation.begin(), etc.), since temporary conversation does not survive redirect.
I'm not sure about "StatusMessages disappearing" and restarting AS. Provide more info so that we can check :-)
3. Re: StatusMessages getting lost after ExceptionRohit Choudhary Nov 14, 2012 10:11 AM (in response to Martin Kouba)
Hello Martin,
Thanks for quick reply. I have EAP 5.0 running a JSF Seam app ( Seam 2.2.1.EAP5). The database that the app uses goes down every night. I can see exceptions during the night in the logs when JBoss is trying to renew the db connections in the pool. One side-effect (a big one) I see is that the Status Messages stop appearing on the pages once that happens.
When I restart the JBoss, the messages start appearing on the page again. If the database does not go down and if there is no major uncaught exception in the app,
the Status Messages work properly in the app for days.
Is it a bug in Seam version?
Please advise.
thanks
Rohit
4. Re: StatusMessages getting lost after ExceptionMartin Kouba Nov 14, 2012 10:36 AM (in response to Rohit Choudhary)
Hm, looks odd. What are the other side effects?
5. Re: StatusMessages getting lost after ExceptionRohit Choudhary Feb 14, 2013 11:23 AM (in response to Martin Kouba)
How to Reproduce
1# You must be using the StatusMessage.add() method which gets the message from a .properties file (The message is got via ResourceBundle).
2# You execute multiple times the part of your system that is suppose to prompt you the messages. After some executions the messages stop showing up.
Issue
After decompiling the SEAM class I notice that the ResourceBundle turns NULL after multiple executions, I haven't gone too deep in finding why the ResourceBundle in SEAM is turning NULL, so instead we placed the messages in the database and we retrieve the messages from there.
It is important to notice that this issue only happens when the application gets the messages from a properties file. If the message is hardcoded this issue never happens.
thanks | https://developer.jboss.org/thread/191385 | CC-MAIN-2018-17 | refinedweb | 598 | 57.16 |
Creating a headless app
There are several steps involved in creating a headless application. Here's a summary of the process:
Create the headless project
A headless app consists of two separate projects in the Momentics IDE for BlackBerry. To create the headless part, you start from a project template and make modifications to the files that are included in that template. To learn how to create this type of project, see Create a new project.
Even if you're planning to create an app that doesn't use Cascades and uses only core APIs, you can still use this approach.
Your main.cpp file should look similar to this:
#include <bb/Application>
#include "HeadlessApplication.hpp"

using bb::Application;

int main(int argc, char **argv)
{
    Application app(argc, argv);
    new HeadlessApplication(&app);
    return Application::exec();
}

If you want to create an app using only core APIs, start with a Core Native project. If you want to create an app using Cascades APIs, see the Cascades version of Create the UI project.
Add assets
To link the headless and UI parts of the app, you must add the headless part as an asset in the bar-descriptor.xml file of the UI part.
<permission system="true">_sys_run_headless</permission>
Long-running headless apps require the _sys_headless_nostop system permission, in addition to the _sys_run_headless system permission, to run at all times. You must apply for this permission to use it when you sign your app. This permission is also specified in the bar-descriptor.xml file of the UI part of the app.
Note that in the invocation target for the headless part, the system STARTED action is used in the filter. By using this action, the headless part of the app is invoked as soon as the app is installed on the device and the device is restarted.
Build and install the app
Now that you've created the UI and headless parts of the app and configured them properly, you can build and install the app. If you're building your app using the Device-Release launch configuration, you need to sign your .bar file. Because the permission for headless apps is a system permission, signing is required when you build your app using a release launch configuration. If you don't sign your app, the app isn't granted the correct permissions to run properly.
If you're building your app using other configurations (such as Device-Debug), you don't need to sign your app before installing it. However, you still need to apply for and receive the appropriate permissions for headless apps to be able to deploy and test your app on a device. The debug token that you use to test your apps on a device contains the permissions that should apply to that app, and these permissions are populated based on your signing account. When you apply for and receive headless app permissions (or other restricted app permissions), these permissions are added to your signing account. If you don't have these permissions, your headless app won't work properly when you deploy and test it on a device.
Last modified: 2013-12-21 | http://developer.blackberry.com/native/documentation/core/headless_apps_create_headless_app.html | CC-MAIN-2014-15 | refinedweb | 502 | 60.85 |
Hi all,
I see a lot of books teaching beginning Python students to code like this:
def func1():
    ...

def func2():
    ...

class Blah(object):
    ...

def main():
    ...

main()
It strikes me that having people "def main():" and then call main() is a bit silly. In fact, it seems to be nothing more than a hold-over from C/C++.
My main gripe (heh, get it? "main"! I'm killin' 'em) with it is this: when my code breaks, I like to be able to inspect the values of variables. But if all of my code is enclosed in "main()", then the variables have all since gone out of scope and aren't available to be inspected.
So on a pragmatic level, I oppose the "def main():" style for the simple reason that it makes debugging a bit harder.
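To make the gripe concrete, here's a minimal sketch (the variable names are made up for illustration):

```python
# Minimal sketch of the debugging difference (hypothetical names).
x = 41  # module-level: still there for inspection after the script runs

def main():
    y = 42  # local to main(): gone once the call ends
    return y

result = main()
print(x)       # module-level name survives
print(result)  # but the name 'y' itself no longer exists out here
```

Run it with `python -i script.py` and you can poke at `x` and `result` from the prompt, but `y` died with `main()`'s stack frame.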
Are there any reasons that I should reconsider my dislike for "def main():"?
Thanks,
Jeff | https://www.daniweb.com/programming/software-development/threads/93017/style-issue-def-main | CC-MAIN-2017-17 | refinedweb | 151 | 86.74 |
Sullivan: All together, now: OH MY GOD! THEY KILLED KENNY!
Okay, so Kenny just ended up injured and determined to leave the Chevy account—and possibly advertising entirely. But I wouldn't put it past Matthew Weiner to have orchestrated that entire sequence just so we'd all have a reflexive South Park response. Kenny himself got to react with an epic rant: "I hate Detroit. I hate cars. I hate guns. I don't even want to look at a steak anymore ... Did I tell you that on the way to the hospital, they tried to stop for lunch?"
That should have been my clue that this was going to be a terrific--and funny--episode, perhaps my favorite of the season. Call me old-fashioned, but I like episodes that move, that use storytelling. This has been a frustratingly uneven season, in part because Weiner chose as his season-long theme the idea that people don't really change, history repeats itself, nothing new ever happens. That's either incredibly bold or idiotic for a dramatic television series.
It also conflicts, at least on the surface, with the actual events of 1968 that have played out during season six. That violent year was so jarring precisely because so many things seemed to be happening to which Americans did not know how to react. Multiple high-profile assassinations, urban riots, the fairy-tale widowed first lady marrying a Greek shipping tycoon--none of these had precedents or prompted familiar responses.
Don mentioned the last development in a surprisingly flirty phone conversation with Betty, which puts this episode in late October 1968. That means Sally has been refusing to return to her father's apartment for several months now. "She says she's not going again," says Betty, who is uncharacteristically restrained in not pushing Don to find an explanation for Sally's abrupt change in behavior. Don looks destroyed—equal parts anxious that Sally will tell Betty or Megan and devastated that he's lost Sally, possibly forever. The episode opens with him curled up in a near-fetal position on Sally's bed, and he's back to self-medicating with alcohol.
Do we have a running count of how many times Megan has said "I don't know what's going on with you/us" this season? Megan, honey, this is the deal with Don Draper. You'll never know what's going on with him but you can guarantee it's not good. At the office, on the other hand, Don is back on his game—assuming his game is torturing Ted and pushing away Peggy. It was bad enough that Peggy left him for Ted, but now Don has to watch her laugh at Ted's jokes, see Ted putting his hands on Peggy, hear Peggy constantly telling him what a good man Ted is. Running into them at the movie theater together—that's their thing! He and Peggy see movies in the middle of the day when they're stuck!—pushes Don over the edge.
Our Don is nothing if not predictably petty, so we know almost before he does that he's going to blow up his truce with Ted, which lasted all of two months. Don tells himself—and anyone who will listen—that he's doing it for the good of the firm. And it's true that both Ted and Peggy have let their feelings impair both their creative and their business judgment. But Don's actions will have the long-term effect of making this shaky partnership unworkable. He lies directly to his partners, assuring Cutler there will be "no more surprises." He humiliates Ted in the St. Joseph's meeting. And by giving Frank Gleason credit for Peggy's commercial idea, he ensures that if the TV spot does win any awards, they won't go to her. If Don had looked at her face by the end of that meeting, he would have seen what became clear in the last scene, that Don has lost both Sally and Peggy forever.
But as sad as that final image was of Don curled up on his couch, desperately alone, this episode had some wonderfully funny scenes and exchanges. Among my favorites was the telephone call between Don and Harry (note Megan's expression of disgust as she hands the phone to Don—she hasn't forgotten good ol' Harry):
Harry: "I have good news for once."
Don: "You found the fun hooker who takes travelers checks?"
Harry: "Why did I tell you that."
We even saw a few of the characters-we-love-to-hate showing signs of personal growth. When Pete uncovers the deception of Bob Benson—or Bic Bittman, as I'll think of him—he reacts very differently than the Peter Campbell of Season One. Remember that when Pete discovered Don's secret and ratted him out to Bert Cooper, he gained ... absolutely nothing. By keeping his mouth shut for Bob, Pete has secured the very best kind of ally: one who owes him.
I can't help thinking, though, that this could potentially turn out very badly for Pete, particularly if someone else learns the truth about Bob and realizes that Pete knew already. And the last time a partner agreed to keep a colleague's secret was when Don told Lane he'd need to leave the firm because of that missing check. I don't think Bob's going to end up offing himself, but I also don't think this plotline will have a happy ending.
For the moment, though, Betty and Sally seem to have reached a detente. In fact, it's just possible that giving Sally a cigarette is the nicest thing Betty has ever done for her. What say you, ladies? Is Sally on track to become a mean girl like her mother? Or is there still hope for our girl in knee socks?
Fetters: Totally agree on Sally and Betty smoking in the car. It was like once there was a cigarette between Sally's fingers, she and Betty could finally communicate in the between-exhales language of adults. They looked like a couple of smoking-in-the-girls'-room queen bees together, so this certainly could be the pivot point that sets Sally on the path to grown-up bitchery a la Betty Draper. And those catty girls at boarding school could speed up that process even further.
Don's early phone conversation with Betty was really striking to me, too, Amy—but to me, it seemed like Don was pretty OK with Sally's distancing herself from him. When Betty says Sally's not coming up for the weekend, he even says to her, "Tell her if she does come, I'll be working the whole weekend." He offers to pay the entire tuition if Sally decides to go to boarding school, and even offers to pay to get her in.
Defense mechanism? Maybe. But at surface level, at least, it almost looks to me like Don's the one who's mad at Sally. Which is pretty inexcusable on all levels, given just how badly he screwed up and how innocently she just happened to catch him in the act of screwing up. Last week we declared that moment a new low for Don Draper, but I think this might be a step lower: He's now cutting his daughter out of his life out of spite (or, best case scenario, shame) because she may or may not have ruined the illicit affair he was having with his friend's wife. And cutting her off with an offer of money, the way we've seen him do before with his mistresses (Midge comes to mind, and his old secretary Allison).
There were so many callbacks to earlier episodes and seasons.
This conversation, it's worth adding, is what ensues right after Don's channel-surfing encounter with Megan's soap opera. Don lands on a TV station on which Megan is wearing a blonde wig and declaring in a ridiculous French accent, "I am talking to joo! Don't you dare eegnore me!"—and Don promptly flips the channel, almost defiantly. It's like a multi-layered reminder that in all possible ways, Don Draper is awful. Like the worst, sleaziest Inception nightmare ever.
Even with Don's early awfulness, though, I agree with you, Amy, that a lot of this episode was surprisingly funny. The SC&P higher-ups sharing their client-wooing horror stories was darkly comical; even Don's little prank on Peggy and Ted had me laughing in disbelief. The St. Joseph's pitch rehearsal looked like something from The Office; Bob Benson speaking Spanish on the phone came out of nowhere and was bizarre and great. (Any working theories on that, by the way? My thought is that it's related to Manolo. We know Manolo isn't into women, we know Bob isn't either, we know they know each other in some way that's likely not really related to Manolo nursing Bob's dad back to health, because—as some commenters pointed out a while back—Bob's also said before that his dad is dead.) Ted and Peggy had a few funny moments, even if it is getting more than a little out of hand (especially for an office setting). I have to keep reminding myself that Ted's married. I, like Peggy, am decently charmed by Ted, against my better conscience—but Don's right when he says Ted's judgment is impaired.
The last line of the episode, of course, is Peggy's declaration to Don that "You're a monster," after his nasty little stunt in the pitch meeting with St. Joseph's. To my ears, "monster" seemed like an odd choice of words; obviously, as we've seen in the last 11 weeks, my mind usually goes right to "sleazy" or "bastard" or "creep." But "monster" has an element of real, innate malice to it. Peggy's called him something more evil—uncontrollably evil—than I've ever thought to call him. Is that what's going on here? What do you think, Eleanor? Has Don Draper become a monster?
Barkhorn: The monster line got me, too. I saw it as a rare example of someone being unfair to Don. Yes, his behavior has been monstrous this season, but in ways that Peggy knows nothing about: neglecting his own wife, sleeping with his ex-wife, sleeping with his (only) friend's wife, freezing out his daughter after she catches him in the act, and on and on.
In contrast, I thought his behavior toward Peggy, though not perfect, was quite compassionate. Yes, it was cruel of him to deprive her of credit for the aspirin ad. (Though I have to wonder just how good of an idea it is, relying as it does on two ethnic cliches—the soup-wielding Jewish neighbor and the photo-snapping "Japanese"—and inspired as it is by a nightmare-inducing horror film.) But it's quite possible there was no other way to convince the client to use the idea at all.
More importantly, Don was right to call Ted and Peggy out for their increasingly obvious flirtation. Whenever the episode showed the two of them laughing and glowing at each other, I was embarrassed for both of them, and worried for Peggy. It's bad enough that many veteran SC&P-ers assume she's gotten to her current position by sleeping with Don. It could be disastrous for her professional reputation if Ted continued to favor her so obviously. We could already see the resentment building in this episode, when Ginsberg made the crack about how he wanted to see if Ted would respond to any ideas besides Peggy's. Those sorts of jokes would only get more frequent if somebody hadn't poured water on the fire.
I also think Don was right to tell Peggy that Ted's "not that virtuous ... he's just in love with you." Despite Peggy and Ted's recent closeness, Don has known and observed him much longer. And as I've mentioned before, the Ted we've seen in past seasons is hardly virtuous: He's petty, weasely, eager to steal Don's clients and employees. Ted's hardly been a moral exemplar this season, either. We've witnessed his insecurities and competitiveness at closer range, and we've also gotten a glimpse at his home life. Remember that sweet scene last episode where Ted returned home to put his kids to bed? His behavior toward Peggy indicates that his renewed domestic devotion was short-lived. Don sees Ted more clearly than Peggy does—and we the audience arguably see him more clearly still. Peggy's judgment is impaired by the thrill of being admired by her boss. Again, the delivery was imperfect, but ultimately it's good for Don to point that out to her.
As for the episode as a whole, I agree with Amy that this was one of the better episodes of the season: funny, surprising, well-paced. I was also struck by how much it rewarded longtime viewers of the series. There were so many callbacks to earlier episodes and seasons. As we've already discussed, there was Pete's deja vu moment with Bob Benson, echoing his discovery of Don's true identity back in Season One. I also couldn't help but take Kenny's accident as sad karmic retribution for that time he drove the John Deere tractor out into the Sterling Cooper offices, which resulted in a PPL executive losing his foot. Peggy's Rosemary's Baby-inspired aspirin ad reminded me of the time SCDP tried, and failed, to model a diet soda spot on Bye Bye Birdie in Season Three. (The Rosemary's Baby pitch also reminds us that Peggy has a child of her own—something that arguably haunts her more than we've realized.)
There was the return (and vindication!) of Glen Bishop, who's evolved from creepy neighbor to protective brother figure. (Amy, you expressed concern that Don's awfulness might make Sally suspicious of men in general—hopefully Glen's kindness toward her will help prevent that from happening.) And of course there was Roger's conversation-stopping reference to Lucky Strike's Lee Garner, Jr., whom we haven't seen since Season Four: "Lee Garner Jr. made me hold his balls."
There were also some more somber callbacks to earlier this season, which show just how far Don has fallen. That breakfast scene at the beginning of the episode—eggs on the stove, orange juice in the glass—immediately made me think of Don's pitch for Fleischmann's margarine. In his "rap session" with Ted, he envisioned a hearty, homemade breakfast on the farm. For a moment this week, it looked like a similarly wholesome scene was developing at the Draper apartment—that is, until Megan burned the eggs and Don spiked his OJ. It was also rather startling to see Don's about-face on Sunkist, and by extension on loyalty. It was only a few episodes ago that he was lecturing Pete on why not to go after Heinz ketchup: "sometimes you gotta dance with the one that brung ya." It was a pretty cynical statement coming from a serial cheater (and not one he was especially devoted to, of course; he ended up going after the ketchup account). Now his professional ethics are more in line with his personal ones: He'll happily ditch Ocean Spray to pursue Sunkist.
And now there's just one episode to go in this uneven, unloved season. Where will the finale leave us? Will the messy merger of SCDP and CGC survive? What about the Drapers' marriage—or the Rosens'? Or the Pete-Bob alliance? Will Peggy and Ted finally cool off? And, of course, the eternal question: Will anyone die?
Ruby extension library for embedding Python in Ruby. With this
library, Ruby scripts can directly call arbitrary Python modules.
Both extension modules and modules written in Python can be used.
Ruby and Python have some differences as described here. This library
enables Ruby users to have the advantages of both languages:
- Straightforward and easy-to-learn OO functionality of Ruby.
- Plenty of modules written for Python.
If you are attracted by Python modules but not fully satisfied with
Python's syntax or type system, try this library!
Author: Masaki Fukushima <fukusima@goto.info.waseda.ac.jp>

Number of commits found: 27
As previously announced, remove lang/ruby-python. Its dependencies are
broken, and it relies on an antique version of Python. Per the maintainer,
broken, and it relies on an antique version of Python. Per the manintainer,
"Ruby now has hundreds of native libraries, and no one would need or want
to use ruby-python to borrow libraries from python."
Mark as deprecated: its dependencies are broken, and it relies on
an antique version of Python. Per the maintainer (knu): "Ruby now
has hundreds of native libraries, and no one would need or want to use
ruby-python to borrow libraries from python."
Add SIZE data.
Submitted by: trevor
BROKEN: Broken dependency
Set USE_PYTHON explicitly now that PYTHON_VERSION no longer implies it.
De-pkg-comment.
Use RUBY_MOD*.
cd dir && command -> cd dir; command
Bump the PORTREVISION's of the ports which install architecture dependent ruby
modules, due to the RUBY_ARCH change I've just committed.
Build and use libpython without threads support and make this work fine on
both 4-STABLE and 5-CURRENT.
Add %%PORTDOCS%%.
Some style fixes in the lang category (usual round of spaces -> tabs)
The previous problem was found to be due to mkmf.rb's bug. Now fixed.
Link against libreadline. I wonder from when and why it came to require
linking libreadline... ?-(
Convert category lang to new layout.
Now bsd.ruby.mk is automatically included by bsd.port.mk when USE_RUBY or
USE_LIBRUBY is defined, individual ruby ports no longer need to include it
explicitly.
Improve the configure script and add a knob to use another version of Python
than 1.5. (Currently this module only works with 1.5 though)
Update to 0.3.3. This should fix a problem with Ruby 1.6.
Set PYTHON_VERSION=python1.5 to depend on Python 1.5.
Fix build with Ruby 1.4. Do not use PKGNAMEPREFIX in DISTNAME.
Update with bsd.ruby.mk.
Make all these Ruby related ports belong also in the newly-added "ruby"
virtual category.
This port surely belongs to the virtual category "python".
Do The Right Thing. (R)
Set DIST_SUBDIR=ruby for all these Ruby ports to stop distfile namespace
pollution.
Follow our hier(7) policy: share/doc/ruby/*/examples -> share/examples/ruby/*
Add Ruby related ports.
Hey, basically im trying to create a Kill/Death ratio program where the user types x number of kills and y number of deaths and this creates an answer for ratio " :1" for example " how many kills ? 3388 " "How many deaths? 1237" "Your ratio is 12.129:1" the decimal is needed if possible oh and just one more i need the program to loop [hence the end code, not that it works]
so heres the code im using up to now. and if you need it explaining any more or dont understand please tell me!
// ArenaRatio.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include "Converter.h"

using namespace std;

int x;
int y;

int main()
{
    float x, y, dummy;
    float ratio = .046; // declare variables
    char rerun;

    cout << "Welcome to Easy ArenaRatio finder!\n";
    cout << "\n";
    cout << "\n";
    cout << "\t Please enter the numbers of kills: \n" << endl;
    cin >> x;
    cout << "\t Please enter the numbers of deaths: \n" << endl;
    cin >> y;
    cout << "Your Ratio is: " << converter(x, y) / ratio << ":1" << endl;

    system("PAUSE");

    // prompt user if they wish to run the program again
    cout << "Would you like to input information for another ratio? <y/n>: ";
    cin >> rerun;

    return 0;
}
This is the 16th article in my series of articles on Python for NLP. In my previous article I explained how the N-Grams technique can be used to develop a simple automatic text filler in Python. The N-Gram model is basically a way to convert text data into numeric form so that it can be used by statistical algorithms.
Before N-Grams, I explained the bag of words and TF-IDF approaches, which can also be used to generate numeric feature vectors from text data. Till now we have been using machine learning approaches to perform different NLP tasks such as text classification, topic modeling, sentiment analysis, text summarization, etc. In this article we will start our discussion about deep learning techniques for NLP.
Deep learning approaches consist of different types of densely connected neural networks. These approaches have been proven efficient to solve several complex tasks such as self-driving cars, image generation, image segmentation, etc. Deep learning approaches have also been proven quite efficient for NLP tasks.
In this article, we will study word embeddings for NLP tasks that involve deep learning. We will see how word embeddings can be used to perform simple classification task using deep neural network in Python's Keras Library.
Problems with One-Hot Encoded Feature Vector Approaches
A potential drawback with one-hot encoded feature vector approaches such as N-Grams, bag of words and TF-IDF is that the feature vector for each document can be huge. For instance, if you have half a million unique words in your corpus and you want to represent a sentence that contains 10 words, your feature vector will be a half-million-dimensional one-hot encoded vector where only 10 indexes will have 1. This is a waste of space and increases algorithm complexity exponentially, resulting in the curse of dimensionality.
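The arithmetic behind that claim is easy to check. Using the hypothetical sizes from the paragraph above (a 500,000-word vocabulary, a 10-word sentence) and an assumed 100-dimensional embedding for comparison:

```python
# Rough size comparison: one-hot encoding vs. a dense embedding
# for the same 10-word sentence (sizes are illustrative assumptions).
vocab_size = 500_000
sentence_len = 10
embedding_dim = 100

one_hot_values = vocab_size * sentence_len   # 5,000,000 values, mostly zeros
dense_values = embedding_dim * sentence_len  # 1,000 values, all informative

print(one_hot_values)
print(dense_values)
print(one_hot_values // dense_values)  # the dense form is 5000x smaller
```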
Word Embeddings
In word embeddings, every word is represented as an n-dimensional dense vector. Words that are similar will have similar vectors. Word embedding techniques such as GloVe and Word2Vec have proven to be extremely efficient for converting words into corresponding dense vectors. The vector size is small and none of the indexes in the vector is actually empty.
Implementing Word Embeddings with Keras Sequential Models
The Keras library is one of the most famous and commonly used deep learning libraries for Python that is built on top of TensorFlow.
Keras supports two types of APIs: Sequential and Functional. In this section we will see how word embeddings are used with the Keras Sequential API. In the next section, I will explain how to implement the same model via the Keras functional API.
To implement word embeddings, the Keras library contains a layer called
Embedding(). The embedding layer is implemented in the form of a class in Keras and is normally used as a first layer in the sequential model for NLP tasks.
The embedding layer can be used to perform three tasks in Keras:
- It can be used to learn word embeddings and save the resulting model
- It can be used to learn the word embeddings in addition to performing the NLP tasks such as text classification, sentiment analysis, etc.
- It can be used to load pretrained word embeddings and use them in a new model
In this article, we will see the second and third use-case of the Embedding layer. The first use-case is a subset of the second use-case.
Let's see how the embedding layer looks:
embedding_layer = Embedding(200, 32, input_length=50)
The first parameter in the embedding layer is the size of the vocabulary, or the total number of unique words in a corpus. The second parameter is the number of dimensions for each word vector. For instance, if you want each word vector to have 32 dimensions, you will specify 32 as the second parameter. And finally, the third parameter is the length of the input sentence.
The output of the word embedding is a 2D vector where words are represented in rows, whereas their corresponding dimensions are presented in columns. Finally, if you wish to directly connect your word embedding layer with a densely connected layer, you first have to flatten your 2D word embeddings into 1D. These concepts will become more understandable once we see word embedding in action.
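Under the hood, an embedding layer is just a lookup table: a (vocabulary size x dimensions) matrix of trainable weights, where looking up a word id returns its row. A minimal NumPy sketch (with random, untrained weights) shows the shapes involved for the `Embedding(200, 32, input_length=50)` example above:

```python
import numpy as np

# The embedding table: one 32-dimensional row per word id (random here;
# Keras would learn these values during training).
rng = np.random.default_rng(0)
vocab_size, dim, input_length = 200, 32, 50
table = rng.normal(size=(vocab_size, dim))

# A sentence is a sequence of word ids; embedding it is row lookup.
sentence_ids = rng.integers(0, vocab_size, size=input_length)
embedded = table[sentence_ids]      # shape (50, 32): the 2D output
flattened = embedded.reshape(-1)    # shape (1600,): after Flatten()

print(embedded.shape, flattened.shape)
```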
Custom Word Embeddings
As I said earlier, Keras can be used to either learn custom word embeddings or to load pretrained word embeddings. In this section, we will see how the Keras Embedding layer can be used to learn custom word embeddings.
We will perform a simple text classification task that uses word embeddings. Execute the following script to import the required libraries:

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Embedding

Next, we need to define our dataset. We will be using a very simple custom dataset that contains short reviews of movies. The following script creates our dataset as a list of strings named corpus, with the positive reviews listed first and the negative reviews after them:

corpus = [
    # ... 8 short positive movie reviews ...
    # ... 8 short negative movie reviews ...
]
Our corpus has 8 positive reviews and 8 negative reviews. The next step is to create the label set for our data.
sentiments = array([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0])
You can see that the first 8 items in the sentiment array contain 1, which corresponds to positive sentiment. The last 8 items are zeros, which correspond to negative sentiment.
Earlier we said that the first parameter to the Embedding() layer is the size of the vocabulary, or the number of unique words in the corpus. Let's first find the total number of words in our corpus:
from nltk.tokenize import word_tokenize

all_words = []
for sent in corpus:
    tokenize_word = word_tokenize(sent)
    for word in tokenize_word:
        all_words.append(word)
In the script above, we simply iterate through each sentence in our corpus and then tokenize the sentence into words. Next, we iterate through the list of all the words and append the words into the
all_words list. Once you execute the above script, you should see all the words in the
all_words list. However, we do not want duplicate words.
We can retrieve all the unique words from a list by passing the list into the
set function, as shown below.
unique_words = set(all_words)
print(len(unique_words))
In the output you will see "45", which is the number of unique words in our corpus. We will add a buffer of 5 to our vocabulary size and will set the value of vocab_length to 50:

vocab_length = 50
The Embedding layer expects the words to be in numeric form. Therefore, we need to convert the sentences in our corpus to numbers. One way to convert text to numbers is by using the
one_hot function from the
keras.preprocessing.text library. The function takes a sentence and the total length of the vocabulary and returns the sentence in numeric form.
embedded_sentences = [one_hot(sent, vocab_length) for sent in corpus]
print(embedded_sentences)
In the script above, we convert all the sentences in our corpus to their numeric form and display them on the console. The output looks like this:
[[31, 12, 31, 14, 9], [20, 3, 20, 16, 18, 45, 14], [16, 26, 29, 14, 12, 1], [16, 23], [32, 41, 13, 20, 18, 45, 14], [15, 28, 16, 43], [7, 9, 31, 28, 31, 9], [14, 12, 28, 46, 9], [4, 22], [5, 4, 9], [23, 46], [14, 20, 32, 14], [18, 1, 26, 45, 20, 9], [20, 9, 20, 4], [18, 8, 26, 34], [20, 22, 12, 23]]
You can see that our first sentence contained five words, therefore we have five integers in the first list item. Also, notice that the last word of the first sentence was "movie" in the first list item, and we have digit 9 at the fifth place of the resulting 2D array, which means that "movie" has been encoded as 9 and so on.
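Under the hood, one_hot doesn't count word frequencies; it hashes each word into the range [1, vocab_length). A small stand-in (toy_one_hot is hypothetical and uses Python's built-in hash; Keras's implementation follows the same hashing-trick idea) shows why two different words can collide on the same index, which is one reason to leave headroom in vocab_length:

```python
# Toy stand-in for Keras's one_hot: hash each word into [1, vocab_length).
def toy_one_hot(sentence, vocab_length):
    return [hash(w) % (vocab_length - 1) + 1 for w in sentence.lower().split()]

ids = toy_one_hot("the movie was great", 50)
print(ids)  # four integers, each between 1 and 49; exact values vary per run
assert len(ids) == 4
assert all(1 <= i <= 49 for i in ids)
```

Because the mapping is a hash, the same word always gets the same index within a run, but unrelated words may share an index if the vocabulary size is too tight.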
The embedding layer expects sentences to be of equal size. However, our encoded sentences are of different sizes. One way to make all the sentences of uniform size is to increase the length of all the sentences and make it equal to the length of the largest sentence. Let's first find the largest sentence in our corpus and then we will increase the length of all the sentences to the length of the largest sentence. To do so, execute the following script:
word_count = lambda sentence: len(word_tokenize(sentence))
longest_sentence = max(corpus, key=word_count)
length_long_sentence = len(word_tokenize(longest_sentence))
In the script above, we use a lambda expression to find the length of each sentence. We then use the max function to return the longest sentence. Finally, the longest sentence is tokenized into words and the number of words is counted using the len function.
Next, to make all the sentences of equal size, we will add zeros to the empty indexes that will be created as a result of increasing the sentence length. To append the zeros at the end of the sentences, we can use the
pad_sequences method. The first parameter is the list of encoded sentences of unequal sizes, the second parameter is the size of the longest sentence or the padding index, while the last parameter is
padding where you specify
post to add padding at the end of sentences.
Execute the following script:
padded_sentences = pad_sequences(embedded_sentences, length_long_sentence, padding='post') print(padded_sentences)
In the output, you should see sentences with padding.
[[31 12 31 14 9 0 0] [20 3 20 16 18 45 14] [16 26 29 14 12 1 0] [16 23 0 0 0 0 0] [32 41 13 20 18 45 14] [15 28 16 43 0 0 0] [ 7 9 31 28 31 9 0] [14 12 28 46 9 0 0] [ 4 22 0 0 0 0 0] [ 5 4 9 0 0 0 0] [23 46 0 0 0 0 0] [14 20 32 14 0 0 0] [18 1 26 45 20 9 0] [20 9 20 4 0 0 0] [18 8 26 34 0 0 0] [20 22 12 23 0 0 0]]
You can see zeros at the end of the padded sentences.
Now we have everything that we need to create a sentiment classification model using word embeddings.
We will create a very simple text classification model with an embedding layer and no hidden layers. Look at the following script:
model = Sequential() model.add(Embedding(vocab_length, 20, input_length=length_long_sentence)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid'))
In the script above, we create a
Sequential model and add the
Embedding layer as the first layer to the model. The length of the vocabulary is specified by the
vocab_length parameter. The dimension of each word vector will be 20 and the
input_length will be the length of the longest sentence, which is 7. Next, the
Embedding layer is flattened so that it can be directly used with the densely connected layer. Since it is a binary classification problem, we use the
sigmoid function as the loss function at the dense layer.
Next, we will compile the model and print the summary of our model, as shown below:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) print(model.summary())
The summary of the model is as follows:
Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, 7, 20) 1000 _________________________________________________________________ flatten_1 (Flatten) (None, 140) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 141 ================================================================= Total params: 1,141 Trainable params: 1,141 Non-trainable params: 0
You can see that the first layer has 1000 trainable parameters. This is because our vocabulary size is 50 and each word will be presented as a 20 dimensional vector. Hence the total number of trainable parameters will be 1000. Similarly, the output from the embedding layer will be a sentence with 7 words where each word is represented by a 20 dimensional vector. However, when the 2D output is flattened, we get a 140 dimensional vector (7 x 20). The flattened vector is directly connected to the dense layer that contains 1 neuran.
Now let's train the model on our data using the
fit method, as shown below:
model.fit(padded_sentences, sentiments, epochs=100, verbose=1)
The model will be trained for 100 epochs.
We will train and test the model using the same corpus. Execute the following script to evaluate the model performance on our corpus:
loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0) print('Accuracy: %f' % (accuracy*100))
In the output, you will see that model accuracy is 1.00 i.e. 100 percent.
Note: In real world applications, train and test sets should be different. We will see an example of that when we perform text classification on some real world data in an upcoming article.
In the previous section we trained custom word embeddings. However, we can also use pretrained word embeddings.
Several types of pretrained word embeddings exist, however we will be using the GloVe word embeddings from Stanford NLP since it is the most famous one and commonly used. The word embeddings can be downloaded from this link.
The smallest file is named "Glove.6B.zip". The size of the file is 822 MB. The file contains 50, 100, 200, and 300 dimensional word vectors for 400k words. We will be using the 100 dimensional vector.
The process is quite similar. First we have to import have to create our corpus followed by the labels.' ]
sentiments = array([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0])
In the last section, we used
one_hot function to convert text to vectors. Another approach is to use
Tokenizer function from
keras.preprocessing.text library.
You simply have to pass your corpus to the
Tokenizer's
fit_on_text method.
word_tokenizer = Tokenizer() word_tokenizer.fit_on_texts(corpus)
To get the number of unique words in the text, you can simply count the length of
word_index dictionary of the
word_tokenizer object. Remember to add 1 with the vocabulary size. This is to store the dimensions for the words for which no pretrained word embeddings exist.
vocab_length = len(word_tokenizer.word_index) + 1
Finally, to convert sentences to their numeric counterpart, call the
texts_to_sequences function and pass it the whole corpus.
embedded_sentences = word_tokenizer.texts_to_sequences(corpus) print(embedded_sentences)
In the output, you will see the sentences in their numeric form:
[[14, 3, 15, 16, 1], [4, 17, 6, 9, 5, 7, 2], [18, 19, 20, 2, 3, 21], [22, 23], [24, 25, 26, 27, 5, 7, 2], [28, 8, 9, 29], [30, 31, 32, 8, 33, 1], [2, 3, 8, 34, 1], [10, 11], [35, 36, 37], [12, 38], [2, 6, 39, 40], [5, 41, 13, 7, 4, 1], [4, 1, 6, 10], [5, 42, 13, 43], [4, 11, 3, 12]]
The next step is to find the number of words in the longest sentence and then to apply padding to the sentences having shorter lengths than the length of the longest sentence.
from nltk.tokenize import word_tokenize word_count = lambda sentence: len(word_tokenize(sentence)) longest_sentence = max(corpus, key=word_count) length_long_sentence = len(word_tokenize(longest_sentence)) padded_sentences = pad_sequences(embedded_sentences, length_long_sentence, padding='post') print(padded_sentences)
The padded sentences look like this:
[[14 3 15 16 1 0 0] [ 4 17 6 9 5 7 2] [18 19 20 2 3 21 0] [22 23 0 0 0 0 0] [24 25 26 27 5 7 2] [28 8 9 29 0 0 0] [30 31 32 8 33 1 0] [ 2 3 8 34 1 0 0] [10 11 0 0 0 0 0] [35 36 37 0 0 0 0] [12 38 0 0 0 0 0] [ 2 6 39 40 0 0 0] [ 5 41 13 7 4 1 0] [ 4 1 6 10 0 0 0] [ 5 42 13 43 0 0 0] [ 4 11 3 12 0 0 0]]
We have converted our sentences into padded sequence of numbers. The next step is to load the GloVe word embeddings and then create our embedding matrix that contains the words in our corpus and their corresponding values from GloVe embeddings. Run the following script:
from numpy import array from numpy import asarray from numpy import zeros embeddings_dictionary = dict() glove_file = open('E:/Datasets/Word Embeddings/glove.6B.100d.txt', encoding="utf8")
In the script above, in addition to loading the GloVe embeddings, we also imported a few libraries. We will see the use of these libraries in the upcoming section. Here notice that we loaded
glove.6B.100d.txt file. This file contains 100 dimensional word embeddings. We also created an empty dictionary that will store our word embeddings.
If you open the file, you will see a word at the beginning of each line followed by set of 100 numbers. The numbers form the 100 dimensional vector for the word at the begining of each line.
We will create a dictionary that will contain words as keys and the corresponding 100 dimensional vectors as values, in the form of an array. Execute the following script:
for line in glove_file: records = line.split() word = records[0] vector_dimensions = asarray(records[1:], dtype='float32') embeddings_dictionary [word] = vector_dimensions glove_file.close()
The dictionary
embeddings_dictionary now contains words and corresponding GloVe embeddings for all the words.
We want the word embeddings for only those words that are present in our corpus. We will create a two dimensional numpy array of 44 (size of vocabulary) rows and 100 columns. The array will initially contain zeros. The array will be named as
embedding_matrix
Next, we will iterate through each word in our corpus by traversing the
word_tokenizer.word_index dictionary that contains our words and their corresponding index.
Each word will be passed as key to the
embedding_dictionary to retrieve the corresponding 100 dimensional vector for the word. The 100 dimensional vector will then be stored at the corresponding index of the word in the
embedding_matrix. Look at the following script:
embedding_matrix = zeros((vocab_length, 100)) for word, index in word_tokenizer.word_index.items(): embedding_vector = embeddings_dictionary.get(word) if embedding_vector is not None: embedding_matrix[index] = embedding_vector
Our
embedding_matrix now contains pretrained word embeddings for the words in our corpus.
Now we are ready to create our sequential model. Look at the following script:
model = Sequential() embedding_layer = Embedding(vocab_length, 100, weights=[embedding_matrix], input_length=length_long_sentence, trainable=False) model.add(embedding_layer) model.add(Flatten()) model.add(Dense(1, activation='sigmoid'))
The script remains the same, except for the embedding layer. Here in the embedding layer, the first parameter is the size of the vacabulary. The second parameter is the vector dimension of the output vector. Since we are using pretrained word embeddings that contain 100 dimensional vector, we set the vector dimension to 100.
Another very important attribute of the
Embedding() layer that we did not use in the last section is
weights. You can pass your pretrained embedding matrix as default weights to the
weights parameter. And since we are not training the embedding layer, the
trainable attribute has been set to
False.
Let's compile our model and see the summary of our model:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) print(model.summary())
We are again using
adam as the optizer to minimize the loss. The loss function being used is
binary_crossentropy. And we want to see the results in the form of accuracy so
acc has been passed as the value for the
metrics attribute.
The model summary is as follows:
Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, 7, 100) 4400 _________________________________________________________________ flatten_1 (Flatten) (None, 700) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 701 ================================================================= Total params: 5,101 Trainable params: 701 Non-trainable params: 4,400 _________________________________________________________________
You can see that since we have 44 words in our vocabulary and each word will be represented as a 100 dimensional vector, the number of parameters for the embedding layer will be
44 x 100 = 4400. The output from the embedding layer will be a 2D vector with 7 rows (1 for each word in the sentence) and 100 columns. The output from the embedding layer will be flattened so that it can be used with the dense layer. Finally the dense layer is used to make predictions.
Execute the following script to train the algorithms:
model.fit(padded_sentences, sentiments, epochs=100, verbose=1)
Once the algorithm is trained, run the following script to evaluate the peformance of the algorithm.
loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0) print('Accuracy: %f' % (accuracy*100))
In the output, you should see that accuracy is 1.000 i.e. 100%.
Word Embeddings with Keras Functional API
In the last section, we saw how word embeddings can be used with the Keras sequential API. While the sequential API is a good starting point for beginners, as it allows you to quickly create deep learning models, it is extremely important to know how Keras Functional API works. Most of the advanced deep learning models involving multiple inputs and outputs use the Functional API.
In this section, we will see how we can implement embedding layer with Keras Functional API.
The rest of the script remains similar as it was in the last section. The only change will be in the development of a deep learning model. Let's implement the same deep learning model as we implemented in the last section with Keras Functional API.
from keras.models import Model from keras.layers import Input deep_inputs = Input(shape=(length_long_sentence,)) embedding = Embedding(vocab_length, 100, weights=[embedding_matrix], input_length=length_long_sentence, trainable=False)(deep_inputs) # line A flatten = Flatten()(embedding) hidden = Dense(1, activation='sigmoid')(flatten) model = Model(inputs=deep_inputs, outputs=hidden)
In the Keras Functional API, you have to define the input layer separately before the embedding layer. In the input, layer you have to simply pass the length of input vector. To specify that previous layer as input to the next layer, the previous layer is passed as a parameter inside the parenthesis, at the end of the next layer.
For instance, in the above script, you can see that
deep_inputs is passed as parameter at the end of the embedding layer. Similarly,
embedding is passed as input at the end of the
Flatten() layer and so on.
Finally, in the
Model(), you have to pass the input layer, and the final output layer.
Let's now compile the model and take a look at the summary of the model.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) print(model.summary())
The output looks like this:
Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 7) 0 _________________________________________________________________ embedding_1 (Embedding) (None, 7, 100) 4400 _________________________________________________________________ flatten_1 (Flatten) (None, 700) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 701 ================================================================= Total params: 5,101 Trainable params: 701 Non-trainable params: 4,400
In the model summary, you can see the input layer as a separate layer before the embedding layer. The rest of the model remains the same.
Finally, the process to fit and evaluate the model is same as the one used in Sequential API:
model.fit(padded_sentences, sentiments, epochs=100, verbose=1) loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0) print('Accuracy: %f' % (accuracy*100))
In the ouput, you will see an accuracy of 1.000 i.e. 100 percent.
Conclusion
To use text data as input to the deep learning model, we need to convert text to numbers. However unlike machine learning models, passing sparse vector of huge sizes can greately affect deep learning models. Therefore, we need to convert our text to small dense vectors. Word embeddings help us convert text to dense vectors.
In this article we saw how word embeddings can be implemented with Keras deep learning library. We implemented the custom word embeddings as well as used pretrained word embedddings to solve simple classification task. Finally, we also saw how to implement word embeddings with Keras Functional API. | https://stackabuse.com/python-for-nlp-word-embeddings-for-deep-learning-in-keras/ | CC-MAIN-2019-35 | refinedweb | 3,929 | 60.04 |
Web it.
Our goal was to get our Python web services into continuous deployment to our production servers, in a manner as close as possible to what we do with our other code. Our systems team had set a high bar in that area. To be more specific, we wanted to:
- package some opensource Python modules, including some that come with wrapped C and C++ code
- build an application that imports those modules
- configure servers, FreeBSD and Linux, without build tools installed, to run the application
- deploy the app.
Modules, build, configuration, deployment. We think about these things separately, and that helps us operate effectively at a large scale. But there is a danger in mapping out such a multi-part process: it might become unwieldy. Day in and day out, I have a more immediate goal than operating at a large scale: I need to keep my development teams productive. ‘Productive’, in our group, means ‘able to evaluate new ideas and tools quickly’. So I want a frictionless process for developers, but I don’t want them working in a way so different from what we do in production that they will have to do a major redesign, when it’s time to go from the lab to the real world.
In order to have a workable system, I needed to do three things, which have led me to the three recommendations/how-tos to this article:
- Install appropriate versions of Python across systems from dev to prod.
- Adopt a coherent way of packaging and using Python modules
- Develop a Python-specific module of our push tool.
Configuration management is also an important component of the total system, but for that we are using an off-the-shelf tool, Puppet, and there’s nothing particular to Python about the way we use it, so I will not be describing that in any detail.
Throughout this article I will be referring both to Puppet and to our homegrown Wayfair push tool, which we use for code deployment. In smaller-scale operations I use Fabric for both of these purposes. Fabric is written in Python, and it’s a great place to start for this kind of thing. If you can already use ssh and install a Python module, Fabric takes about 5 minutes to learn. If your infrastructure grows to the point where Fabric doesn’t work well any more, congratulations! For configuration management you can choose from Puppet, Chef, and other tools, and there are deployment-tool frameworks out there as well.
Choosing a version of Python and installing it
I believe these instructions would work pretty well for all 2.x and maybe 3.x Pythons, but it seems like a no-brainer to me to settle on Python 2.7. It is in the sweet spot right now for availability of quality opensource libraries and mature testing components. 2.6 is relatively deficient in the latter, and the good stuff does not all work in 3.x.
I don’t believe there’s any need for a how-to, for installing Python 2.7 on Debian, Ubuntu, MacOS, FreeBSD, and most other unices. However, if you’re using a Linux that packages software with rpm/yum, such as any Redhat derivative, and the system Python is version < 2.7, you have a problem because you cannot upgrade Python without wrecking yum. In that case, I recommend this srpm for CentOS 5 by Nathan Milford, which does not interfere with the system Python or yum. We did this on our CentOS boxes, and we now have the Python we want at /usr/local/bin/python, happily coexisting with the /usr/bin/python that yum needs. The modifications to his .spec file were very slight, to get it to work on CentOS/RHEL 6.x. Here’s the patch file:
72a73,74
> Provides: python-abi = %{libvers}
> Provides: python(abi) = %{libvers}
When we’re ready to put a new version of Python into production, we definitely want an OS-level package for it. On the other hand, *after* Python is installed, although we *could* build setuptools the way he suggests, as an rpm, we have chosen to build and deploy a setuptools egg, just like all the other modules. Make sure you use the right Python to do that, if you have more than one installed! The setuptools guys, realizing they would otherwise have a chicken-before-the-egg problem, make an eggsecutable egg, so you can just run it from the shell.
I strongly prefer to build and install everything besides Python itself as an egg, not an OS-level package: setuptools, virtualenv, pip, the MySQL driver, everything. The reason is simple. I and some of my developers need to tinker with Python compilation options and point-release versions, and experiment with major version upgrades. We want to be able to compile a new Python on a development machine, try out a new feature, run a test suite on our existing code to see how much of a big deal an upgrade is going to be, etc. While we’re doing that, we do not want to get bogged down in debs, rpms, or pkgs. And if we had to build those types of packages for every module we’re using, it would take us much longer than it does with eggs. I don’t even want to build a Python rpm for that kind of experiment, let alone build rpms for a dozen modules and install them as root. I can avoid all of that with the procedures outlined below.
Python module packaging
Python module packaging is a hodgepodge, a dog’s breakfast, an embarrassment. Let’s give a brief history:
- At first, no uniform packaging at all
- Python 2: ‘distutils‘.
- Setuptools, building on distutils and providing the ‘easy_install’ command, which works with either source or pre-built binary/zip (egg-file) distributions. But there is no ‘uninstall’ command. “They must be joking!” They are not.
- Pip, another distutils derivative, and a ‘replacement’ for setuptools, in many ways superior to it, with ‘uninstall’ and version checking. But it can’t handle egg files. “They must be joking!” They are not.
- Distribute, a fork of setuptools, allegedly handling the Python 2-3 transition better. This looks very promising, but I’m not quite sure where they’re headed with their roadmap. Distribute guys: please make sure we can always install from egg files, even non-eggsecutable ones. Not everybody wants gcc on production servers! And I hope when you say ‘easy_install is going to be deprecated ! use Pip !’, you don’t mean you’re losing egg support.
This list does not even address the question of how to roll C and C++ libraries into a Python module. There are a *lot* of ways to do that. We’ll gloss over that a bit and say that Cython is currently our favorite way, and we want to start a movement of winning hearts and minds over to using it for all libraries. If the hearts-and-minds approach doesn’t work, I’m thinking cajoling, bribery, public shaming, any means necessary. But that doesn’t really matter for the purpose under discussion. We can use Python-C(++) hybrids whether or not they are built with Cython.
Here is our process for downloading a 3rd-party module from PyPI or elsewhere, and incorporating it into our build and deployment process. For most libraries, it takes about 2 minutes. Sometimes there are dependencies to sort out, but even for the hairiest science projects, we’re usually done in a short time.
First, download something, e.g. python-dateutil-2.1.tar.gz, from PyPI or wherever its publicly available source code lives. Then do:
tar zxvf python-dateutil-2.1.tar.gz
cd python-dateutil-2.1
ls setup.py
echo $?
If that shows ‘0’ (i.e. no errors, setup.py is present) then we can almost certainly use a distutils derivative; if something else (an exit code >0), then not. Basic distutils support is almost always present, but even the edge cases do not pose much of a problem in my experience, at least on widely used platforms. Now do this:
python setup.py bdist --help-formats | grep egg; echo $?
If that shows ‘0’, proceed. If something else, take a handy file we have checked into our source code repository, and which is conventionally called ‘setupegg.py’, and copy it into your working directory. This file or something like it is distributed with a lot of Python projects. Its contents are as follows:
#!/usr/bin/env python
"""Wrapper to run setup.py using setuptools."""
import setuptools
execfile('setup.py')
If you don’t know what /usr/bin/env is, look it up and start using it. It will help with ‘virtualenv’ later on.
Now do this ($setupfile is either setup.py or setupegg.py, depending on the previous steps):
python $setupfile bdist_egg
In a new ‘dist’ subdirectory, there should now be a file with one of two types of names:
- ${MODULENAME}-${MODULEVERSION}-py${PYTHONVERSION}.egg, for 100%-pure Python modules, or
- ${MODULENAME}-${MODULEVERSION}-py${PYTHONVERSION}-${OS}-{ARCHITECTURE}.egg, for modules that link to C or other code that makes shared objects and the like.
Some examples are python_dateutil-2.1-py2.7.egg (pure Python), numpy-1.6.1-py2.7-macosx-10.4-x86_64.egg, psycopg2-2.4.5-py2.7-linux-x86_64.egg, Twisted-12.0.0-py2.7-freebsd-9.0-RELEASE-amd64.egg.
Individual module packagers do not always do this in the exact standard way. You may have to read the README, INSTALL or BUILD files to figure out how to build an egg and supply the dependencies. In only one case, in all the libraries I have evaluated and wanted to install in the last year, did I have to modify setup.py or any other file, in order to build an egg I wanted. I’ll be submitting a simple patch to that author pretty soon.
So now our module builds, and maybe it’s a candidate drop-in replacement, and major improvement, for some code that is very important to us. Does it run? That’s an easy question to answer… if you have an effective set of automated tests. That’s a big if. And of course if you bite off more than you can chew, you may become mired in tests that fail but actually aren’t a problem. And then you’ll start ignoring them, and by then your good intentions will have taken you well down the road to Hell. But for Heaven’s sake, do some testing: no compiler has your back in this environment. If you’re not doing automated testing already, just get started, with the most basic test we can think of: can we do all the imports our code is trying to do? I have a script that finds all the import statements in a project, and attempts to run them in the target environment. If that returns an error, the build fails or the push is aborted.
If you want to be able to push code that imports your new module to production without any additional fuss, you’ll want to build the module on a machine that’s identical, dependencies-outside-Python-wise, to your production servers, with the possible exception that you have build tools, header files, etc., on this machine, and not in production. I make a point of building for all our platforms, when I make a module available for the rest of the group to use. I also make a note of lower-level dependencies, and have Puppet ensure that the packages are installed on the relevant boxes. To stick with the yum-oriented example, if we’re doing ‘yum install atlas-devel’ in our build environment, we will do ‘yum install atlas’ in production.
Now we have our egg. So we copy it into a directory where all the other eggs live. Since the namespace of Python projects, unlike say the Java project world of the Maven repositories, is flat rather than hierarchical, a single directory is the right thing here. Then we (deep breath!) commit it to version control. Yes, I know, built code in version control: no good! If you can’t stand it, set up a build server, commit a script that does the exact steps for building the module, and run that. In my defense, I’ll just say that I got to a rock-solid production environment faster this way than I otherwise would have, and we can always make our process dogmatically correct later if we wish.
Then we rsync the egg directory over to a web server on our internal network, that all our servers can see. Jenkins is a good tool for watching checkins and doing something like this, but you can just write a cron job that does ‘svn update’ or ‘git pull’ or whatever, and expose that with a web server, or rsync from there.
A word about PyPI and EggBasket, which are, respectively, the worldwide Python community library of publicly available projects, and a caching proxy for the same: I’m sure they can be made to work. When I was working primarily in Java, this was exactly the way I used to do things, with a combination of the publicly available Maven repositories, ant/ivy/maven, and Archiva. But I’m passing for now. To start doing that, I would need to convince myself that my production deployments can actually depend on my local EggBasket, and that such a setup would not download unverified things from the internet during a build or push. For now, I don’t want PyPI on my critical path, any more than I wanted CPAN there back in the day.
Deploying Python code into an environment with supporting modules loaded.
Now we have all the eggs we need in a place where we can find them, and our puppetmasters and puppet daemons are busily preparing the relevant servers for them. We are ready to modify our Python code, import the new module, do some development, and prepare a release for production. We can’t have our servers getting confused about which libraries, or which versions of libraries, are installed in the environment where they want to run the code. We will also need to be able to roll back in case it all goes awry. For this we create a virtual environment, with the tool virtualenv. Creating a new environment with the right libraries goes something like this:
virtualenv v1234
source v1234/bin/activate
easy_install
(several more like that)
Then we bring the first server in the pool out of the load balancer, bring the service down, create a new virtualenv with the right libraries, switch a symbolic link from the old one to the new one, bring the service back up, run some tests, and put it back in the load balancer. If that all works, we do the same for the next one, and repeat until finished. If a mixed state is undesirable, we can do half-and-half instead of a rolling deployment.
We happen to be using Tornado, and playing around with Twisted and gunicorn, as the conduit for delivering our payloads to our front end, but the techniques I have described work just as well for Django-based web sites and all manner of Python applications.
That’s it. We’ll write some more in future, about some of the science projects we have been able to deploy in this way. | https://tech.wayfair.com/2012/07/webops-for-python-part-2-the-how-to/ | CC-MAIN-2020-29 | refinedweb | 2,606 | 71.04 |
Re: How to Define a global const class variable in MFC?
- From: Joseph M. Newcomer <newcomer@xxxxxxxxxxxx>
- Date: Sun, 31 Aug 2008 10:34:48 -0400
See below..
On Sat, 30 Aug 2008 20:30:01 -0700, Electronic75 <Electronic75@xxxxxxxxxxxxxxxxxxxxxxxxx>
wrote:
Thanks a lot Joe and Doug, your posts were informative for me.
One thing that I didn't understand thoroughly is header files in VC++.
In C it is supposed that all variables are declared in a header
file (declared, not defined) and then we define the variable in a source file.
But in VC++ it seems that we have to both declare and define variables in the
header file. For global variables we have to declare them in the header file with
the extern keyword and define them in a source file, which is perfectly consistent with
C style, but class member variables are both declared and defined in the
header file. Why is that?
No. You are confusing declaration and definition.
If I write
class A {
public:
int x; // 1
static int y; // 2
static const int z; // 3
static const int w = 3; // 4
B stuff[w]; // 5
};
then I have declared:
1. A class instance variable x. Every instance of the class has a different
address for the value x. Every reference to x is computed relative
to the current this-> pointer or relative to the class instance pointer.
I can only reference this variable via a class reference, e.g.,
A t;
t.x;
or
void SomeFunction(A * b)
{
... b->x ...
}
or
void A::OtherFunction()
{
... x ...
}
2. A static class variable y. I must later, in some module, write
int A::y;
Every reference to this variable uses exactly the same address, e.g.,
A() { y++; }
~A() { y--; }
counts the number of instances of A [this is not thread safe, but
let's deal with one complication at time]. If at any time I want to
see the number of instances, I could write, anywhere, something
that looked like this:
TRACE(_T("There are %d instances of A\n"), A::y);
3. A static class constant z. I am required to define it later, as
const int A::z = 3;
but I cannot use that value in my class declaration. I can use
this public variable in any context I want, just by naming A::z.
4. A static class constant w. This both declares and defines it, and
allows me to use it later in the class declaration (although why I
would declare an array of fixed size in C++ escapes me...). I
can use this public variable in any context I want, just by naming
A::w.
5. An array of compile-time fixed size where the number of elements
is a named constant that is scoped to the class namespace.
I could write
void A::AnyFunction()
{
for(int i = 0; i < w; i++)
Process(stuff[i]);
}
or even
void RandomClass::AnyThing(A * data)
{
for(int i = 0; i < A::w; i++)
Process(data->stuff[i]);
}
Note that except for case (4), it behaves as you expect.
The use of extern is used when you have a non-method declaration of a variable that is not
part of a class.
extern A SomeData;
which, just like C, requires that eventually some module declare
A SomeData;
****
********
This is a completely scare concept. What is a static char * variable doing here?joe you said "a fixed size array of anything is dangerous" do you mean only
Generally, as soon as you write 'char', you are writing horribly obsolete code. A fixed
size array of anything is dangerous. Note that this assumes that whatever this is, it is
known to all instances.
****
pointers to arrays or arrays themselves because I used to think otherwise for
defining arrays. I thought if I do not specify size of an array and define
for example UCHAR c[], it will cause compiler error and danger of overwriting
memory.
It depends. If you write UCHAR c[]; (something I would never do except as the last
element of a variable-length struct) it is probably syntactically illegal. If you want to
define an array of unknown size, you would either define a UCHAR * c, and allocate storage
for it (primitive, 1975 C style), which would then require you to free it in the
destructor, or using modern C++ methodologies you would declare
CArray<UCHAR> c;
or
std::vector<UCHAR> c;
or
CByteArray c;
and let the compiler handle all the deallocation. Since there is rarely any need to
declare a fixed-size array of anything in modern C++ programming style.
****
about char being old style, I didn't understand why. it is one of standard****
keywords of c why should it be old style? I also program microcontrollers in
plain C and there because limitation of memory using variables with smaller
size are more desirable. why it is old style because computer has a lot of
RAM and i could define an int even if I didn't need that?
So what? Why does being a "standard keyword of c" make it justifiable? Modern practice
says that Unicode is the way to go, and at the very least, Unicode-aware. Therefore, the
use of 8-bit char data type is obsolete in modern practice. What you do in a
microcontroller and what you do writing a Windows application are completelyl different.
It has nothing to do with the amount of RAM; it has to do with creating apps that can be
localized to any language. RAM is pretty much an irrelevant consideration compared to the
cost of retrofitting Unicode later. So get out of the habit of thinking 'char' is a
useful data type in writing WIndows apps. It is used in incredibly rare and exotic
situations, most typically, talking to embedded controllers and handling 8-bit data
streams over network connections (which are often encoded 8-bit data in UTF-8). It is not
a good habit to use in writing modern code.
****
********
If GetCollectionName is returning the m_sCollectionName, then m_sCollectionName should
probably be protected.
You don't need (void), () says the same thing.
****
you are absolutely write, thanks
Not knowing what is in it makes it hard to guess what its relationship to anything elsethanks, I learned that.
might be.
****
Second You can see in this line****
CArray <CCommandCollection> m_aCommandCollect;
I have defined an array of the same type of class. Compiler didn't get me
error for that(if I remove global definition of CCommandCollection variable
program compiles smoothly) but I suspect if it can cause trouble.
Why? It is perfectly valid syntax and has perfectly valid semantics which are
well-defined. Why would it cause trouble?
****
Since you did not include the contents of the other header file, there's no way to tell.
You did not show what is there, or give the actual compiler error messages.
I can write the whole header files and source files but I thought it will be
very big and boring to readers. I will try to change some of my code in light
of what I've learn in this thread and if I still couldn't define a global
class variable I will post all of the codes.
thank you for your kindness and being helpful.
best wishes,
Key here is that you HAVE to show the error messages AS ISSUED BY THE COMPILER, and show
us exactly what lines they occur on (some people show a message that says "error on line
282" but only show us a dozen lines, and we have to guess which one is line 282). In
addition, you must show all declarations of variables involved in the line that has the
error, which means you must show us the header files (you often do not need to show all
the details of the class contents, but we need enough to be able to understand what the
declarations are so we can figure out why the error message, that SPECIFIC error message,
came out). You need to show several lines before the error message as well (it is
surprising how many people make a syntax error on the previous line which isn't caught
until the next line, but they show us the line in question with no context)
joe
****
Joseph M. Newcomer [MVP]
Web:
MVP Tips:
.
- Next by Date: Re: Capturing USB data?
- Next by thread: Re: How to Define a global const class variable in MFC?
- Index(es): | http://www.tech-archive.net/Archive/VC/microsoft.public.vc.mfc/2008-09/msg00001.html | crawl-002 | refinedweb | 1,413 | 68.81 |
Table of Contents
- Import data from a file
- Connecting to servers
- Scraping data from the web
- Importing data from APIs
Importing data from a file
CSV files
import pandas as pd
df = pd.read_csv('/Users/abc/Documents/folder/file.csv')
If somehow there is an error reading the file, try tweaking these options: header=None, sep='delimiter', skiprows=2 (this skips the first two rows, which might not be data), or error_bad_lines=False (this skips bad lines).
To read in null values correctly, replace
'NAN' with the current shortform for your NA values.
To read in dates as the datetime format, replace
'date_column' with the name of your column containing dates.
To set the first column as the index, use index_col=0.
import pandas as pd
df = pd.read_csv('file.csv', na_values='NAN', parse_dates=['date_column'], error_bad_lines=False, header=None, index_col=0)
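These options can be sanity-checked on an in-memory CSV without touching the disk; a minimal sketch (the column names and data here are made up for illustration):

```python
import io
import pandas as pd

# Hypothetical CSV with a custom NA marker and a date column
raw = io.StringIO(
    "date_column,value\n"
    "2020-01-01,10\n"
    "2020-01-02,NAN\n"
)

df = pd.read_csv(raw, na_values='NAN', parse_dates=['date_column'], index_col=0)
print(df['value'].isna().sum())  # the 'NAN' cell is now a real missing value
print(df.index.dtype)            # the index was parsed as datetimes
```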
Excel files
The default is sheet_name=0, which reads in only the first sheet of the workbook. To read in all sheets, use sheet_name=None, which returns all sheets as a dictionary of dataframes. You can also pass, say, [1,2,5] to return the second, third and sixth sheets as a dictionary of dataframes.
To read in null values correctly, replace
'NAN' with the current shortform for your NA values.
import pandas as pd
df = pd.read_excel('file.xlsx', na_values='NAN', sheet_name=0)
JSON files
import pandas as pd
df = pd.read_json('file.json')
df
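To try read_json without a file on disk, the same call accepts an in-memory buffer; a minimal sketch with made-up data:

```python
import io
import pandas as pd

# A tiny JSON payload standing in for file.json (illustrative data only)
raw = io.StringIO('[{"name": "a", "score": 1}, {"name": "b", "score": 2}]')
df = pd.read_json(raw)
print(df.shape)  # → (2, 2)
```

Each object in the top-level list becomes one row, and the keys become columns.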
Connecting to servers such as MSSQL
Check this out:
Using SQLalchemy to create an engine to connect to SQLite/ PostgreSQL is also possible.
Importing/scraping data from the web
Finance data (using Pandas Datareader)
Pandas Datareader is able to easily extract data from some sources, including: Yahoo!Finance, Google Finance, World Bank, and more. Find the full list here
from pandas_datareader.data import DataReader
from datetime import date

# Set your variables
start = date(YYYY, MM, DD)  # for example, 2010-1-1
end = date(YYYY, MM, DD)    # default date is today
data_source = 'google'      # the source of your data; find the full list from the above link
ticker = 'AAPL'             # the ticker symbol of your stock

stock_prices = DataReader(ticker, data_source, start, end)
If you use a list for the ticker it works as well but the result is a panel (3d array). Unless you are very comfortable with unstacking, I would advise against using a list.
Tables (using Pandas)
This automatically converts all tables in the web page at the given URL into dataframes. In this example I have saved them all to dfs. To select a particular table after this, say I want the fifth table, I can call dfs[4] (the list is zero-indexed).
import pandas as pd

url = 'http://'
dfs = pd.read_html(url)
To loop over many urls, I break the url up:
import pandas as pd

front_url = ""
end_url = "&components=country:SG&key=XXXX-XXXXX"
for row in df['Address']:
    url = front_url + row.replace(' ', '+') + end_url
    dfs = pd.read_html(url)
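Hand-rolling row.replace(' ', '+') only handles spaces; the standard library's urllib.parse escapes every special character. A sketch of the same URL-building loop, with placeholder base URL and made-up addresses:

```python
from urllib.parse import quote_plus

front_url = ""  # the API base URL would go here
end_url = "&components=country:SG"
addresses = ["10 Downing Street", "1 Raffles Place #02-01"]

# quote_plus turns spaces into '+' and percent-encodes everything else,
# e.g. '#' becomes '%23', which a plain replace() would have left broken
urls = [front_url + quote_plus(addr) + end_url for addr in addresses]
print(urls[0])
```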
Text (using BeautifulSoup)
import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'http://'
resp = requests.get(url)
html_doc = resp.text
soup = BeautifulSoup(html_doc, 'html.parser')
All the information is now in the variable soup. If I want to extract certain information, I can do so like below:
title = soup.title                   # gives the title, including the tags
title.text.strip()                   # strips the tags away, leaving the text
box = soup.find(class_="graybox")    # finds the element; works for many things including class, p, etc.
links = soup.find_all('a')           # finds all the links
For more ways to work the soup, go here
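If installing BeautifulSoup is not an option, the standard library's html.parser can do basic extraction; a minimal stdlib-only sketch (the HTML string is made up) that collects link targets:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags as the parser walks the document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

html_doc = '<html><body><a href="/one">One</a><a href="/two">Two</a></body></html>'
parser = LinkCollector()
parser.feed(html_doc)
print(parser.links)  # → ['/one', '/two']
```

This is more work than soup.find_all('a'), but it has no dependencies.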
Importing data from APIs
I usually request the API to return the information in JSON format. Hence, I read it just as I would a JSON file. Below is an example to loop over numerous urls
front_url = ""
end_url = "&components=country:SG&key=XXXX-XXXXX"
for row in df['Address']:
    url = front_url + row.replace(' ', '+') + end_url
    address = pd.read_json(url)
    latitude = address['results'][0]['geometry']['location']['lat']
    longitude = address['results'][0]['geometry']['location']['lng']
If you struggle to decipher your JSON, some people find it helpful to go here. I find it easier to just unwrap it layer by layer… | https://www.yinglinglow.com/blog/2020/01/26/Getting-Data-In | CC-MAIN-2021-49 | refinedweb | 681 | 56.25 |
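Unwrapping the layers can be rehearsed offline on a stub of the response; a sketch whose structure mirrors the chained lookups above (the coordinates are made up):

```python
import json

# A stub shaped like the geocoding payload used above
payload = json.loads("""
{
  "results": [
    {"geometry": {"location": {"lat": 1.3521, "lng": 103.8198}}}
  ]
}
""")

# Peel one layer per subscript: list index first, then nested dict keys
latitude = payload['results'][0]['geometry']['location']['lat']
longitude = payload['results'][0]['geometry']['location']['lng']
print(latitude, longitude)  # → 1.3521 103.8198
```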
and I get the following error
Hello @cinhaw,
Could you please state what error you are getting and share your snippet so that we can help you out!
Thanks!
but the output is concatenating the values instead of summing them. Ex. Envi("a") = 10, Envi("b") = 20, c = Envi("a") + Envi("b"), msgbox c gives 1020. How do I overcome this problem?
Environment values come back as strings, so + concatenates them; convert them to numbers with CInt (or CDbl) before adding:
Environment.Value("MyVariable")=10
A=CInt(Environment.Value("MyVariable"))
Reporter.ReportEvent micPass, "The value of A: ", A
Environment.Value("MyVariable1")=20
B=CInt(Environment.Value("MyVariable1"))
Reporter.ReportEvent micPass, "The value of B: ", B
C=A+B
msgbox C
Re: In one of the scripts the password in the DataTable was encrypted. Can anyone tell me how to decrypt or undo the same?
Answer: There is a tool called Password Encoder that gets installed along with QTP. Go to Start -> Programs -> QuickTest Professional -> Tools -> Password Encoder.
Re: I want to open a Google page without recording a test, and I do not want to use the SystemUtil.Run command either. How do I do this?
Answer: Write it as a generic function, so that whenever it is required we can call it with different URLs:
function navbr(url)
Set IE = CreateObject("InternetExplorer.Application")
IE.Visible = true
IE.Navigate (url)
Set IE = Nothing
end function
or simply:
Set IE = CreateObject("InternetExplorer.Application")
IE.Visible = true
IE.Navigate ""
Re: What is the difference between normal QTP testing and descriptive programming?
When you write a script in QTP you normally store all the objects in the Object Repository. For example:
Browser("XXX").Page("XXX").WebEdit("XXX").Set "XXX"
Here the Browser, Page and WebEdit objects must be stored in the Object Repository; otherwise the statement throws an error. When you store objects in the OR, some of the properties of each object are stored there in order to identify it. In descriptive programming there is no need to store objects in the OR; it means writing and executing your scripts without using the OR. For example:
Browser("Name:=XXX").Page("Title:=XXX").WebEdit("name:=XXX").Set "sss"
Here the properties of the objects are supplied directly in the script, so there is no need to store those objects in the OR.
Or: descriptive programming is where we define the description of an object, create the object, and perform the action on it. The basic syntax of descriptive programming is:
' --- Create the description object ---
Dim oDesc
Set oDesc = Description.Create
' --- Set identification properties and values ---
oDesc("property1").Value = "value1"
oDesc("property2").Value = "value2"
' --- Use and reuse the description object ---
VBWindow(oDesc).Type "Something"
VBWindow(oDesc).Close
' --- Release the description object ---
Set oDesc = Nothing
One more syntax, for a Button object:
Set Obj = Browser("title:=Y").Page("html tag:=x").Button("Name:=h")
If Obj.Exist Then
Obj.Click
End If
The advantages are that we skip the OR concept and so save space, remove machine dependency, and increase the clarity of the script. We can go for descriptive programming whenever we know the object properties, with or without the application. Mainly people go for descriptive programming in a product where the reusability of the script is high and changes to it are minimal.
Re: According to use, how is a virtual object different from Object Spy? If QTP is not learning sub-menus, what should be done? How would we manage this in the Expert View?
Virtual objects are things that look like standard Windows widgets but actually are not. For example, in MS Paint there are widgets that look like buttons, such as the eraser and brush, but they are not real buttons; you can confirm that with Object Spy. You can add a virtual object via Tools -> Virtual Objects -> New Virtual Object.
Re: How to get data line by line from a web element? Answer:
1. Get the value of the web element using GetROProperty.
2. Split that value, using a space " " as the delimiter, and move it into an array.
3. Use the array elements to get each line.
Re: How to send the QTP results file by email (Lotus Notes)? Answer?
Re: When you are running a batch test of 5 scripts and the application crashes in the 2nd script, what happens?
If you are executing them as a batch in your framework, you should execute one function/script before each script execution. The function works like this: if the application's Home link exists, it just clicks the Home link (and your next script continues); else it closes all browsers, launches the application, logs in, and then continues with your next script.
Re: What is the disadvantage of checkpoints in QTP, if any?
Answer: Screen resolution is very important, and sometimes these checkpoints also depend on the system configuration. A checkpoint recorded on one machine may work there but not on another machine, and then the entire script will fail. One more important thing is that we cannot know which property the checkpoint verifies until the script is executed.
Re: What is another way of writing a "Wait" statement in QTP? I don't want to use wait("some number").
Excluding Wait, there are two other methods:
1. Sync:
Browser("micclass:=Browser").Page("micclass:=Page").Sync
2. WaitProperty:
Browser("micclass:=Browser").Page("micclass:=Page").Image("name:=Pic").WaitProperty "Property Name", "Property Value", 30000
This waits, up to the timeout in milliseconds, until the property matches on the next page.
Re: What is the disadvantage of Smart Identification?
Answer: If you enable Smart Identification, then when a particular object description is mismatched it takes much more time to recognize that object, and to find out which object description is mismatched you have to check the QTP results. If you disable Smart Identification, QTP instead throws an error on that particular statement.
Re: How to open an Excel sheet, write into it and save it?
Answer:
Set excel = CreateObject("Excel.Application")
excel.Caption = "Test Caption"
excel.Visible = True
Set doc = excel.Workbooks.Add()
Set selection = excel.Selection
With selection
.Cells(1,1).Value = "Test value"
End With
doc.SaveAs("E:\Excel.xls")
excel.Quit
Re: What are the advantages of merging object repositories in QTP 9.0?
Answer:
1. Suppose you are working on 2 different modules of a project using QTP. We design scripts for each module individually, or 2 different persons work on the 2 modules. Finally, when all the modules have to be integrated, merging the object repositories is very useful and often necessary.
2.
If there are duplicate objects in the 2 repositories, merging them avoids the duplication.
3. Once merged, the entire repository can be transferred as one unit; we can easily move it to another machine if needed.
Re: What is the function of Filter in QC? Give a real-time example.
Answer: The function of Filter in QC is to retrieve the data as per your requirement. For example:
1. If you want any particular defect from a list of 1000 logged defects.
2. You can filter defects entered by some person.
3. You can filter defects assigned to some person.
4. You can filter by the status of a defect, etc.
Re: How to invoke QTP, or any application, through the command prompt without using a VBScript or batch file?
Save this in a VBScript file, e.g. openqtp.vbs:
Set App = CreateObject("QuickTest.Application")
App.Launch
App.Visible = True
and in the command prompt run this script: openqtp.vbs
Alternatively, it is possible to invoke QTP (or any application) without VBScript by simply running its complete path, e.g. Start -> Run -> "C:\Program Files\Mercury Interactive\QuickTest Professional\bin\QTPro.exe"
Re: Hi, I want to close all browsers except one browser in QTP. Can anyone give the code for this?
Answer:
Set browserDesc = Description.Create()
browserDesc("application version").Value = "internet explorer 6"
Set Allbrowsers = Desktop.ChildObjects(browserDesc)
browserCnt = Allbrowsers.Count
For i = 0 To (browserCnt - 2)
Allbrowsers(i).Close
Next
Set Allbrowsers = Nothing
Set browserDesc = Nothing
Re: Can a function return a dictionary object?
Answer: Yes, functions can return a dictionary object:
Dim dicObj
Set dicObj = CreateObject("Scripting.Dictionary")
Public Function getname()
dicObj.Add "name", "Uday Kumar"
Set getname = dicObj
End Function
Set obj = getname
MsgBox(obj.Item("name"))
Re: How many scripts are there in QTP? Please, anyone, answer this question.
Answer: If we develop a test in QTP, then irrespective of the number of Actions, Functions or procedures it contains, the whole Test is called one "Test Script" — only one script per Test, unless you create a new Test.
Re: Examples of user-defined functions? How to write user-defined functions in QTP 9.2?
See, you can write user-defined functions in many ways. For example, a login function:
Public Function Func_Login(strUser, strPassword)
Browser().Page().WebEdit("Name:=UserID").Set strUser
Browser().Page().WebEdit("Name:=Password").SetSecure crypt.Encrypt(strPassword)
Browser().Page().WebButton("Name:=Log In").Click
Absolute path means we have to mention full path in script. We can edit the scripts in expert view. Re: How to load object properties to object repository through scripting.And we can remove OR at runtime.SetTOProperty "text".tsr) or This concept will be called as dynamic handling of OR. Re: where the log files stored when using QTP? plz send answer to me . repositoriescollection.."Cance" print dialog("Login"). bcoz Runtime objects means the Objects available in AUT.vbs" But here if you want to convert this Absolute file path as relative path. editing checkpoint properties. loops.? How to access them and what are the differences between them.Here we can add the repositories in runtime through script.Value("UserName") sPassword = DataTable.GetROProperty("text") it returns "Cance" it is changing the property value during runtime..Activate dialog("Login"). retrieve and edit the runtime objects in Automation Testing using QTP We cannot edit runtime objects in Automation Testing. Example: executefile "E:\project name\libraries\main.error handling. etc. .. Re: What is absolute path and relative path in QTP. And also should associate the function library file where this function is stored. So before calling or executing this function u have to associate Object repository to the Current Action where this function is called. Example: sUser = DataTable. Inserting if conditions. Re: How do we edit the script in QTP.WinButton("OK"). comments. Anybody can explain in detail. Re: How do we Access.add (path of the object repository.WinButton("OK"). we will get one screen with named as 'Run'.vbs" Relative path means no need to give full path just mention file name Example: executefile "main. 
we can only use the properties available for objects and we can change the property/properties available in OR during runtime by using SETTOPROPERTY We can retrieve the property and its value by using GetRoProperty eg: dialog("Login").value("Password") ' Call the Login function for loggin in to application. After clicking on run button . in that if we want we can use the default location otherwise we can select any folder to save the log files. Func_Login sUser. you should do following steps Tools menu-->options-->folders tab here you have to add this path example "E:\project name\libraries" then you can use direct filename as above mentioned in the relative path example instead of absolute path example.sPassword Note : the function can not contain Object repository.End Function When ever u want to login to application u can call this function with parameter values for User and Password.
when QTP encounters a run time error during running the scripts..getROProperty("name") linksFileName.Page("Yahoo! India").Visible = True qApp.Libraries ' Get the libraries collection object qLibs..Settings.ChildObjects(linkDescObj) For i=0 to noOfLinks.Application") qApp.close Set fileSysObject=nothing Here i am retrieving all the links in Yahoo home page and writing the link names in a text files.Quit Set qLibs = Nothing Set qApp = Nothing First of all you have to create Quick Test object after that open the test with the help of Quick Test Object and also create Libraries object after that you can add ".Save qApp. If the total numbers of links are increased or decreased. Re: Can you give me the code to calculate the total number of Links using the child object in the web page. you can use "Standard Checkpoint".writeline(linkName) Next linksFileName.Re: how can we call an external library file in QTP apart from using the Executefile statement.Test.? is there any other way to calculate number of links with out using the Child objects? Set fileSysObject=createobject("Scripting.Add "C:\Utilities.opentextfile("D:\samplelinkfile.value="Link" Set noOfLinks=browser("Browser").?? Answer Generally. then it will show the differences. do we need to include any statement in the beginning of the script.FileSystemObject") Set linksFileName=fileSysObject.vbs" files.. If you wish to verify the number of links a web page.Resources.count-1 linkName=noOfLinks(i). Click on Record button ->Insert Menu -> Check point ->Standard Checkpoint -> Click on any place in the page and select the page in the Hierarchy and observe the number of links property/value.txt". False ' Open a test Set qLibs = qtApp. 1 ' Add the library to the collection qApp. .Open "E:\Test1".true) Set linkDescObj=description.vbs".?? is there any other way we can call the external library file in QTP? Yes we can associate the library files with out using "Executefile" statement.Launch qApp. 
False.2.Test.Create linkDescObj("micclass"). Re: When we use 'ERR' object to handle the exceptions in the script. it shows the error message in a pop up window and the script execution will stops. But you should use AOM method Example: Dim qApp Dim qLibs Set qApp = CreateObject("QuickTest.
but it is very difficult to maintain(update).folderexists("c:\uday") 'assume this folder doesnt exist 'QTP shows the error message here itself and stops execution. its one time . prog takes less time to execute bcz the object's information is available in the script.s).. b) som = a + b subt = a . After u will get the value u have to split the value.?? why cant we use descriptive programming instead of using the Object repository In descriptive programming the object¶s information should be available.e Addition and subtraction of two no. But it's having two results(i.Number <>0 then msgbox("Here it prints") 'Because of Err object we can give our own error message end if Re: How can we change(increase or decrease)the size of a array variable with out loosing the previous values Answer Dim a(10) is array declared if we want to change the size of array it should be declared with redin as below ReDim A(10) to preserve the contents of array we need to use preserve keyword as below ReDim Preserve A(10) above line is Answer . After u will get this value. Example: Function AddSub(a. u have to split that one for two results by using split function here delimeter is "/". Or writing the script will take more time when writing in DP. maintance of the script will be difficult. then we will use Err object. if "On Error Resume Next" statement doesnt exists If Err.FileSystemObject") fileSysObj. Re: When there is descriptive programming.We have to use "On Error Resume Next" statement before the runtime error is returned. if some of the object properties are changing from build to build u have to update same in script also.??? if so please give me the code for that. The general usage of the Err object is like: On Error Resume Next dim fileSysObj fileSysObj=createobject("Scripting. Answer Function doesn't return more than one value. desc.but through object repository we can easily write the script bcz the object's info is available in that OR.. 
like this we can add more than one result to the return variable(in this example AddSub is the return variable).. by declaring array as above it will preserve the values Re: can a Function return more than one value.b AddSub = som&"/"&subt End Function this function returns only one value. why do we go for Object repository for designing scripts.If you wish to display your own error message and you wish to continue with the script.. but if u are using Object Repository u can update those property values in Object Repository. if you want to get more than one result. we can write function for that.
when we assign the final value with variable that name should be the same name of the function name.? when both has the same functionality.but within functions we cant use actions we call functions in actions . Example: y = som(som(10. Actions: Actions has Object repository.VBS file.e same as function name(example: som = c)] if you don't assign. it doesn't has Object repository. Actions in QTP are used to divide the test into different logical units . function doesn't return any value (example: z = c) Example: Function som(a. we can store Functions in .b) and storing in var(C). Re: What is the difference between Action and Function. like this we can say many examples. It returns more than one value (i. 20). som is the function it returns the sum of two values when we assign that value to variable[i. In the given example. but Action we cann't store like this way. b) c=a+b som = c '(instead of these two step we can write in single step like som = a + b) End Function x = 10 + som(10. we can call this action in two ways 1. call to Existing Action.change. Re: how can we return a value from userdefined function for eg 2 functions in func1 iam getting 2 values(a. x) msgbox y Re: what is the difference between Reusable action and external action? Answer Reusable Action: we can call this action in between Actions in Test and also in between Tests. now i want to pass that var(c) to another func2 give me the script Answer Function returns only one value. or u can handle this in object repository with regular expression. We can declare more than one Output parameter. But Action is not like that we have to mention that as reusable action in Action Properties. when do we choose Action and when do we choose Function? Actions are more specific to tool Example QTP Functions are generic and part of programming language. Within actions we can write functions . 20) msgbox x you can pass this value in any other function. 
Major Differences: Functions: Function returns only one value.e Output parameters). Call copy of Action 2. we can call Function in any Actions in QTP test. .
An external action (an action that is not marked reusable in its test) cannot be called from other tests with Call to Existing Action, but we can call it in other tests by using Call to Copy of Action.

Re: How to load an object repository in QTP during runtime?
Answer: We can add an object repository at runtime in two ways:
1. Using RepositoriesCollection.
Syntax: RepositoriesCollection.Add(Path)
Ex: RepositoriesCollection.Add "E:\OR\ObjRes.tsr"
If you write this syntax in Action1, it automatically adds the object repository to Action1, so there is no need to associate the repository before running (Edit Menu --> Action --> Action Properties --> Associate Repository tab).
2. Using AOM (Automation Object Model).
Ex:
Dim qtAppn
Dim qtObjRes
Set qtAppn = CreateObject("QuickTest.Application")
qtAppn.Launch
qtAppn.Visible = True
qtAppn.Open "E:\Test\Test2", False
Set qtObjRes = qtAppn.Test.Actions("Login").ObjectRepositories
qtObjRes.Add "E:\OR\ObjRes.tsr", 1
The above example adds the object repository ObjRes.tsr to the "Login" action in Test2. Here also there is no need to add the object repository manually in Test2.

Re: How can you find syntax errors or other script errors in your test during the execution of your QTP test (i.e. before the test run finishes)?
Answer: We can find out errors before the test run finishes by using the Err object.
Example:
On Error Resume Next
Err.Raise 6   ' Raise an overflow error.
MsgBox "Error # " & CStr(Err.Number) & " " & Err.Description
Err.Clear     ' Clear the error.

Re: Suppose there are 100 links in a web page and the number of links changes dynamically from time to time. I need code such that every time I can click on the last link of the web page.
Answer: The code below will work for your case.
' Set a description object to identify links.
Set oDesc = Description.Create()
oDesc("html tag").Value = "A"
' Get a collection of links.
Set oLinkCollection = Browser("test").Page("test").ChildObjects(oDesc)
oLinkCollection.Item(oLinkCollection.Count - 1).Click
Or:
Set oDesc = Description.Create
oDesc("micclass").Value = "Link"
Set n = Browser("xxx").Page("xxxx").ChildObjects(oDesc)
objcount = n.Count
Browser("xxx").Page("xxxx").Link("index:=" & objcount - 1).Click
(The index property is zero-based, so the last link has index Count - 1.)

Re: What is the command used to add an object (properties) to an object repository?
Answer: PropertiesColl.Add PropColl(Position)
Example: The following adds the first item from the OtherCollection Properties object to the MyDesc Properties object:
MyDesc.Add OtherCollection(0)

Re: What are the mandatory properties of the WebTable object and the Link object in web testing using QTP?
Answer: For the WebTable object, the mandatory field is "html tag". For a Link object, the mandatory fields are "html tag" and "text". We can change these properties, but make sure that QTP still identifies the object in the application uniquely.

Re: How do we run a test from the 3rd row of the data table in QTP? Leaving the first two rows, we need to test the AUT from the 3rd row to the nth row.
Answer: Directly use the SetCurrentRow function:
For i = 3 To n
    DataTable.SetCurrentRow(i)
    ' do some operations
Next
The above loop starts from the 3rd row, performs your actions and continues up to the nth iteration. Or go to File -> Settings -> Run tab, where you can specify the row from which to start.

Re: What is the difference between global variables and environment variables?
Answer: Environment variables are of two types, system-defined and user-defined. QTP displays the system-defined ones by default in the Environment tab. User-defined variables are again of two types:
Local: created in the Environment tab; they belong only to that test and cannot be used in other tests.
Global: created in Notepad and saved with the extension .ini, holding whatever environment values you need (user name, password, server name, database, and so on); these can be used in any test.

Re: How to capture the runtime values of a web table?
Answer: A web table contains a number of fields in it. By using GetROProperty("field name") we can retrieve values from the web table.

Re: What is the difference between AOM, DOM and COM? Have you ever used any of these models? If so, why?
Answer:
AOM: Automation Object Model. With this model we can create QTP objects, so without opening QTP we can automate QTP operations: open a test, add repositories, run it, and so on.
DOM: Document Object Model. This comes under HTML; it represents the structure of a web page.
COM: Component Object Model. With this model we can create Excel, MS Word, Outlook objects, etc. With an Excel object we can, without opening the Excel application, add sheets, count rows and columns, and so on.

Re: The interviewer asked me: when you record an application, in the QTP Expert View, wherever the mouse goes a snapshot is displayed in the Active Screen, but some recorded script lines are not displayed in the Active Screen. Why?
Answer: It means those lines were written manually or are not user actions; they might be If statements, While loops and checkpoints which we wrote manually.

Re: Can anybody tell me the common roles and responsibilities of an automation test engineer?
Answer:
1) Check the feasibility of the application that needs to be automated
2) Create an automation test plan
3) Create the automation test cases
4) Choose the correct framework that needs to be used
5) Create the scripts
6) Execute the scripts
7) Analyze the results
8) Report the issues to the concerned person

Re: Can we add a function library directly from scripting in QTP instead of adding it from the Resources tab?
Answer:
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
qtApp.Open "C:\Tests\Test1", False
qtApp.Test.Settings.Resources.Libraries.Add "e:\Utils.vbs"

Re: Which function is used to access properties from the repository?
Answer: You can retrieve object property values at runtime using the GetROProperty method.
E.g. the following retrieves a text box value into the var_Value variable:
var_Value = Browser("test").Page("test").WebEdit("test1").GetROProperty("value")
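Several answers in this section drive QTP through its Automation Object Model. As a minimal end-to-end sketch (the test path "C:\Tests\LoginTest" is a placeholder, not from the original answers), a standalone .vbs file could launch QTP, open a test and run it:

```vbscript
' Hedged sketch: drive QTP via the Automation Object Model (AOM).
' Assumes QTP is installed and a test exists at the placeholder path.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch                            ' Start QuickTest
qtApp.Visible = True                    ' Show the QTP window
qtApp.Open "C:\Tests\LoginTest", False  ' Open an existing test (read-write)
qtApp.Test.Run                          ' Run with default result settings
qtApp.Test.Close
qtApp.Quit
Set qtApp = Nothing
```

Saved as a .vbs file, this can be scheduled or run from the command line without opening QTP manually, which is the usual reason for reaching for AOM.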
Re: What is the function to convert lowercase to uppercase in QTP?
Answer: Use the built-in UCase function:
txt = "have a nice day"
t1 = UCase(txt)
MsgBox t1
Output: HAVE A NICE DAY

Re: How to set function parameters as optional?
Answer: In a function you can't make a parameter optional, but you can mark the statements inside the function as optional by putting the keyword OptionalStep before the statement.
Ex:
Function Login(name, passwd)
    OptionalStep.Brw("login").WebEdit("name").Set name
    Brw("login").WebEdit("passwd").Set passwd
    Brw("login").WebButton("ok").Click
End Function

Re: How to check the values of variables during run time?
Answer: You can achieve this only when you debug the script. Put some breakpoints in your script; the runtime evaluates the variable values up to that breakpoint, and from there on you can watch the values using the Step Into and Step Out shortcut keys.

Re: If an object property is changed in the application, where should it be updated in QTP?
Answer: We can do it in two ways: by changing the property in the object repository, or by using SetTOProperty to handle it during runtime. If you changed any property in the object repository after writing the script, it is better to run the test in Update Run mode.

Re: How will I prioritize the actions when I have 10 actions A1..A10 and I want to run them in a different order, skipping some? How can I do it?
Answer: Open the test, change the action sequence, and comment out the actions you don't want to execute. Like below:
Original actions:
Action 1
Action 2
Action 3
Action 4
Action 5
After modification:
Action 1
Action 5
Action 2
'Action 3   // I don't want to execute this action
Action 4

Features of QTP:
- Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center
- Allows fast test creation, easier maintenance, and more powerful data-driving capability
- Identifies objects with unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution
- Collapses test documentation and test creation into a single step with auto-documentation technology
- Enables thorough validation of applications through a full complement of checkpoints

Re: How to send QTP scripts to our colleagues?
Answer: When you send QTP scripts to your colleagues, send them along with the object repositories (save the object repository and then send it with the test).

Re: How can I find out whether a word exists in a string? For example, how can I find out whether "POWERFULL" exists in "QTP IS A POWERFULL TOOL FOR AUTOMATION"? Could anyone answer it?
Answer: Create a FileSystemObject, read the file until the first letter of the word appears, then read the remaining length of the word and compare it with the word you want to check. Please go through the following code for this example:
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objReadFile = objFSO.OpenTextFile("FILENAME", 1)
Str1 = objReadFile.ReadAll()
Set objReadFile_new = objFSO.OpenTextFile("FILENAME", 1)
Str = "Powerfull"
For i = 1 To Len(Str1)
    Str2 = objReadFile_new.Read(1)
    If Str2 = Left(Str, 1) Then
        Str3 = Str2 & objReadFile_new.Read(Len(Str) - 1)
        If Str3 = Str Then
            MsgBox "Word exists"
            Exit For
        End If
    End If
Next

Re: How exactly do you start scripting in QTP?
Answer: In my company, our Quality Lead analyzes the application and identifies the scenarios for automation. These scenarios are divided among the automation engineers for implementation; that is the point where we (automation engineers) start our automation process. First we create the automation folder structure, i.e. separate folders for storing object repository files, scripts, test data, test results, user-defined functions and recovery scenarios. After creating the folder structure, we start recognizing the objects using the Object Repository Manager and store the object repository file in the specified folder. We then attach the repository file to our test by using Resources -> Associate Repositories, and we start writing the scripts for the scenarios. We follow a modular-driven framework, and we also use a data-driven framework wherever necessary to test the application with multiple sets of data.

Re: Can you write a script to do data-driven testing through external and internal Excel sheets?
Answer: The following commands add, import and export sheets in QTP:
DataTable.AddSheet(sheetName)
DataTable.ImportSheet(fileName, sourceSheet, destinationSheet)
DataTable.ExportSheet(fileName, sourceSheet)

Re: Can you give a procedure to handle a pop-up window, and write code for that?
Answer: You can use a pop-up recovery scenario to recover from the problem. Assume a pop-up window contains Yes/No/Cancel buttons.
Function name: Popup_Recover()
Public Function Popup_Recover()
    If Browser("title:=.*").Dialog("name:=.*").Exist Then
        Browser("title:=.*").Dialog("name:=.*").Link("name:=Yes").Click
    End If
End Function

Re: How do I open two URLs in one browser?
Answer:
Dim ie1, ie2
Set ie1 = CreateObject("InternetExplorer.Application")
ie1.Visible = True
ie1.Navigate "http://localhost/registration/index"
Set ie2 = CreateObject("InternetExplorer.Application")
ie2.Visible = True
ie2.Navigate "http://localhost/enquiry/index"
(Note: the correct ProgID is "InternetExplorer.Application", and each CreateObject call actually opens a separate browser window.)

Re: What is the VBScript to enter a URL into the browser?
Answer: Use the Browser object's Navigate method:
Browser("browser").Navigate "http://www.yahoomail.com"

Re: What is the difference between the Automation Object Model (AOM) and the Test Object Model (TOM)?
Answer: By using the QTP Automation Object Model we can write programs that configure QTP operations and settings, using the objects, methods and properties provided by QTP. For example, you can create and run an automation program from Microsoft Visual Basic that loads the required add-ins for a test or component, starts QuickTest in visible mode, opens the test, configures settings that correspond to those in the Options, Test Settings, and Record and Run Settings dialog boxes, runs the test, and saves the test.
The Test Object Model is nothing but how QTP identifies the objects in the application: which mandatory and assistive properties are used while recording, how QTP identifies the objects during the run session, how to get the class of an object, and so on.

Re: Using descriptive programming, how do I close all opened browsers?
Answer:
Set b = Description.Create
b("micclass").Value = "Browser"
Set obj = Desktop.ChildObjects(b)
For i = 0 To obj.Count - 1
    c = obj(i).GetROProperty("title")
    MsgBox "Closing " & c
    obj(i).Close
Next

Re: How can I count the spaces in a sentence or string, for example "It is a Testing question" (4 spaces)? Is there any function to find the spaces between words?
Answer:
Test:
str = "it is a testing question"
Call spacecount(str, d)   ' function call
MsgBox "Number of spaces: " & d
User-defined function:
Public Function spacecount(a, b)
    p = Split(a)          ' split on spaces
    coun = 0
    For i = 1 To UBound(p)
        coun = coun + 1
    Next
    b = coun              ' returns the number of spaces to the caller
End Function
(This counts single spaces between words; UBound(p) alone would give the same number.)
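If the goal is literally one browser window rather than two, a sketch that reuses a single InternetExplorer instance may be closer to what the question asks (the URLs are the same placeholders used in the answer; this is an alternative, not the original poster's code):

```vbscript
' Hedged sketch: visit two URLs one after another in a single IE window,
' instead of creating two InternetExplorer instances.
Dim ie
Set ie = CreateObject("InternetExplorer.Application")
ie.Visible = True
ie.Navigate "http://localhost/registration/index"
' ... interact with the first page, then reuse the same window:
ie.Navigate "http://localhost/enquiry/index"
```

The second Navigate call replaces the page in the existing window, so only one browser instance is ever open.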
Re: How do I find the current number of links in a web page? For example, in the Yahoo website today the main page has 50 links and the next day the same page has 60 links. If I run the same program, it should display the total links in the web page.
Answer:
' Count the number of links in a web page.
Set chobj = Description.Create
chobj("micclass").Value = "Link"
Set obj = Browser(" ").Page(" ").ChildObjects(chobj)
n = obj.Count
MsgBox "No of links: " & n
(The same approach works for WebLists by setting micclass to "WebList".)

Re: I have a string "Redfort is in Delhi". How do I write VBScript for "Delhi in is Redfort"?
Answer:
str = "Redfort is in Delhi"
arr = Split(str, " ")
For i = UBound(arr) To LBound(arr) Step -1
    res = res & " " & arr(i)
Next
MsgBox res

Re: There is a name "AJAY". How do I count the number of occurrences of "a" in that name?
Answer:
Dim txt, cnt
txt = "ajay"
p = Split(txt, "a")
cnt = 0
For i = 1 To UBound(p)
    cnt = cnt + 1
Next
MsgBox cnt
(Split is case-sensitive, so the text is taken in lowercase here; UBound(p) itself equals the number of occurrences.)

Re: How to capture screenshots when an error occurs?
Answer: Use a recovery function to capture the image when an error occurs.
Step 1: Create a VBS file with the following function:
Function RecoveryFunction1(Object, Method, Arguments, retVal)
    ' Find the test folder path
    Set qtApp = CreateObject("QuickTest.Application")
    testpath = qtApp.Folders.Item(1)
    ' Store the image inside the test folder
    image_name = testpath & "\imagename.png"
    Desktop.CaptureBitmap image_name
End Function
Step 2: Go to the Recovery Scenario Manager.
Step 3: Select "On any error" (or your own trigger option).
Step 4: Select "Function call" as the recovery operation.
Step 5: Point it at the file created above. The rest QTP will do for you.
Or, in QTP 9.x, follow this navigation: Tools -> Options -> Run tab -> in the "Save step screen capture to results" drop-down list box, select "On error" (the default). We can also configure whether to proceed to the next step or stop the test execution when an error occurs: File menu -> Settings -> Run tab -> choose the required action from the "When error occurs during run session" drop-down list box.

Re: In a web site, the protocol has been changed from http: to https:. What is your approach?
Answer: We can disable the Security Alert that appears when redirecting from https to http. Please follow the below steps on the particular machine where the web script is to be run. The registry entry WarnOnHTTPSToHTTPRedirect takes the following two values:
1: Display the Security Alert message.
0: Do not display the Security Alert message.
To add the registry entry WarnOnHTTPSToHTTPRedirect for an individual user, follow these steps:
1. Click Start, click Run, type regedit, and then click OK.
2. Locate and then click the following key in the registry: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
3. On the Edit menu, point to New, and then click DWORD Value.
4. Type WarnOnHTTPSToHTTPRedirect, and then press ENTER.
5. On the Edit menu, click Modify.
6. Type 0, click OK, and then quit Registry Editor.

Re: How do I use custom checkpoints in QuickTest Professional?
Answer: Custom checkpoints are user-specific checks: if we want to verify a particular object or any of its properties, we write our own verification function.
Example:
Function VerifyObjectExists(AppObject)
    If AppObject.Exist Then
        Print "Object exists"
        VerifyObjectExists = "True"
    Else
        Print "Object does not exist"
        VerifyObjectExists = "False"
    End If
End Function
Set Obj = Dialog("Login").WinButton("OK")
VerifyObjectExists(Obj)

Re: How will you test the content of a web application without using checkpoints in QTP 9.2?
Answer: var = Browser("...").Page("...").GetROProperty("innertext")

Re: How can you open Notepad and write the test results to a Notepad file using QTP?
Answer:
Set a = CreateObject("Scripting.FileSystemObject")
Set b = a.OpenTextFile("E:\raju\1.txt", 2)   ' 1 = read mode, 2 = write mode, 8 = append mode
b.WriteLine "I Love QTP"                      ' write a line to the file
The third argument of OpenTextFile controls creation: True creates a new file when the specified file does not exist; with False, QTP raises an error when the file does not exist. To read instead, open in mode 1 and loop:
While b.AtEndOfStream <> True
    y = b.ReadLine   ' read the next line from the file
Wend
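The recovery-function idea above can be packaged as a small reusable helper. A hedged sketch, assuming it runs inside QTP (Desktop.CaptureBitmap is a QTP utility-object method; the folder path and file-name scheme are placeholders I introduced):

```vbscript
' Hedged sketch: capture a timestamped screenshot, e.g. from a recovery
' scenario's "Function call" operation. The folder path is a placeholder.
Function CaptureOnError()
    Dim imgName, stamp
    ' Build a file-system-safe timestamp from the current date/time.
    stamp = Replace(Replace(CStr(Now), "/", "-"), ":", "-")
    imgName = "C:\Screenshots\error_" & stamp & ".png"
    Desktop.CaptureBitmap imgName, True   ' True overwrites an existing file
End Function
```

Writing each capture to a unique timestamped file avoids the single fixed "imagename.png" in the answer above being overwritten on every error.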
Re: What is Business Process Testing? Please explain.
Answer: Business Process Testing is a web-based test design solution that bridges the quality chasm between subject matter experts and quality engineers. It enables subject matter experts to build, data-drive and execute manual and automated tests without any programming knowledge, reduces the overhead of test maintenance, and combines test automation and documentation into a single effort.
Business Process Testing is also a role-based testing model. It enables automation engineers and subject matter experts to work together to test an application's business processes during the application's development life cycle. Subject matter experts understand the various parts of the application being tested, as well as the business processes that need to be tested; however, they may not necessarily have the programming knowledge needed to create automated tests. They use the Business Components and Test Plan modules in Quality Center to create keyword-driven business process tests. Automation engineers are experts in automated testing. They use QuickTest to define the resources and settings needed to create components, which are the building blocks of business process tests. Integration between QuickTest and Quality Center enables the automation engineer to effectively create and maintain the required resources and settings, while enabling subject matter experts to create and implement business process tests in a script-free environment.

Re: The .qfl extension is for which file? Where do we use it?
Answer: Write all your functions in a single file for reuse. We can store these functions in a ".vbs" (VBScript file), ".txt" (text file) or ".qfl" (QuickTest Function Library) file. If you create a function library from QTP and save it directly from QTP, it will be saved with the .qfl extension. If you want to view or edit it, you can open it with Notepad or with QTP.

Re: Can you explain the SendKeys concept?
Answer: SendKeys sends one or more keystrokes to the active window. There are two types of SendKeys available in QTP:
1. SendKeys method (CustomServerBase)
2. SendKeys method (IMicDeviceReplay)

Re: Can we call a test in another test?
Answer: By default, all the script is recorded in the action itself, so there is one action for each test (Action1). We can call this action from another test.
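As a runtime alternative to associating a library through the Resources tab, a QTP script can also pull a function file in with ExecuteFile (and newer versions add LoadFunctionLibrary). A hedged sketch with a placeholder path and a hypothetical function name:

```vbscript
' Load an external .vbs/.qfl function library at run time.
' "E:\Libraries\CommonFunctions.vbs" and Login(...) are placeholders.
ExecuteFile "E:\Libraries\CommonFunctions.vbs"

' Functions defined in that file are now callable from this action:
' Call Login("user1", "password1")
```

One caveat worth knowing: statements run via ExecuteFile cannot be stepped through in the QTP debugger, which is why permanently associating the library through the test settings is usually preferred for shared code.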
Re: How can we map Action and Component parameters?
Answer: Resources -> Map Repository Parameters. There you can select the mapping for an action or for the entire test.

Re: When we enter a URL, a page should open. How do we check whether that page is open or not?
Answer:
SystemUtil.Run "C:\Program Files\Internet Explorer\iexplore.exe", "", "C:\Program Files\Internet Explorer", "open"
Browser("title:=about:blank").Page("url:=about:blank").Sync
Browser("title:=about:blank").Navigate "http://www.google.com/"
If Browser("title:=Google").Page("title:=Google").Exist Then
    MsgBox "Page exists"
Else
    MsgBox "Page does not exist"
End If

Re: How can we encrypt the username in a login window? There are two encryption types — what are they?
Answer: The methods for encryption in QTP are as follows:
1) The SetSecure method, used only for text box objects like WinEdit/WebEdit:
Browser("dfgd").Page("test").WebEdit("test1").SetSecure "1HG76BHGJ89sd8Jkl9hjs988dsnjsk" ' encrypted text
2) Crypt.Encrypt(valueToBeEncrypted):
E.g. var_Value = Crypt.Encrypt("Tester")
Usage in a test script:
pwd = "GetPasswordFromSomewhere"
e_pwd = Crypt.Encrypt(pwd)
Dialog("pass").WinEdit("pwd").SetSecure e_pwd

Re: I learned all the objects of my application in the object repository through the English language. Now I want to use the same object repository for other languages (French, Dutch and Chinese). How?
Answer: First I tried to use a non-language-specific way of identifying the objects, like the window ID (a unique number, and it is also only one primary identifier). This of course does not work for a window or a dialog, so here I create an environment file for each language (including one for English — XML, but simple) and map the window identification to environment variables (a little complicated). On execution I find out the current language and use that as part of the environment file name to load, e.g. EVF_English.xml; if it finds French, it loads the EVF_French.xml environment file. Since it is only windows I have to worry about, there are fewer variables I have to keep in the environment files. And one more thing: an environment variable name cannot have spaces in it, so you can use Environment("MainWindow") but not Environment("Main Window").

Re: This is Ajay. I have a few doubts; if anybody knows, please give me a reply.
1. I have opened 2 Gmail sessions; I am working with one Gmail and I want to close the other one using a script.
2. I have 10 links in a page, all of them with the same properties and names; I want to click on the 5th link using a script.
3. I have one combo box in which I want to see all the city names, and I have to check whether Hyderabad is present or not.
4. What is the size of the object repository?
5. What problems do we get while writing the script?
6. Is it possible to compare two Excel sheets in QTP? If possible, what is the script?
7. Write a VB script example for low-level recording.
Answer:
1) The below code will close the second Gmail browser if two or more Gmail pages are opened; otherwise it won't close anything.
If Browser("title:=Gmail: Email from Google", "index:=1").Exist Then
    Browser("title:=Gmail: Email from Google", "index:=1").Close
End If
If only one browser is opened, its index value is zero; for the second browser the index value will be 1, and so on. If you want to close any particular browser, use only the index property in the Browser object.
2) Use the index property in descriptive programming so that you can perform the click action on the 5th link. Index starts from zero, so the fifth link has index value 4. For example, if ABCD is the name of the link and the page contains 10 links with the same name:
Browser("...").Page("...").Link("name:=ABCD", "index:=4").Click
3) The below code verifies whether the given city (Hyderabad) exists in the CityList combo box or not ("all items" returns the list items separated by semicolons):
CityName = "Hyderabad"
Result = 0
all_items = Browser("Cities").Page("Cities").WebList("CityList").GetROProperty("all items")
For Each aitem In Split(all_items, ";")
    If aitem = CityName Then
        Reporter.ReportEvent micPass, "City is found in the list", "Passed"
        Result = 1
        Exit For
    End If
Next
If Result = 0 Then
    Reporter.ReportEvent micFail, "City is not found in the list", "Failed"
End If
4) The size of the object repository is the size of the .tsr file used by your script.
5) In general, you can face the following problems while preparing the script:
1. Sometimes you may not be able to find/recognize a unique property value for an object.
2. Synchronization: this is one of the major issues while preparing and executing the script.
3. If you have a good automation framework, it is easier to prepare the scripts.
4. Beyond that, it depends on the logic you implement in the script.
6) It is possible to compare two Excel sheets in QTP. The steps to prepare the script:
Step 1: Import the first Excel sheet into the Global sheet and the second Excel sheet into the Local sheet.
Step 2: Get the row counts of the Global sheet and the Local sheet.
Step 3: Compare the row counts and column counts of the two sheets. If they are equal, verify each cell value. If any comparison fails (row/column counts or a cell value), the two Excel sheets are different; otherwise the two Excel sheets are equal.

Re: What are the advantages of QTP over WinRunner?
Answer: WinRunner vs QTP:
1. WinRunner doesn't support web applications; QTP supports all kinds of web applications.
2. WinRunner only supports IE and Netscape Navigator; QTP supports all types of browsers.
3. The default timeout setting of WinRunner is 10000 ms; the default timeout setting of QTP is 20000 ms.
4. The logical names of true and false in WinRunner are 1 and 0; in QTP they are True and False.
5. WinRunner uses TSL (Test Script Language); QTP uses VBScript.
6. WinRunner doesn't support Unicode compatibility; QTP supports Unicode compatibility.
7. WinRunner has 3 kinds of checkpoints, 2 kinds of recording, and 3 kinds of exception-handling mechanisms; QTP has 7 kinds of checkpoints, 3 kinds of recording, and 4 kinds of exception-handling mechanisms.
8. WinRunner has four kinds of add-ins; QTP also has four kinds of add-ins.
9. In WinRunner, GUI Spy is available; in QTP, Object Spy is available.
10. WinRunner doesn't provide a snapshot of the output; QTP provides a snapshot of the output.

Re: If you entered Yahoo mail with your valid user name and password, how can you test that the user name is correct using QTP?
Answer: After logging in you will see a "Welcome, username" text on top of the Inbox page (for example "Welcome, Krishna"). You can use the GetROProperty method to get that value and compare it with your parameter, for example:
GetAppVal = Browser("XXX").Page("XXX").WebElement("html tag:=TD", "index:=3").GetROProperty("innertext")
getparam = DataTable.Value("name", dtGlobalSheet)
If GetAppVal = getparam Then
    MsgBox "pass"
Else
    MsgBox "fail"
End If
Here I just gave the logic to check the value, and I used the data table for the example; instead of a message box you could use a Reporter statement or write to an external log. Many people (including me) use a separate Excel file for parameterization.

Re: How to handle exceptions using the Recovery Scenario Manager in QTP?
Answer: You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run; in other words, you instruct QTP whether to stop the test run, move on to the next step, or move on to the next iteration. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered events
2. Recovery steps
3. Post-recovery test run
The Recovery Scenario Manager is used to handle unexpected events or errors that occur during the test run. The main objectives are: 1) if any event or error occurs, how to handle it — for example, when a pop-up window is displayed in the middle of a test run, we define easy steps that instruct QTP how to identify that particular pop-up; 2) as soon as it is identified, what to do next — e.g. suppressing that particular pop-up window, so we define some steps to suppress it; 3) the post-recovery steps, i.e. what the test run should do once the error or pop-up has been handled successfully. Save the recovery scenario with the ".qrs" extension and associate it to the test (File -> Test Settings -> Recovery tab -> add the recovery scenario we saved), then run the test. Any doubts regarding recovery, you can raise them.

Re: How many types of parameters are available in QTP? Please explain with examples.
Answer: You can use the parameter feature in QuickTest to enhance your test or component by parameterizing the values that it uses. A parameter is a variable that is assigned a value from an external data source or generator. You can parameterize values in steps and checkpoints in your test or component, and you can also parameterize the values of action parameters. If you wish to parameterize the same value in several steps, you may want to consider using the Data Driver rather than adding parameters manually. There are four types of parameters:
1. Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test. In order to use a value within a specific action, you must pass the value down through the action hierarchy of your test to the required action. For example, suppose you want to parameterize a step in Action3 using a value that is passed into your test from the external application that runs (calls) your test. You can pass the value from the test level to Action1 (a top-level action) to Action3 (a child action of Action1), and then parameterize the required step using this action input parameter value.
2. Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table. For example, you can have QuickTest read all the values for filling in a web form from an external file.
3. Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, values that QuickTest generates for you based on conditions and options you choose, or one of QuickTest's built-in environment variables that insert current information about the machine running the test or component.
4. Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a "number of tickets" edit field.
An example of parameterization: suppose your application or web site includes a feature that enables users to search for contact information from a membership database. When the user enters a member's name, the member's contact information is displayed, together with a button labeled "View <MemName>'s Picture", where <MemName> is the name of the member. You can parameterize the name property of the button so that during each iteration of the run session, QuickTest can identify the different picture buttons.
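A Data Table parameter of the kind described above can be sketched in a few lines. This is a hedged illustration: the column name "UserName" and the object names are placeholders, not from the original answer:

```vbscript
' Hedged sketch: iterate over the Global data sheet and feed each row's
' "UserName" value into a login field. Object/column names are placeholders.
rowCount = DataTable.GetSheet("Global").GetRowCount
For i = 1 To rowCount
    DataTable.SetCurrentRow(i)
    Browser("Login").Page("Login").WebEdit("UserName").Set _
        DataTable.Value("UserName", dtGlobalSheet)
Next
```

In practice the loop is often unnecessary — setting "Run on all rows" in the test's Run settings makes QTP iterate the Global sheet automatically, one iteration per row.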
Re: How do we read a runtime property of an object?
Answer: For example, suppose we want to know what data is in a text box while the application is running:
FName = Browser("Browser").Page("Page").WebEdit("FirstName").GetROProperty("text")
MsgBox FName
The runtime text present in the FirstName field is stored in the FName variable and can be seen using MsgBox.

Re: How can you select a random value for every iteration from a WebList?
Answer: First get the items count from the list, then use the RandomNumber function to get a random index value. The below code will work:
ItemsCount = Browser("Browsername").Page("PageTitle").WebList("WebListName").GetROProperty("items count")
Browser("Browsername").Page("PageTitle").WebList("WebListName").Select "#" & RandomNumber(0, ItemsCount - 1)
(The "#" prefix selects by index; indices are zero-based, hence ItemsCount - 1.)

Re: Could someone explain to me how to create a folder on the desktop from QTP? (This is an interview question.)
Answer:
Set a = CreateObject("Scripting.FileSystemObject")
Set b = a.CreateFolder("path of the folder")

Re: How do I get the font size of a WebEdit?
Answer: We use the outerhtml property and the Split concept to get the size. For example, if the outer HTML is <input size=12 ...>:
a = Browser("yahoo").Page("yahoo").WebEdit("name").GetROProperty("outerhtml")
i = Split(a, "size=")
MsgBox Split(i(1), " ")(0)   ' the token after "size=" is the size value

Re: What is the difference between Dim and ReDim?
Answer:
Dim statement: declares variables and allocates storage space.
Dim varname[([subscripts])][, varname[([subscripts])]] . . .
Variables declared with Dim at the script level are available to all procedures within the script. At the procedure level, variables are available only within the procedure. You can also use the Dim statement with empty parentheses to declare a dynamic array. After declaring a dynamic array, use the ReDim statement within a procedure to define the number of dimensions and elements in the array. If you try to redeclare a dimension for an array variable whose size was explicitly specified in a Dim statement, an error occurs.
ReDim statement: declares dynamic-array variables, and allocates or reallocates storage space at procedure level.
ReDim [Preserve] varname(subscripts) [, varname(subscripts)] . . .
The ReDim statement is used to size or resize a dynamic array that has already been formally declared using a Private, Public, or Dim statement with empty parentheses (without dimension subscripts). You can use the ReDim statement repeatedly to change the number of elements and dimensions in an array. If you use the Preserve keyword, you can resize only the last array dimension, and you can't change the number of dimensions at all. For example, if your array has only one dimension, you can resize that dimension because it is the last and only dimension. However, if your array has two or more dimensions, you can change the size of only the last dimension and still preserve the contents of the array. The following example shows how you can increase the size of the last dimension of a dynamic array without erasing any existing data contained in the array:
ReDim X(10, 10, 10)
. . .
ReDim Preserve X(10, 10, 15)
Caution: if you make an array smaller than it was originally, data in the eliminated elements is lost.

Re: How do you do data-driven testing using an MS Word document? What is the script for that?
Answer:
Option Explicit
Dim fso, f, n
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.OpenTextFile("path of the document", 1, False)
While f.AtEndOfStream <> True   ' until the end of the file content
    n = f.ReadLine              ' retrieve the data line by line
Wend
In the OpenTextFile call: 1 = read mode, 2 = write mode, 8 = append mode; True = create a new file when the specified file does not exist, False = QTP raises an error when the specified file does not exist. Note that FileSystemObject reads plain-text content, so this script works for Notepad/flat files; for real MS Word (.doc) documents you would automate the "Word.Application" COM object instead.
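The Dim/ReDim rules above can be condensed into a minimal runnable illustration of a dynamic array growing with Preserve:

```vbscript
' Minimal illustration of Dim vs ReDim Preserve with a dynamic array.
Dim arr()             ' dynamic array: declared, but no size yet
ReDim arr(2)          ' allocate 3 elements (indices 0..2)
arr(0) = "a"
arr(1) = "b"
arr(2) = "c"
ReDim Preserve arr(4) ' grow to 5 elements, keeping the existing data
MsgBox arr(1)         ' still "b" - Preserve kept the contents
```

Replacing the last ReDim with a plain `ReDim arr(4)` (no Preserve) would also resize the array, but the previously stored values would be discarded.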
an expandable Tree View of the test specifying exactly where application failures occurred. Re: What is test fusion report? Answer Once a tester has run a test.opentextfile("path of the word document". The ReDim statement is used to size or resize a dynamic array that has already been formally declared using a Private. data in the eliminated elements is lost. in 4th line 1--> Read mode of file 2--> Write mode of file 8--> Append mode of file True--> New file creation when specified file doesnot exist False--> QTP is not responsible when specified file doesnot exist in 6th line f. Re: How do you done Data-Driven Testing using MS-Word.. By combining Test Fusion reports with QuickTest Professional.. Re: what is debugging testing? Answer debugging testing means line by line testing. in script. 10. . ReDim X(10. you can resize that dimension because it is the last and only dimension.
Test harnesses allow for the automation of tests. orchestrate a runtime environment. after entering the Yahoo URL in browser address bar and pressing on enter key the yahoo login page will be displayed on the browser screen.etc For example if u take : Yahoo mail.. User interface means: Which provides interface for the user to interact with back end functions. if yahoo mail application is not providing login page. A test harness should allow specific tests to run (this helps in optimising).plz give a detail answer i need to explain it to the interviewer. and provide a capability to analyse results.. where can u enter the User Id and Password. its Function statement must include an empty set of parentheses.Re: Plz someone tell me about user interface testing and backend testing and hw did u use it in ur project. The test harness is a hook to the developed code. Execute test suites of test cases. . Generate associated test reports. so user interface is used to interact with the system. Database. using this u can enter user id and password to loin into the system. Increased quality of software components and application Re: If requirments changed then how we can teach the QTP this is the new requirment Answer if you are asking new requirement is added to existing functionalities if you are using record and play back then we are going to record the new functionality by using action object repository after that by merging it with shared object repository using objectrepository merger qtp going to identify the new functionalities otherwise by using descriptive programming we are going to add the properties of new functionalities that is the advantage of QTP Re: what's the difference in between function and sub and give some code as well Answer A Function procedure is a series of VBScript statements enclosed by the Function and End Function statements.submits the user requests and displays the server response. 
It has two main parts: the test execution engine and the test script repository. In Web based systems Browser acts as user agent and talks with the server . but can also return a value. which allows user to enter User ID and Password. Password Text fields are user interface elements. A Function procedure is similar to a Sub procedure. Increased probability that regression testing will occur. which can be tested using an automation framework. The typical objectives of a test harness are to: Automate the testing process. Re: what do u mean by test harnesses in qtp? Answer In software testing. A test harness typically provides the following benefits: Increased productivity due to automation of the testing process. a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitor its behavior and outputs. Testing back end testing means testing the code which produces the output for the user input. In Client-Server systems a client tool which is prepared to communicate with server acts as User agent to communicate with server. They can call functions with supplied parameters and print out and compare the results to the desired value. in yahoo mail login page user id. If we come to backend testing: Back end means the business logic(functions)server and Data base. A Function procedure can take arguments a Function procedure has no arguments.
" End Sub Re: what are the 5 types of objects in qtp Answer QTP supports 6 types of objects: Standard Windows Object ± Methods and Properties within this object can be used for testing standard windows objects ActiveX Objects ± Methods and Properties within this object can be used for testing ActiveX objects Visual Basic Objects ± Methods and Properties within this object can be used for testing Visual Basic objects Web Objects ± Methods and Properties within this object can be used for testing Web Objects Utility Objects ± Methods and Properties within this object can be used for testing Utility Objects Supplemental Objects . True) ' open to empty file and also use for open the file which is already created WriteLineToFile = f. Sub ConvertTemp() temp = InputBox("Please enter the temperature in degrees F.CreateTextFile ("c:\kamesh.OpenTextFile("c:\DIM. or expressions that are passed by a calling procedure).doc files are there (together).True) 'create new empty file . 1) MsgBox "The temperature is " & Celsius(temp) & " degrees C.FileSystemObject") Set f = fso.xls) in code.". its Sub statement must include an empty set of parentheses ().tab_2.. 1.its not proceeding from _1 to _2.see below code Here i open the file from mentioned path and read and write the values on that.txt"..tab_3.txt..Function Celsius(fDegrees) Celsius = (fDegrees . and the Tab name starts with "Tab_" and the number changes then.tab_4 while using a for loop for these tabs. the following loop may helpful for you. How can i use "for loop " Answer For example.Click) Wait 10 Next Re: i have .ReadAll 'Read all the file line to line msgbox WriteLineToFile Set MyFile1 = fso. A Sub procedure can take arguments (constants.32) * 5 / 9 End Function A Sub procedure is a series of VBScript statements (enclosed by Sub and End Sub statements) that perform actions but don't return a value.xls. 
i want find only .Methods and Properties within this object can be used for testing Supplemental objects Re: QTP identifying the child tabs in a maintab like tab_1.Page("Page"). for var_i = 1 to 4 Browsr("BrowserName"). If a Sub procedure has no arguments. Set fso = CreateObject("Scripting.xls file among them? how can we write function? Answer using the extention (.WebElement ("innertext:=tab_" & var_i).txt". variables. If the Tabs are recogized in QTP as Webelements.
MyFile1.WriteLine WriteLineToFile ' Write all the stuf in created file MyFile1.Close Re: what is option explicit? what is the use of it? Answer You can declare variables with the Dim, Public or the Private statement. Like this: Dim myname myname = "ritesh" However, this method is not a good practice, because you can misspell the variable name later in your script, and that can cause strange results when your script is running. If you misspell for example the "myname" variable to "mynime", the script will automatically create a new variable called "mynime". To prevent your script from doing this, you can use the 'Option Explicit' statement. This statement forces you to declare all your variables with the dim, public or private statement. Put the Option Explicit statement on the top of your script. Like this: Option Explicit Dim myname myname = "ritesh" Re: What is difference between function and procedure? Answer 1.Function is mainly used in the case where it must return a value. Where as a procedure may or may not return a value or may return more than one value using the OUT parameter. 2.Function can be called from sql statements where as procedure cannot be called from the sql statements. 3.Functions are normally used for computations where as procedures are normally used for executing business logic. 4.You can have DML (insert, update, delete) statements in a function. But, you cannot call such a function in a SQL Query. 5.A won¶t support deferred name resolution. 7.Stored procedure returns always integer value by default zero. whereas. Re: What is the difference between property and method? Answer Property is nothing like a variable and Method is like a function. You cannot pass parameters to a property and you can directly assign value to it. for a Method, set of lines of code and when it is called that executes set of lines written in it and generates the results. you can pass parameters to the method and you can retrieve the result from the method. 
You cannot directly assign a value to a method, but you can pass values to it as parameters.
Ex: Reporter.Filter – here Filter is a property.
Reporter.ReportEvent – here ReportEvent is a method.

Reporter.Filter = 0   ' this enables all events in the test results; you assign the value directly to the property
Reporter.ReportEvent micPass, "Msg1", "Msg2"   ' here we pass three parameters to the ReportEvent method, which sends the result to the Test Results

Reporter is a predefined utility object in QTP that has many properties and methods.

Re: In QTP, what is the difference between Step Into, Step Out and Step Over?
Answer
These are the debugging modes:

Step Into: executes only the current line of the script. If the current line calls a function or method, execution pauses at the first line of that function, so the function can then be run line by line.

Step Out: is available after Step Into has entered a function or method; it runs the rest of that function at once and pauses at the line following the call. If the control is at the function call itself (not inside it), the whole function runs in one step, like Step Over.

Step Over: executes the current line of the script. If the current line calls a function or method, the complete function is executed in one step and execution pauses at the next line of the calling script.

Note: at design time only Step Into is enabled; once you select Step Into during a run session, the other two modes become enabled as well.

Re: What are functions in QTP? I know about user-defined and built-in functions, and private and public functions. Please tell me what functions are in QTP and how to create a function in QTP.
Answer
Function: a function is a block of statements which is used to perform a particular task. We can call it from any test, pass parameters from the calling code, and get the result back from the called function.
A function returns a single value through its name (the return value is available at the point of the call), and we can pass any number of input parameters and get any number of output parameters back, then use those output values in tests. We can also declare a function within a function. There is no need to explain public and private functions in interviews unless they ask about the differences; in many interviews they ask the differences between sub procedures and function procedures.

Re: Can anybody send me the code to get the RO property of an Active Screen object in QTP while running?
Answer

a = Browser("brname").Page("pgname").WebEdit("buname").GetROProperty("text")
MsgBox a

Re: How to retrieve XML file data in QTP using script? (Chandana)
Answer
We can load environment variables from an XML file. Go to File --> Settings and click the "Environment" tab. You can see a check box "Load environment variables"; check it and give the path of the XML file. When you run QTP, the variables in the XML file will be available, and you can retrieve those values using Environment.Value("variable_name") from any action.
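As a quick sketch of using such a loaded variable in a step (the object names and the variable name `searchterm` are hypothetical, just for illustration):

```vbscript
' Assumes a user-defined variable "searchterm" was loaded from the external XML file
' (or defined under Test Settings > Environment).
term = Environment.Value("searchterm")
Browser("SearchApp").Page("Home").WebEdit("q").Set term
```

Built-in environment variables such as Environment.Value("OS") can be read the same way, but only user-defined ones can be written.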
For example, to give input to the Google search box through XML data:

<Environment>
    <Variable>
        <Name>googlesearchfor</Name>
        <Value>bike</Value>
    </Variable>
</Environment>

Add the above XML file in Test -> Settings -> Environment -> User-defined. Then go to the Keyword View of your recorded script, click the drop-down button in the 'Value' column, and select the environment parameter from the list.

Re: How can I replace all the text in the QTP script with some other text? Is there any "replace all" function in QTP? Can anyone help me?
Answer
I could not find a built-in function for this. We can do it with this script:

Const ForReading = 1, ForWriting = 2, ForAppending = 8
Dim fso, f
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.OpenTextFile("C:\ais\Totally Data.doc", ForWriting, True)
f.Write "Hello world!"
f.Close

This will replace everything in the given file and write the text that you have given.

Re: How to invoke a web application through script in QTP?
Answer
We have 2 ways in QTP to open a web application through the script:

1) SystemUtil.Run "URL"   or   SystemUtil.Run "iexplore.exe", "URL"

2) SystemUtil.Run "iexplore.exe"
   Browser("creationtime:=0").Navigate "URL"   ' descriptive programming

Re: How does QTP identify two objects having the same name? Suppose the objects are on the same page, also with the same name — specify the special feature.
Answer
The special feature for this problem is ordinal identifiers. QuickTest can use the following types of ordinal identifiers to identify an object:

Index. Indicates the order in which the object appears in the application code relative to other objects with an otherwise identical description. For more information, see Identifying an Object Using the Index Property.

Location. Indicates the order in which the object appears within the parent window, frame, or dialog box relative to other objects with an otherwise identical description. For more information, see Identifying an Object Using the Location Property.

CreationTime. (Browser object only.) Indicates the order in which the browser was opened relative to other open browsers with an otherwise identical description. For more information, see Identifying an Object Using the CreationTime Property.

Re: How can I replace any text in the QTP script with some other text?
Answer
' Replace(expression, find, replacewith)
reptext = Replace("Good Morning", "Morning", "Evening")
MsgBox reptext

Re: How can we export the test result into an Excel sheet?
Answer
To save a copy of the run-time Data Table in a specified location:

Syntax: DataTable.Export(FileName)
Example: DataTable.Export("C:\flights.xls")

Re: Suppose I have a string "wipro123xyz" and I need to get only the value 123 from the string. Today 123 is in the middle; tomorrow it may change to the front or the back. How do I get the number if it changes continuously?
Answer

txt = "123wiproxyz"
mypos = InStr(1, txt, "123")
MsgBox mypos
k = Mid(txt, mypos, 3)
MsgBox k

Re: Suppose I have a Data Table, and in the 2nd row, 3rd column of the Data Table there is a link. I need to validate that link — how do you do this? (Wipro)
Answer

var = "<Link Name>"
n = DataTable.GetRowCount
g = DataTable.GetSheet("Global").GetParameterCount
For i = 1 To n
    DataTable.SetCurrentRow i
    For j = 1 To g
        k = DataTable.Value(j, "Global")
        If k = var Then
            Reporter.ReportEvent micPass, "validating link", "Link is validated"
        Else
            Reporter.ReportEvent micFail, "validating link", "link is not found"
        End If
    Next
Next

Re: How can I pass parameters into a function?
Answer
We definitely can pass parameters to a function. Try this example: first parameterize the agent name from the Data Table, then call the function with that parameter.

Dialog("Login").WinEdit("AgentName").Set DataTable.Value("Agent")
Call Login(DataTable.Value("Agent"))

Function Login(AgentName)
    Dialog("Login").WinEdit("AgentName").Set AgentName
End Function

Re: How can we add a regular expression for a date field (dd/mm/yyyy)?
Answer
Open the Object Repository and select the value property. Click on the constant value; an edit box with a Regular Expression check box is displayed. Select the check box and enter the pattern. For example, for a date field (DD/MM/YYYY):

[0-3][0-9]/[0-1][0-9]/[0-2][0-9][0-9][0-9]

8) How to use environment variables?
A simple definition could be: an environment variable is a variable which can be used across the reusable actions and is not limited to one reusable action. There are two types of environment variables: 1. User-defined 2. Built-in. We can retrieve the value of any environment variable, but we can set the value of only user-defined environment variables.

Example: the following sets a user-defined environment variable, and then retrieves the variable value and stores it in the MyValue variable:

Environment.Value("MyVariable") = 10
MyValue = Environment.Value("MyVariable")

Does QuickTest Professional support Firefox?
QuickTest Professional 9.0: provides replay support for Mozilla FireFox 1.5 and Mozilla Firefox 2.0 Alpha 3 (Alpha-level support for Bon Echo 2.0a3). QuickTest Professional 9.0 will not record on FireFox. You can record a test on Microsoft Internet Explorer and run it on any other supported browser, such as FireFox. The .Object property for web objects is not supported in FireFox.
QuickTest Professional 9.1: provides replay support for Mozilla Firefox 1.5. QuickTest Professional 9.1 will not record on FireFox. You can record a test on Microsoft Internet Explorer and run it on any other supported browser, such as FireFox. The .Object property for web objects is not supported in FireFox.
QuickTest Professional 8.2 and below: do not have support for Firefox.

11) Does QuickTest Professional support Internet Explorer 7.0?
QuickTest Professional 9.1 supports Microsoft Internet Explorer 7.0 Beta 3. Internet Explorer version 7.0 is now certified to work and to be tested with QuickTest Professional version 9.1.
QuickTest Professional 9.0 supports Internet Explorer 7.0 Beta 2.
QuickTest Professional 8.2 and below do not include support for Internet Explorer 7.0.

Windows Media Player issue: if you start Windows Media Player first, it will continue to work normally after starting QuickTest Professional. After QuickTest Professional is started, Windows Media Player will not start, and an error log is created with the message "wmplayer.exe has generated errors and will be closed by Windows. You will need to restart the program." The workaround is to include the Windows Media Player executable in the NoBBTApps section of the mic.ini file:
1. Close QuickTest Professional.
2. Go to <QuickTest Professional installation>\bin\mic.ini.
3. Include wmplayer.exe in the NoBBTApps section of mic.ini. Example:
[NoBBTApps]
wmplayer.exe=rek
4. Save the mic.ini file and restart QuickTest Professional.

13) What is the lservrc file in QTP?
The lservrc file contains the license codes that have been installed. Whenever a new license is created, the license code is automatically added to this file. The lservrc file is a text file, with no extension.

10) How to rename a checkpoint (QTP 9.0)?
Example:
Window("Notepad").WinEditor("Edit").Check CheckPoint("Edit")
In the above example, the user would like to change the name of the CheckPoint object from "Edit" to something more meaningful.

Re: What is the concept of firewalls?
Answer
A firewall ensures that all communications attempting to cross from one network to the other meet an organization's security policy. A firewall tracks and controls communications, deciding whether to allow, reject or encrypt communications. A firewall is increasingly being deployed to protect sensitive portions of local area networks and individual PCs.

Re: What is vulnerability?
Answer
In computer security, the word vulnerability refers to a weakness in a system allowing an attacker to violate the confidentiality, integrity, availability (i.e. the C.I.A. of the NSTISSC triangle), access control, consistency or audit mechanisms of the system or the data and applications it hosts. Vulnerabilities may result from bugs or design flaws in the system. A vulnerability can exist either only in theory, or could have a known exploit. Vulnerabilities are of significant interest when the program containing the vulnerability operates with special privileges, performs authentication or provides easy access to user data or facilities (such as a network server or RDBMS).

Re: What is SQL injection?
Answer
SQL injection is used to break a system's database; it relates to the security of the database. A specially crafted SQL query is sent through an input field, i.e. it is injected into the system to attack the database. SQL injection is a code injection technique that exploits a security vulnerability occurring in the database layer of an application. The vulnerability is present when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements, or user input is not strongly typed and is thereby unexpectedly executed. It is an instance of a more general class of vulnerabilities that can occur whenever one programming or scripting language is embedded inside another. SQL injection attacks are also known as SQL insertion attacks. [1]
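To make the SQL injection answer above concrete, here is a sketch in VBScript (the connection string, table and column names are made up for illustration). The first query is vulnerable because the user's text is concatenated straight into the SQL; the second uses an ADODB parameterized command, so the input is treated as data rather than as SQL:

```vbscript
userInput = "x' OR '1'='1"   ' a classic injection payload

' VULNERABLE: the input is concatenated into the statement, so the
' resulting text becomes:  SELECT * FROM users WHERE name = 'x' OR '1'='1'
' which matches every row in the table.
sql = "SELECT * FROM users WHERE name = '" & userInput & "'"

' SAFER: a parameterized query via ADODB (DSN name is hypothetical)
Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=appdb"
Set cmd = CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn
cmd.CommandText = "SELECT * FROM users WHERE name = ?"
' CreateParameter(Name, Type, Direction, Size, Value): 200 = adVarChar, 1 = adParamInput
cmd.Parameters.Append cmd.CreateParameter("name", 200, 1, 50, userInput)
Set rs = cmd.Execute
```

With the parameterized form, the payload is searched for literally as a name and returns no rows instead of dumping the table.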
Q) Understanding the enigmatic TO and RO property?
The TO and RO properties must have confused (or might still be confusing) you at some point of time. This is my attempt to demystify the two with inputs from various resources.

Test objects are the objects in your test that represent the objects in your website or application. Runtime objects are the objects in your application during the test run. The test object property set is created and maintained by QuickTest. The runtime object property set is created and maintained by the object creator (Microsoft for Internet Explorer objects, Netscape for Netscape objects, the product developer for ActiveX objects, etc.).

Test object methods are methods that QuickTest recognizes and records when they are performed on an object while you are recording a test, and that QuickTest performs when your test runs. Runtime object methods are the methods of the object in your application as defined by the object creator (the developer!). You can access and perform runtime object methods using the Object property.

Q) How to programmatically minimize/maximize the QTP window?
A. Use QuickTest Professional's Automation Object Model (AOM) to set the WindowState property from within the script object.

object.WindowState [= value]

object: a QuickTest Professional application object.
value: a string value representing the window state, with the following possible values:
Maximized - QuickTest Professional is displayed at full screen size.
Minimized - QuickTest Professional is open, but minimized.
Normal - QuickTest Professional is displayed at the size it was prior to the last minimize or maximize operation.

Example:

Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.WindowState = "Minimized" ' Minimize QTP
wait 2
qtApp.WindowState = "Normal" ' Restore QTP
wait 2
qtApp.WindowState = "Maximized" ' Maximize QTP
Set qtApp = Nothing

Note: check that you have selected "Allow other Mercury products to run tests and components" from Tools --> Options --> Run tab.
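A small sketch of how the TO/RO distinction shows up in a script (the object names here are hypothetical):

```vbscript
' GetTOProperty reads what is stored in the test object description (repository),
' while GetROProperty asks the actual object in the application at run time.
toName = Browser("MyApp").Page("Login").WebEdit("UserName").GetTOProperty("name")
roValue = Browser("MyApp").Page("Login").WebEdit("UserName").GetROProperty("value")

' SetTOProperty changes the description QTP uses for the rest of the run session;
' it does not touch the application object itself.
Browser("MyApp").Page("Login").WebEdit("UserName").SetTOProperty "name", "user_name"

' The Object property drops down to the native runtime object and its own methods:
Browser("MyApp").Page("Login").WebEdit("UserName").Object.focus
```

In short: TO properties live in QuickTest's description of the object; RO properties live in the application itself.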
Q) QTP hangs during startup when attempting to log into Quality Center?
QuickTest Professional (QTP) is set to automatically connect to and log into Quality Center on startup. The user's login credentials changed, and now when he attempts to launch QTP, it hangs while trying to log into Quality Center. He cannot launch QTP to make the needed changes for his credentials. How can he launch QTP so he can correct the login information?
A. Modify the mic.ini file (<QTP installation>\bin\mic.ini) to set QTP to launch without connecting to Quality Center. You can deactivate the automatic connection to Quality Center by modifying the mic.ini file:
1. Open the mic.ini file.
2. Locate the [TestDirector_<user>] section. Example: [TestDirector_admin]
3. Change the following values from 1 to 0:
For QuickTest Professional 9.0:
LoginAutomatically=0
ReconnectToDB=0
ReconnectToServer=0
For QuickTest Professional 8.2:
ReconnectOnStart=0
ReconnectToDB=0
ReconnectToServer=0
4. Save the mic.ini file.
Now when you launch QuickTest Professional, it will not try to connect to Quality Center. You can launch QuickTest Professional and make the necessary modifications to log into Quality Center automatically.

Q) Can existing licenses be used with later versions of QuickTest Professional?
A. The same license that was used for QTP 6.0 can be used with QTP 6.x. This applies to concurrent (floating) and seat (local) licenses. With the 8.x releases, the licenses were separated for the individual add-ins. If you have a 6.x add-in license, you will need to request a new license for the 8.x versions of the add-ins.

Q) How to use regular expressions (or wildcards)?
A. Regular expressions enable Astra QuickTest/QuickTest Professional (AQT/QTP) to identify objects and text strings with varying values. A regular expression is a string that specifies a complex search phrase: by using special characters such as a period (.), asterisk (*), caret (^), and brackets ([ ]), you define the conditions of the search, just as in mathematics and programming languages. You can use regular expressions when defining the properties of an object, when parameterizing a step, and when creating checkpoints with varying values.

Regular expression syntax: AQT/QTP searches for all characters in a regular expression literally, except for the regular expression characters. When one of these special characters is preceded by a backslash (\), AQT/QTP searches for the literal character. The following options can be used to create regular expressions (this is not an all-inclusive list):

backslash (\) - instructs AQT/QTP to treat the next character as either a special character if it is otherwise an ordinary character, or a literal character if it is otherwise a special character.
period (.) - instructs AQT/QTP to search for any single character.
asterisk (*) - instructs AQT/QTP to match zero or more occurrences of the preceding character.
caret (^) - instructs AQT/QTP to match the expression only at the start of a line, or after a newline character. When used within brackets, it instructs AQT/QTP to search for a character not in the specified list.
dollar sign ($) - instructs AQT/QTP to match the expression only at the end of a line, or before a newline character.
brackets ([ ]) - instruct AQT/QTP to search for any single character within a list of characters: in a list, within a range (when used with a hyphen "-"), or not in a list (when used with the caret).
parentheses (()) - instruct AQT/QTP to treat the contained sequence as a unit.
plus sign (+) - instructs AQT/QTP to match one or more occurrences of the preceding character.
question mark (?) - instructs AQT/QTP to match zero or one occurrences of the preceding character.
vertical line (|) - instructs AQT/QTP to match one of a choice of expressions.

For additional regular expression options and a more detailed explanation of the above options, please refer to the User's Guide.

Q) Unable to use a demo license on machines whose previous demo license expired?
A. From the QTP 8.0 Readme file: "You cannot install QuickTest Professional 8.0 with a demo license over a previous version of QuickTest Professional whose demo license has expired and a regular license was never installed for it."
Submit a temporary License Request. You can also submit the request by sending an email to support@mercury.com. In the "Subject" line insert "License Request". In the body of the email include the locking code, maintenance number, and the type of license you are requesting. In the case of a demo license it can only be a Seat license. Also, make sure you specify that you are requesting a 14-day temporary license since the demo license of a previous version of QTP has expired.

Q) How to delete objects from the object repository (QTP 9.0)?
A.
Removing objects from a local object repository:
1. Go to Resources -> Object Repository.
2. In the Filter combo box, select "Local Objects."
3. In the object tree, select the object to be removed.
4. Click the delete toolbar button.
5. Click <Yes> to confirm the deletion. The object, and any children it may have, will be removed.
6. Save the test to save the updated local repository.

Removing objects from a shared object repository:
1. Go to Resources -> Object Repository Manager.
2. Go to File -> Open.
3. Navigate to the desired Shared Object Repository file. By default, the repository will open in read-only mode.
4. Go to File -> Enable Editing.
5. In the object tree, select the object to be removed.
6. Click the delete toolbar button, then click <Yes> to confirm the deletion. The object, and any children it may have, will be removed.
7. Save the updated shared repository.

Q) What happens if multiple associated repositories have objects with the same name (QTP 9.0)?
A. If an object with the same name and description is located in both the local object repository and in a shared object repository associated with the same action, the action uses the local object definition. If an object with the same name and description is located in more than one shared object repository associated with the same action, the object definition is used from the first occurrence of the object, according to the order in which the shared object repositories are associated with the action.
Note: QuickTest Professional will not generate an error when multiple repositories containing objects with the same name are associated with the action.

Example: You have the following webpage:

<html>
<body>
<table>
<tr>
<td><a href="http://www.google.com">Google</a></td>
Q. Is there a tool that will automatically convert QTP 8.2 scripts to QTP 9.0?

A. There is no tool or utility that will automatically convert QTP 8.2 (or earlier) scripts for use with QTP 9.0. However, opening a test in QTP 9.0 and saving it converts it, so you can use QuickTest Professional's Automation Object Model (AOM) to convert the tests in a batch: all the AOM needs to do is open each test in write mode and then save it.

Note: The following code is an example. It is not part of QuickTest Professional, it is not guaranteed to work, and it is not supported by Mercury Customer Support. You are responsible for any and all modifications that may be required. AOM statements need to be placed in a .vbs file; they cannot be run from within QuickTest Professional.

Example:

Dim qtApp
Dim tests(3)
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
' Tests to convert
tests(0) = "C:\Program Files\Mercury Interactive\Tests\addinproptest"
tests(1) = "C:\Program Files\Mercury Interactive\Tests\checkpoint"
tests(2) = "C:\Program Files\Mercury Interactive\Tests\checkpoint2"
For i = 0 To UBound(tests)
    ' Convert test: open it in write mode
    qtApp.Open tests(i), False, False
    ' wscript.sleep 2000
    ' Save converted test
    qtApp.Test.Save
Next
qtApp.Quit
Set qtApp = Nothing

Q. When the same object exists in the local object repository and in associated shared repositories, which one does QuickTest Professional use?

A. QTP uses the local repository first, then searches the associated shared repositories in the order in which they are listed. The following example demonstrates this with a page that contains three identical Google links:

<html>
<body>
<table>
<tr><td><a href="http://www.google.com">Google</a></td></tr>
<tr><td><a href="http://www.google.com">Google</a></td></tr>
<tr><td><a href="http://www.google.com">Google</a></td></tr>
</table>
</body>
</html>

1. Learn the first link into one shared repository, shared1.tsr.
2. Learn the second link into a different shared repository, shared2.tsr.
3. Learn the third link into the action's local repository. Each repository should now contain a Google object. The object hierarchy and name for each link will be the same (the objects are uniquely identified by the index identifier property).
4. In the Keyword View, right-click on the Action and select "Action Properties". Select the Associated Repositories tab and associate the shared1.tsr and shared2.tsr repositories with the action. The associated repository list should look like the following:
   <local>
   shared1.tsr
   shared2.tsr
   Click <OK> to close the Action Properties window.
5. In your script, enter the following line:
   Browser("Browser").Page("Page").Link("Google").Highlight
6. Run the script. The third Google link will highlight. This is the link in the local repository.
7. Remove the Google link from the local repository.
8. Run the script. The first Google link will highlight. This is the link in the shared1.tsr repository.
9. Reopen the Action Properties window, select the shared1.tsr file in the list, then click the down arrow button. The associated repository list should now look like the following:
   <local>
   shared2.tsr
   shared1.tsr
10. Run the script. The second Google link will highlight. This is the link in the shared2.tsr repository.

Q. Can scripts created in newer versions of QTP be replayed in earlier versions?

A. Scripts that are created or saved in a higher version of QTP are not guaranteed to replay in earlier versions. Scripts created or saved in earlier versions can be replayed in higher versions: QTP is designed to be backward compatible.

Q. How to set the size and position of the QuickTest Professional window for replay?

A. The size/position which is set for the QTP window during replay will be used for the next test run. The Astra QuickTest/QuickTest Professional (AQT/QTP) window resizes itself to its previous size when replaying a test. In other words, if you resize the AQT/QTP window while replaying a test,
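If there are many tests to convert, the hard-coded tests() array in the example above can be replaced by enumerating test folders from disk. The following is a minimal sketch, assuming every subfolder of C:\Tests is a QTP test (the root path is an illustrative assumption; like the example above, this must be run as a .vbs file, not from within QTP):

```vbscript
' Sketch: convert every test found under an assumed root folder.
Dim fso, rootFolder, tstFolder, qtApp
Set fso = CreateObject("Scripting.FileSystemObject")
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
Set rootFolder = fso.GetFolder("C:\Tests")   ' assumed location of the tests
For Each tstFolder In rootFolder.SubFolders
    qtApp.Open tstFolder.Path, False, False  ' open each test in write mode
    qtApp.Test.Save                          ' saving in QTP 9.0 converts the test
Next
qtApp.Quit
Set qtApp = Nothing
```

This keeps the same open/save conversion loop but removes the need to list every test by hand.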
AQT/QTP will resize itself on the next test run to the same size that you set previously.

To set the size and position for replay:
1. Insert a wait(10) statement at the beginning of the test.
2. Replay the test.
3. While AQT/QTP is executing the wait statement, resize and position the AQT/QTP window.
4. Stop the test and remove the wait statement.
5. Replay the test. AQT/QTP will adjust itself to the same size and position that you set it to in step 3.

Q. What is the WLMadmin application and how is it used?

A. WlmAdmin is a tool used to perform maintenance on the License Manager application. Any user (does not have to be an administrator) can use the WlmAdmin utility. Please see the attached PDF document, which provides specific information on how to use the WLMadmin tool. This document describes:
- Description of the interface
- Find the servers that are on your network
- Check the available features
- Check the license usage
- Add a new license key

Note: This document is not part of the QuickTest Professional documentation set. The given information is not guaranteed to work and is not supported by Mercury Interactive Technical Support. Update: The document was created with QuickTest Professional 8.0; the "Check the available features" section does not contain information on newer versions of QuickTest Professional.

Q. What is a locking code?

A. The locking code is a code that is generated specifically for a machine and is required to get a license code. Each machine will generate a different locking code. The structure of the code is 8-XXXXX. Example: 8-20B0D. Send the locking code to Support and a license code will be generated for you. This license code fits ONLY the locking code you sent (i.e., the license will only work on the machine that generated the locking code).

To obtain the locking code from a QuickTest Professional machine (Seat license): You will get the locking code when you try to install a license, or you can retrieve it by running the <QuickTest Professional>/bin/inst_key.exe application.

To get the locking code for the License Server machine (Concurrent license):
1. Insert the "Mercury Concurrent License Server" CD.
2. Click on "Install Floating License."
3. The QuickTest Professional License Installation dialog will appear. In the middle of this dialog, there is a section listing the information you will need to have a license generated. The locking code is contained in that list.

Note: This needs to be done on the actual license server machine. Accessing the license server machine remotely can generate an invalid locking code.

Q. The user is asked to log in when trying to access Active Screens?

A. Depending on the authentication mechanisms used to password-protect resources on a Web site, QuickTest's automatic Active Screen login mechanism may not be sufficient. To enable the Active Screen to access the resources on such a site, you must log in to your site using the Advanced Authentication dialog box:

1. Choose Test -> Settings -> Web tab.
2. Click the Advanced button. The Advanced Authentication dialog box opens. The first time you open this dialog box for a given test, the browser window displays the URL address set for the test in the Web tab of the Record and Run Settings dialog box. If no value is set there, the browser window displays a local QuickTest page.
3. Enter your login information in the page displayed in the Advanced Authentication browser window. If the displayed Web page is not the correct page for logging in to your site, enter the correct URL address in the Address box and click Go. If you navigate to a new URL address using this dialog box, that address becomes the default Advanced Authentication page for this test.
4. When the login process is complete, click Close. The Advanced Authentication dialog box closes.
5. Confirm that the pages are displayed correctly. Refresh the Active Screen by selecting a new step in the test tree, or toggle the Active Screen button to redisplay the Active Screen.

When you log in this way, you remain logged in to the site for the duration of the QuickTest session (or until the Web site's inactivity timeout is exceeded). If you close and reopen QuickTest and then reopen your test, you must log in again.

If the above does not work, disable the scripts running in the Active Screen:
1. Choose Tools -> Options -> Active Screen tab.
2. Click <Advanced>.
3. Under Run Options (in the Appearance (Web) section), select the "Disabled" radio button.
Q. How to use Excel objects in a QuickTest Professional test?

A. Refer to Microsoft's MSDN Library (the Microsoft Excel Object Model, e.g. "Microsoft Excel 2000 Visual Basic for Applications object model") for a complete listing of Excel methods and properties that can be used within a QuickTest Professional (QTP) script. You can use these Excel object methods within a QTP script to create workbooks, create new sheets, input data, etc.

The following is a sub-procedure that uses Excel object methods to output data from a dictionary object to an Excel file. Attached you will find a working example of a QTP test that uses the sub-procedure below: the example retrieves information from a webpage and outputs it to an Excel file (info.xls) using the ReportInformation sub-procedure.

Example:

Sub ReportInformation(dictionary, filename)
    ' create the Excel object
    Set ExcelObj = CreateObject("Excel.Application")
    ' add a new Workbook and a new Sheet
    ExcelObj.Workbooks.Add
    Set NewSheet = ExcelObj.Sheets.Item(1)
    NewSheet.Name = "Page Information"
    ' loop through all the information in the Dictionary object
    ' and report it to the Excel sheet
    row = 1
    For Each key In dictionary.keys
        NewSheet.Cells(row, 1) = key
        NewSheet.Cells(row, 2) = dictionary(key)
        row = row + 1
    Next
    ' customize the Sheet layout
    NewSheet.Columns("A:A").ColumnWidth = 20
    NewSheet.Columns("B:B").ColumnWidth = 60
    NewSheet.Columns("A:A").Font.Bold = True
    NewSheet.Columns("B:B").HorizontalAlignment = -4108 ' xlCenter
    ' save the Excel file
    ExcelObj.ActiveWorkbook.SaveAs filename
    ' close the application and clean the object
    ExcelObj.Quit
    Set ExcelObj = Nothing
End Sub

Q. How to add formatting to a cell in an Excel document?

A. You can use the Excel object model to automate formatting the values in a cell in an Excel worksheet.

Example:

filename = "C:\temp\Book1.xls"
Set ExcelObj = CreateObject("Excel.Application")
ExcelObj.Workbooks.Open filename
Set NewSheet = ExcelObj.Sheets.Item(1)
' Add the text - format bold
NewSheet.Cells(1, 2) = "Hello From QTP!"
NewSheet.Cells(1, 2).Font.Bold = True
' Add the text - format italic
NewSheet.Cells(2, 2) = "Hello From QTP!"
NewSheet.Cells(2, 2).Font.Italic = True
' Add the text - format color
NewSheet.Cells(3, 2) = "Hello From QTP!"
NewSheet.Cells(3, 2).Font.Color = RGB(0, 0, 255)
' Add the text - format underline
NewSheet.Cells(4, 2) = "Hello From QTP!"
NewSheet.Cells(4, 2).Font.Underline = True
' Add the text - format strikethrough
NewSheet.Cells(5, 2) = "Hello From QTP!"
NewSheet.Cells(5, 2).Font.Strikethrough = True
' Add the text - format size
NewSheet.Cells(6, 2) = "Hello From QTP!"
NewSheet.Cells(6, 2).Font.Size = 25
' Add the text - format subscript
NewSheet.Cells(7, 2) = "1st test"
NewSheet.Cells(7, 2).Characters(2, 2).Font.Subscript = True
' Add the text - format superscript
NewSheet.Cells(8, 2) = "1st test"
NewSheet.Cells(8, 2).Characters(2, 2).Font.Superscript = True
ExcelObj.ActiveWorkbook.Save
ExcelObj.Quit
Set ExcelObj = Nothing

For more information on using the Excel object model with QuickTest Professional, refer to the MSDN Library.

Q. When recording on an object that has one or more parameterized properties, the object is added to the Object Repository with a different name (e.g. "MyObject_2"). Why does this happen?

A. This is the way QuickTest Professional behaves by design. Because the actual value of the parameterized property may change between recording sessions, QTP has no way of knowing whether the current physical description will always match the description in the Object Repository. This is especially evident when the property is parameterized using the data table: in this case, the property has no "current value" at all, so QTP must record a different object and add a new entry for it to the Object Repository.

To avoid this situation, record the whole script first and parameterize the descriptions only when you are finished recording. OR, if your test already contains objects with parameterized properties, delete the redundant objects from the Object Repository and change the logical names in the script to match the logical name of the parameterized object in the Object Repository (i.e., when you are done, delete "MyObject_2").

Q. Will QuickTest Professional 9.0 work with Concurrent License Server 8?

A. No. QuickTest Professional 9.0 requires Functional Testing Concurrent License Server 9.0. You cannot use an earlier version of the License Server.

Q. QuickTest Professional with the Java add-in no longer works with Oracle applications or Forms?

A. If you upgraded from a previous version of QTP and were already using the Java add-in to test Oracle applications, then you should have received the Oracle add-in with your upgrade to QTP 6.5 and above.

Q. How do I install or upgrade to the License Server version 9.0?

A. The following steps are the recommended procedure for an upgrade. If installing a new installation of the Functional Testing Concurrent License Server 9.0, start with step 4.
1. Locate the directory where the older version of the License Server is installed. Make note of this directory.
2. Locate and back up the LSERVRC file that is used by the License Server. Check your System environment variables for the LSERVRC variable. If this variable exists, the lservrc file will be located in the directory specified by the LSERVRC variable. If the System environment variable does not exist, the lservrc file will be located in the same directory the license server was installed to (the location from step 1).
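To verify what ReportInformation wrote, the same Excel object model can be used to read the values back. The following is a hedged sketch: the file path and cell positions are assumptions based on the sub-procedure above, not part of the original example.

```vbscript
' Sketch: read back the first key/value pair written by ReportInformation.
Dim ExcelObj, sheet, firstKey, firstValue
Set ExcelObj = CreateObject("Excel.Application")
ExcelObj.Workbooks.Open "C:\temp\info.xls"   ' assumed output path
Set sheet = ExcelObj.Sheets.Item(1)          ' the "Page Information" sheet
firstKey = sheet.Cells(1, 1).Value           ' column A holds the keys
firstValue = sheet.Cells(1, 2).Value         ' column B holds the values
MsgBox firstKey & " = " & firstValue
ExcelObj.Quit
Set ExcelObj = Nothing
```

Reading values back this way is also a convenient basis for a custom verification step against data a test exported earlier.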
Q. How to get the current QTP result folder path in Quality Center?

A. Use the Quality Center ExtendedStorage object, which handles storage operations on the server and the client.

Example:

Dim runExtendedStorage
Set runExtendedStorage = qcutil.CurrentRun.ExtendedStorage
CurrentResultPath = runExtendedStorage.ServerPath
Msgbox CurrentResultPath

In this example, CurrentResultPath will hold the current result folder path on the Quality Center server. For more information regarding the ExtendedStorage object, please refer to the Mercury Quality Center Open Test Architecture guide.

Q. How to programmatically upload QTP scripts to Quality Center? The user has a large number of QuickTest Professional (QTP) scripts he wants to upload to TestDirector for Quality Center (Quality Center). Is there a way to upload the tests programmatically so he does not need to manually connect to Quality Center and upload the tests?

A. Use QuickTest Professional's Automation Object Model (AOM) to connect to Quality Center and save (upload) the tests. You can use the TDConnection object to control the Quality Center connection, the Test object to specify the test to be uploaded, and the Test.SaveAs method to specify the path in Quality Center to upload the test to. Note: AOM statements are not designed to be executed from within a test script; this code should not be written in the Expert View of QTP.

Example (the relevant lines are marked with comments):

Dim qtApp 'As QuickTest.Application ' Declare the Application object variable
Dim qtTest 'As QuickTest.Test ' Declare a Test object variable
' Create the QTP Application object
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch ' Start QuickTest
qtApp.Visible = True ' Make the QuickTest application visible
' Connect to the Quality Center/TestDirector server
qtApp.TDConnection.Connect "", _
    "DOMAIN", "MyProject", "userID", "password", False
If qtApp.TDConnection.IsConnected Then ' If connection is successful
    ' Open the test in read-only mode
    qtApp.Open "C:\Temp\Project\QTPTestScript1", True
    ' Get the Test object
    Set qtTest = qtApp.Test
    ' Use the SaveAs method to upload the test to Quality Center
    qtTest.SaveAs "[QualityCenter] Subject\FolderName\QTPTestScript1"
    ' Disconnect from Quality Center
    qtApp.TDConnection.Disconnect
Else
    ' If connection is not successful, display an error message
    MsgBox "Cannot connect to Quality Center"
End If
' Exit QTP and release the objects
qtApp.Quit
Set qtTest = Nothing
Set qtApp = Nothing

The above example demonstrates how to upload one test. You can modify the code so that it uploads several tests at one time.

Q. Can object repositories be dynamically loaded at runtime in QTP 9.0?

A. QuickTest Professional 9.0 does not support dynamically loading object repositories at runtime.

Q. There is some confusion in the naming terminology for Mercury's SAP products. What is the difference between QuickTest Professional for mySAP.com Windows Client and QuickTest Professional with the mySAP.com add-in?

A. The following is a short summary of the terminology changes for Mercury's SAP testing products.

QuickTest Professional for mySAP.com Windows Client: Mercury initially released QuickTest for SAP (R/3), which was designed to only test SAP. This product went through a number of name changes as support for newer SAP Fronts was released: "QuickTest for SAP (R/3)" to "QuickTest Professional for SAP (R/3)" to the final name "QuickTest Professional for mySAP.com Windows Client." The overall application was the same; the only differences were defect fixes and support for newer SAP Fronts and Servers. The latest release, at the time of writing this article, is QuickTest Professional for mySAP.com Windows Client version 7.31, which supports up to SAP GUI Front 6.20. QuickTest Professional for mySAP.com Windows Client could only test against SAP.
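The single-test upload above extends naturally to a batch. The following is a sketch only: the server URL, folder names and test paths are illustrative assumptions, and it assumes each local test should keep its own name in Quality Center.

```vbscript
' Sketch: upload several tests in one connected session (paths are assumptions).
Dim qtApp, i
Dim localTests(2)
localTests(0) = "C:\Temp\Project\QTPTestScript1"
localTests(1) = "C:\Temp\Project\QTPTestScript2"
localTests(2) = "C:\Temp\Project\QTPTestScript3"
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
qtApp.TDConnection.Connect "http://qcserver/qcbin", _
    "DOMAIN", "MyProject", "userID", "password", False
If qtApp.TDConnection.IsConnected Then
    For i = 0 To UBound(localTests)
        qtApp.Open localTests(i), True
        ' Save each test under the same Quality Center folder
        qtApp.Test.SaveAs "[QualityCenter] Subject\FolderName\QTPTestScript" & (i + 1)
    Next
    qtApp.TDConnection.Disconnect
End If
qtApp.Quit
Set qtApp = Nothing
```

Connecting once and looping keeps the session overhead down compared with reconnecting per test.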
Q. What is the Recovery Scenario Manager?

A. The Recovery Scenario Manager is an exception handling and compound recovery tool. Using the recovery mechanism, you can define how QuickTest Professional (QTP) recovers from unexpected events and errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through the process of defining a recovery scenario: a definition of an unexpected event and the operation(s) necessary to recover the test run.

QTP checks for specified events to handle before it executes any action against the application. As the script replays, before each step (for example, before the next step clicks a link, or before a wait statement) QTP checks to see if any specified events have occurred. If no event has occurred, QTP just continues on with the script as expected. If an event has occurred, QTP executes the specified recovery scenario.

You can also set your recovery scenario as the default for all new tests. You can find detailed information on using the Recovery Scenario Manager in the QuickTest Professional User's Guide (Help -> QuickTest Professional Help); refer to the "Defining and Using Recovery Scenarios" section under "Creating Tests." Note: The Recovery Scenario Manager replaces and enhances the functionality of the Exception Editor found in earlier versions of QuickTest.

Q. Unable to call methods or access properties that belong to the WScript object?

A. QuickTest Professional (QTP) is unable to execute methods belonging to the WScript object, such as the Sleep method.

Example: wscript.sleep 1000

The above statement will generate the error below when executed: "Object required: 'wscript'"

Diagnosis: WScript is the root object of the Windows Script Host object model hierarchy. Under Windows Script Host it never needs to be instantiated before invoking its properties or methods, but it does not exist inside QTP, so the error is generated because the object has not been created.

QTP has an equivalent for the common WScript methods, and some can be called directly:

- CreateObject: this method can be called directly. Example: Set obj = CreateObject("QuickTest.Application"). Notice that "WScript." is not needed before the CreateObject method.
- Echo: use the msgbox function. Example: msgbox "Hello There"
- Sleep: use the Wait method. Example: Wait 10

If it is required to call WScript-specific methods, they can be placed in a .vbs file and the file can be executed from within QTP.

Example:
' The following would be in the .vbs file, "test.vbs":
wscript.sleep 1000
msgbox "hello there"
wscript.sleep 1000
wscript.echo "done"

' In the QTP script, you would execute the file by doing the following:
SystemUtil.Run "c:\temp\test.vbs"

Note: QTP does not wait for the .vbs file to finish executing. If you need the .vbs file to complete execution before continuing, synchronize on something within the .vbs file (a wscript.echo dialog, for example) or on something in your application that will change after the .vbs file has completed execution.

Example:
' Call a .vbs file which uses WScript methods
SystemUtil.Run "c:\temp\test.vbs"
' Wait for the final wscript.echo dialog to appear
While Dialog("Windows Script").Exist = False
    wait 1
Wend
Dialog("Windows Script").Close
' continue on with script
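As an alternative to synchronizing on the wscript.echo dialog, the .vbs file can drop a flag file when it finishes and the QTP script can poll for it. This is a sketch under that assumption; the file paths are illustrative, and only FileSystemObject and QTP's Wait are relied on.

```vbscript
' --- In test.vbs (run by Windows Script Host), at the very end: ---
' CreateObject("Scripting.FileSystemObject").CreateTextFile("C:\temp\done.flag").Close

' --- In the QTP script: ---
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
' Clear any stale flag from a previous run
If fso.FileExists("C:\temp\done.flag") Then fso.DeleteFile "C:\temp\done.flag"
SystemUtil.Run "c:\temp\test.vbs"
' Poll once per second until the .vbs file signals completion
While Not fso.FileExists("C:\temp\done.flag")
    Wait 1
Wend
' continue on with the script
```

This avoids depending on a visible dialog, which can be fragile in unattended runs.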
Q. Can descriptive programming be used with a Checkpoint object? The user would like to write something like ....Check CheckPoint("text:=mytext"). Can this be done?

A. The Checkpoint object, Checkpoint(<name>), does not support the use of descriptive programming. The name in the Checkpoint object is not the control in the application that the checkpoint is applied to: it refers to a non-text-based file that stores the checkpoint properties and the expected values, and that file needs to be created using QTP's UI. You can, however, replace the logical names of the object classes that the checkpoint is applied to with descriptive programming.

You can apply the same Checkpoint object to multiple controls (as long as the controls have the same attributes). If the values selected to be verified will change based on the object the checkpoint is replayed against, you will need to use regular expressions or parameterization to make sure those values update as needed for each control.

Example (the Checkpoint object "Edit" is applied to multiple WinEditor objects: "Edit", "Edit_2" and "Edit_3"):

Window("Notepad").WinEditor("Edit").SetCaretPos 0, 0
Window("Notepad").WinEditor("Edit").Type micCtrlDwn + "a" + micCtrlUp + micDel + "1"
Window("Notepad").WinEditor("Edit").Check CheckPoint("Edit")
Window("Notepad").WinEditor("Edit_2").SetCaretPos 0, 0
Window("Notepad").WinEditor("Edit_2").Type micCtrlDwn + "a" + micCtrlUp + micDel + "2"
Window("Notepad").WinEditor("Edit_2").Check CheckPoint("Edit")
Window("Notepad").WinEditor("Edit_3").Check CheckPoint("Edit")

The same test with the logical names of the objects replaced by descriptive programming (note the Checkpoint statements still use the checkpoint name "Edit"):

Window("Notepad").WinEditor("nativeclass:=edit", "text:=0").SetCaretPos 0, 0
Window("Notepad").WinEditor("nativeclass:=edit", "text:=0").Type micCtrlDwn + "a" + micCtrlUp + micDel + "1"
Window("Notepad").WinEditor("nativeclass:=edit", "text:=0").Check CheckPoint("Edit")
Window("Notepad").WinEditor("nativeclass:=edit", "text:=1").SetCaretPos 0, 0
Window("Notepad").WinEditor("nativeclass:=edit", "text:=1").Type micCtrlDwn + "a" + micCtrlUp + micDel + "2"
Window("Notepad").WinEditor("nativeclass:=edit", "text:=1").Check CheckPoint("Edit")
Window("Notepad").WinEditor("nativeclass:=edit", "text:=2").Check CheckPoint("Edit")

Q. QTP is unable to locate an object when descriptive programming is used? QuickTest Professional (QTP) is able to replay the test as long as descriptive programming is not used for the object.

Example: Browser("Welcome").Page("Manage Network_4").WebList("name:=perProtocolID[1]", "html tag:=SELECT").Select "ftp"

Diagnosis: If the description of the object contains characters that are used as wild cards in a regular expression (such as * ? + ] [ ), then QTP will treat the description as a regular expression. Use an escape character ( \ ) to instruct QTP to treat the special character as a literal character.

Example: Browser("Welcome").Page("Manage Network_4").WebList("name:=perProtocolID\[1\]", "html tag:=SELECT").Select "ftp"
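Descriptive programming can also be written with Description objects instead of inline property strings; Description.Create and ChildObjects are standard QTP features. The following sketch mirrors the Notepad example above (the property values are taken from that example):

```vbscript
' Sketch: the same WinEditor description built with Description.Create.
Dim oDesc
Set oDesc = Description.Create()
oDesc("nativeclass").Value = "edit"
oDesc("text").Value = "0"
Window("Notepad").WinEditor(oDesc).Type micCtrlDwn + "a" + micCtrlUp + micDel + "1"

' ChildObjects returns every object under a parent matching a description;
' an empty description matches all child objects.
Dim oChildren
Set oChildren = Window("Notepad").ChildObjects(Description.Create())
MsgBox "Child objects found: " & oChildren.Count
```

The Description-object form avoids the wild-card escaping issue above, since property values are taken literally unless the description is explicitly marked as a regular expression.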
Data Table

Two types of data tables:
- Global data sheet: accessible to all the actions.
- Local data sheet: accessible to the associated action only.

Usage:
DataTable("Column Name", dtGlobalSheet) for the Global data sheet
DataTable("Column Name", dtLocalSheet) for the Local data sheet

How can I check if a parameter exists in the DataTable or not?

code:
on error resume next
val = DataTable("ParamName", dtGlobalSheet)
if err.number <> 0 then
    'Parameter does not exist
else
    'Parameter exists
end if

How can I make some rows colored in the data table?

Well, you can't do it normally, but you can use the Excel COM APIs to do the same. The workaround is to share the spreadsheet and then access it using the Excel COM APIs. The code below illustrates some aspects of the Excel COM APIs:

code:
Set xlApp = CreateObject("Excel.Application")
Set xlWorkBook = xlApp.Workbooks.Add
Set xlWorkSheet = xlWorkBook.Worksheets.Add
xlWorkSheet.Range("A1:B10").Value = "text" 'Will set values of all 10 rows to "text"
xlWorkSheet.Range("A1:A10").Interior.ColorIndex = 34 'Change the color of the cells
xlWorkSheet.Cells(1, 1).Value = "Text" 'Will set the value of first row and first col
rowsCount = xlWorkSheet.Evaluate("COUNTA(A:A)") 'Will count the # of rows which have a non blank value in column A
colsCount = xlWorkSheet.Evaluate("COUNTA(1:1)") 'Will count the # of non blank columns in the 1st row
xlWorkBook.SaveAs "C:\Test.xls"
xlWorkBook.Close
Set xlWorkSheet = Nothing
Set xlWorkBook = Nothing
Set xlApp = Nothing

How can I save the changes to my DataTable in the test itself?

Well, QTP does not allow anything for saving the run-time changes to the actual data sheet. If we change anything in the Data Table at run-time, the data is changed only in the run-time data table; the run-time data table is accessible only through the test results. The run-time data table can, however, be exported using DataTable.Export or DataTable.ExportSheet.
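One common workaround for persisting run-time changes, building on Export above, is to export the run-time table at the end of the run and import it back at the start of the next one. A minimal sketch follows (the file path is an assumption; Export and Import are standard DataTable methods):

```vbscript
' Sketch: persist run-time Data Table changes across runs.
DataTable("ParamName", dtGlobalSheet) = "new value"   ' change made at run-time
DataTable.Export "C:\QTP\SavedData.xls"               ' write the run-time table to disk

' ...at the start of a later run, load the saved values back:
DataTable.Import "C:\QTP\SavedData.xls"
```

Note that Import replaces the whole run-time table, so the exported file needs to contain every sheet and column the test expects.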
Library Files (VBScript Files)

How do we associate a library file with a test?

Library files are files containing normal VBScript code. A file can contain functions, sub-procedures, classes, etc. To associate a library file with your script, go to Test -> Settings and add your library file on the Resources tab. You can also use the ExecuteFile function to include a file at run-time.

When should we associate a library file with a test, and when should we use ExecuteFile?

When we associate a library file with the test, all the functions within that library are available to all the actions present in the test. But when we use the ExecuteFile function to load a library file, its functions are available only in the action that called ExecuteFile. By associating a library with a test we can also share variables across actions (global variables, basically). Association additionally makes it possible to execute code as soon as the script runs, because while loading the script on startup QTP executes all the code at the global scope. We can use ExecuteFile inside a library file associated with the test to load files dynamically, and they will then be available to all the actions in the test.

SMART Identification

Smart Identification (SI) is an algorithm used by QTP when it is not able to recognize an object from its recorded description. A very generic example, as per the QTP manual: take a photograph of an 8-year-old girl and boy, and let QTP record the identification properties of the girl when she was 8. When both are 10 years old, QTP would not be able to recognize the girl from the recorded description. But something is still the same: there is only one girl in the photograph, and SI falls back on whatever still uniquely identifies the object. So it is a kind of PI (Programmed Intelligence), not AI.

When should I use Smart Identification?

Something people don't think about too much. You should disable SI while creating your test cases, so that you are able to spot the objects that are dynamic or inconsistent in their properties and make the script robust, so that it does not fail in case of small changes. When the script has been created, SI should be enabled, but the developer of the script should always check the test results to verify whether the SI feature was used to identify an object or not. Sometimes SI needs to be disabled for particular objects in the OR; this is advisable when you use SetTOProperty to change any of the TO properties of an object, and especially ordinal identifiers like index, location and creationtime.

Test and Run-time Objects

What is the difference between Test Objects and Run-Time Objects?

Test objects are the basic and generic objects that QTP recognizes and stores descriptions for. A run-time object is the actual object in the application to which a test object maps.

Can I change the properties of a test object?
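To make the association/ExecuteFile distinction concrete, here is a minimal sketch: a small utility function kept in a library file and loaded dynamically with ExecuteFile. The file path and the function itself are illustrative, not part of any QTP API.

```vbscript
' --- Contents of C:\QTP\Lib\StringUtils.vbs (a plain VBScript library file): ---
' Function GetStringBetween(sText, sStart, sEnd)
'     Dim p1, p2
'     p1 = InStr(sText, sStart) + Len(sStart)
'     p2 = InStr(p1, sText, sEnd)
'     GetStringBetween = Mid(sText, p1, p2 - p1)
' End Function

' --- In an action: load the library at run-time and call it ---
ExecuteFile "C:\QTP\Lib\StringUtils.vbs"
MsgBox GetStringBetween("id=12345;", "id=", ";")   ' extracts 12345
```

If the same file were associated via Test -> Settings -> Resources instead, GetStringBetween would be available to every action without the ExecuteFile call.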
Yes. You can use SetTOProperty to change the test object properties at run-time. It is recommended that you switch off Smart Identification for the object on which you use the SetTOProperty function.

Can I change the properties of a run-time object?

No (but yes, also). You can use GetROProperty("outerText") to get the outerText of an object, but there is no function like SetROProperty to change this property. For some objects, however, you can reach the native object: you can use WebElement().object.outerText = "Something" to change the property.

Where should I use a function, and where an action?

Well, the answer depends on the scenario. If you want to use the Object Repository feature, then you have to go for an action. If the functionality is not about automation as such, for example a utility like getting the string between two specific characters, then it is pure VBScript, not specific to QTP, and it should be done in a function rather than an action. Code specific to QTP can also be put into a function using descriptive programming. Beyond that, the decision of using a function or an action depends on what anyone would be comfortable using in a given situation.

What is a checkpoint?

A checkpoint is basically a point in the test which validates the truthfulness of a specific thing in the AUT. There are different types of checkpoints depending on the type of data that needs to be tested in the AUT: it can be text, an image/bitmap, database values, XML, etc.

What's the difference between a checkpoint and an output value?

A checkpoint only checks a specific attribute of an object in the AUT, while an output value outputs that attribute's value to a column in the data table.

How can I check if a checkpoint passes or not?

code:
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
if chk_PassFail then
    MsgBox "Check Point passed"
else
    MsgBox "Check Point failed"
end if

My test fails due to a checkpoint failing. Can I validate a checkpoint without my test failing due to the checkpoint failure?

code:
Reporter.Filter = rfDisableAll 'Disables all the reporting stuff
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check(Checkpoint("Check1"))
Reporter.Filter = rfEnableAll 'Enable all the reporting stuff
if chk_PassFail then
    MsgBox "Check Point passed"
else
    MsgBox "Check Point failed"
end if

Environment

How can I import environment variables from a file on disk?

Environment.LoadFromFile "C:\Env.xml"

How can I check if an environment variable exists or not?

When we use Environment("Param1").Value, QTP expects the environment variable to be already defined. But when we use Environment.Value("Param1"), QTP will create a new internal environment variable if it does not already exist. So, to be sure that the variable exists in the environment, try using Environment("Param1").Value.
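The existence check described above can be wrapped in a small helper function so the On Error handling stays in one place. A sketch:

```vbscript
' Sketch: returns True if the named environment variable is already defined.
Function EnvironmentExists(sName)
    Dim v
    On Error Resume Next
    v = Environment(sName).Value          ' fails if the variable is undefined
    EnvironmentExists = (Err.Number = 0)
    Err.Clear
    On Error GoTo 0
End Function

If EnvironmentExists("Param1") Then
    MsgBox Environment("Param1").Value
End If
```

Keeping the On Error scope inside the function avoids accidentally suppressing unrelated errors in the calling action.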
How to connect to a database?

code:
Const adOpenStatic = 3
Const adLockOptimistic = 3
Const adUseClient = 3
Set objConnection = CreateObject("ADODB.Connection")
Set objRecordset = CreateObject("ADODB.Recordset")
objConnection.Open "DRIVER={Microsoft ODBC for Oracle};UID=<UID>;PWD=<PWD>"
objRecordset.CursorLocation = adUseClient
objRecordset.CursorType = adOpenStatic
objRecordset.LockType = adLockOptimistic
ObjRecordset.Source = "select field1,field2 from testTable"
ObjRecordset.ActiveConnection = ObjConnection
ObjRecordset.Open 'This will execute your Query
If ObjRecordset.recordcount > 0 then
    Field1 = ObjRecordset("Field1").Value
    Field2 = ObjRecordset("Field2").Value
End if

Which MS Excel formulas is it possible to use in the DataTable? (I am searching for a formula for searching a field in the datatable.)

Answer: To work with an Excel sheet which has already been created, use the Excel COM API:

set obj = CreateObject("Excel.Application")
obj.Workbooks.Open "path of excel sheet"
'To get data from the excel sheet:
v = obj.Sheets(1).Cells(i, j) 'i means rows, j means columns
msgbox v
'To insert data into a particular cell:
obj.Sheets(1).Cells(i, j) = "vikram"
'To count the no. of rows:
v = obj.Sheets(1).UsedRange.Rows.Count
msgbox v
'To count the no. of columns:
v = obj.Sheets(1).UsedRange.Columns.Count
msgbox v

How to load the object repository at run time?

Answer: One of the new features of QTP 9.2 is Dynamic Management of Object Repositories, through the RepositoriesCollection object:

ADD: RepositoriesCollection.Add("D:\OR\test.tsr") adds the 'test.tsr' file during the run.
FIND: Pos = RepositoriesCollection.Find("D:\OR\test.tsr") returns a numeric value to the variable 'Pos'; this is the index value of the specified file.
COUNT: no = RepositoriesCollection.Count stores the number of repository items in 'no'.
ITEM: desc = RepositoriesCollection.Item(4) returns the path of the 4th object repository file.
MOVETOPOS: RepositoriesCollection.MoveToPos(2, 5) - here 2 is the current index and 5 is the new index position; in this case it moves the 2nd item to the 5th position.
REMOVE: RepositoriesCollection.Remove(5) removes the 5th object repository file from the collection.
REMOVEALL: RepositoriesCollection.RemoveAll removes all the items from the collection, making it empty.
To get data from an Excel sheet we use v = obj.Sheets(1).Cells(i, j), where i is the row and j is the column; to insert data into a particular cell we use obj.Sheets(1).Cells(i, j) = "vikram"; to count the number of rows we use v = obj.Sheets(1).UsedRange.Rows.Count and msgbox v; to count the number of columns we use v = obj.Sheets(1).UsedRange.Columns.Count and msgbox v.

Re: If the Object Repository is full, how do we load another Object Repository?
Answer
Hi, remember one thing: the OR never gets full. But if the situation comes, and you are using QTP 9.0 or above, you can go to the Object Repository Manager, create another object repository there, and associate it to the current test using the Associate Repositories tool. In this way you can associate any number of repositories with the same test. If you want to load a repository at run time only, use RepositoriesCollection.Add "<repository file path>" -- this executes the repository file at run time only, and if you use it a second time in the same test it will overwrite the old repository and enable the new repository you supplied.

Re: How can I call a reusable action in a function? Could anyone explain?
Answer
By using if-then-else statements, for example:
Public Function cal_action(strdata)
    If strdata = "" Then
        RunAction "Action1 [test1]", oneIteration
    Else
        RunAction "Action1 [test2]", oneIteration
    End If
End Function

Re: What is the difference between a functional spec and a business requirement specification?
Answer
A Functional Requirement Specification (FRS) is prepared by the project/system analyst using the Business Requirement Specification; it is a technical document whose main purpose is to describe the functionality of the project. The Business Requirement Specification is given by the client and prepared by the business analysts; it contains the business logic which is going to be implemented in the software and describes the requirements of the business (project).
Re: How can I count the list box elements in QTP? Using script, please explain. (Chandana)
Answer
The following script displays the number of items in the list box:
intFieldItemsCnt = CInt(Browser("Browsername").Page("Pagename").WebList("weblistname").GetROProperty("items count"))
msgbox intFieldItemsCnt
Or you can use this:
Count = Browser("title:=abc").Page("title:=abc").WebList("name:=xyz").GetROProperty("items count")
msgbox Count

Re: Take one example and write VBScript for a web application in QTP. Explain a descriptive programme with one example.
Answer
Example of a VBScript function (Fibonacci series):
Function fib(n)
    Dim fab(70)
    fab(0) = 0
    fab(1) = 1
    For j = 2 To n
        fab(j) = fab(j-1) + fab(j-2)
    Next
    For i = 0 To n-1
        msgbox fab(i)
    Next
End Function
Save this function in a .vbs file and call the function from a QTP Action.

Example of a descriptive programme (logging in to Rediff):
SystemUtil.Run("http://www.rediffmail.com")
Set BRO = Description.Create()
BRO("title").value = "Welcome to Rediff.com India"
BRO("name").value = "Welcome to Rediff.com India"
BRO("openedbytestingtool").value = true
Set PG = Description.Create()
PG("title").value = "Welcome to Rediff.com India"
PG("url").value = "http://www.rediff.com/index.html"
Set UN = Description.Create()
UN("name").value = "login"
UN("type").value = "text"
UN("html tag").value = "INPUT"
UN("Class Name").value = "WebEdit"
Set PWD = Description.Create()
PWD("name").value = "passwd"
PWD("type").value = "password"
PWD("html tag").value = "INPUT"
PWD("Class Name").value = "WebEdit"
Set Login = Description.Create()
Login("name").value = "Go"
Login("value").value = "GO"
Login("type").value = "submit"
Login("html tag").value = "INPUT"
Login("Class Name").value = "WebButton"
Browser(BRO).Page(PG).WebEdit(UN).Set "testingwithshyam"
Browser(BRO).Page(PG).WebEdit(PWD).Set "shyamshyam"
Browser(BRO).Page(PG).WebButton(Login).Click

You can also verify that an element exists before acting on it:
temp = Browser("Welcome to Rediff.com").Page("Welcome to Rediff.com").WebElement("ShockwaveFlash1").Exist
If temp = true Then
    Browser("Welcome to Rediff.com").Page("Welcome to Rediff.com").WebElement("ShockwaveFlash1").Click
End If

Re: Suppose 3 Excel sheets are there. We are trying to check the login credentials for a page: the user id comes from Excel 1, the password from Excel 2, and whether the page opened or not (the checkpoint result) should be stored in Excel 3. Please tell the script for the above query. (I faced this question in an IBM technical round.)
Answer
' Get the user id from Excel 1
Set objxls = CreateObject("Excel.Application")
objxls.Workbooks.Open "C:\Documents and Settings\ranantatmula\Desktop\data.xlsx"   ' path of the excel where the user id is stored
v1 = objxls.Cells(1, 1)   ' cell where the user id is stored
Browser("xxx").Page("yyy").WebEdit("username").Set (v1)
' Get the password from Excel 2
Set objxls1 = CreateObject("Excel.Application")
objxls1.Workbooks.Open "C:\Documents and Settings\ranantatmula\Desktop\data1.xlsx"   ' path of the excel where the pwd is stored
v2 = objxls1.Cells(1, 1)   ' cell where the pwd is stored
Browser("xxx").Page("yyy").WebEdit("pwd").Set (v2)
Browser("xx").Page("xx").WebButton("submit").Click
' Export the result to Excel 3
Set objxls3 = CreateObject("Excel.Application")
Set objworkbook = objxls3.Workbooks.Add
' Write a condition: if the page opened, record the checkpoint result in Excel 3
If <page opened> Then
    objxls3.Sheets(1).Cells(1, 1) = "PASS"
Else
    objxls3.Sheets(1).Cells(1, 1) = "FAIL"
End If

Re: How can I use and create library functions in QTP, and what is the process?
Answer
Creating a library is nothing but writing some .vbs files and associating them with the main script. The .vbs files can contain variable declarations, functions and procedures, which can then be referred to directly from your QTP scripts. It is beneficial because you can use the same library for different scripts, and reusability and maintenance become very easy. Steps:
1) Create the .vbs file(s) with your variables, functions and procedures.
2) In QTP (I am talking wrt QTP 9.x), go to File -> Settings, select the Resources tab, and associate the libraries by browsing to the .vbs file path.
3) Your script is now ready to use the library functions.

Re: How can the functions inside a DLL be called from QTP? I mean, how can I use those functions (inside the DLL) in QTP?
Answer #1
This part is a two-step process: declare the method using Extern.Declare, then call it.
Example:
Extern.Declare micHwnd, "FindSomeWindow", "user32.dll", "FindWindowA", micString, micString
Here:
micHwnd - the data type of the value returned by the method
"FindSomeWindow" - the user-supplied procedure name; you can set it to anything as long as it is valid syntax
"user32.dll" - the DLL from which you wish to call the method
"FindWindowA" - the actual method name inside the DLL
The last two arguments are the data types of the arguments that will be passed to the procedure.

Re: I want to open a text file, search for some specified text in it, and then replace that text with some other text. I found the text in the .txt file but do not know how to replace it.
Answer
Const ForReading = 1, ForWriting = 2
Dim fso, f
Set fso = CreateObject("Scripting.FileSystemObject")
Set f = fso.OpenTextFile("c:\working\replace.txt", ForWriting, True)
f.Write "QTP QTP RFT QTP QTP RFT QTP QTP"
Set f = fso.OpenTextFile("c:\working\replace.txt", ForReading)
ReadAllTextFile = f.ReadAll
newText = Replace(ReadAllTextFile, "QTP", "QTPRO")
Set textFile = fso.OpenTextFile("c:\working\replace.txt", ForWriting)
textFile.WriteLine newText
textFile.Close

Re: A question was asked in a company: I am testing a website on QTP, and all the time a new title bar appears on the next page. I am trying to use a regular expression under the keyword view.
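Besides associating libraries through File -> Settings, a library can also be loaded at run time with QTP's ExecuteFile statement. A small sketch -- the file path is illustrative, and it assumes the library defines a function named calculatelength:

```vbscript
' Load a VBScript library at run time instead of associating it in the test settings
ExecuteFile "C:\QTP\Lib\CommonFunctions.vbs"
' Any function defined in the library can now be called directly:
msgbox calculatelength("this is to test")
```

This is handy when different tests need different libraries and you want the choice made by the script itself rather than by the test settings.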
Re: Hello everyone. I am trying to parameterize a script. It is clear to me that we use the data table to enter various text data, but the Expert View appears unchanged and an error is generated. If I want to change the radio button per iteration (e.g. for the first iteration I selected radio button 'A' and for the second iteration I want to use radio button 'B'), how can I make that happen? Please don't give a yahoomail example if it does not clear the doubt; do you have a better resolution? Kindly explain in detail.
Answer #1
In the datatable, add a column for the radio button field, e.g. column name "FieldName1", and retrieve the data from the datatable:
dtFieldName1 = DataTable("FieldName1", dtLocalSheet)
'For Web based
Browser().Page().WebRadioGroup("RadioGroup").Select dtFieldName1
'For Windows
If dtFieldName1 = "A" Then
    Browser().Page().WinRadioButton("A").Set
ElseIf dtFieldName1 = "B" Then
    Browser().Page().WinRadioButton("B").Set
ElseIf dtFieldName1 = "C" Then
    Browser().Page().WinRadioButton("C").Set
End If
This way you can parameterize the data for the RadioButton.

(Recorded Yahoo! Mail example, for reference:)
Browser("Yahoo! Mail - The best").Page("Yahoo! Mail - The best").WebEdit("login").Set "TestUser"
Browser("Yahoo! Mail - The best").Page("Yahoo! Mail - The best").WebEdit("passwd").SetSecure "460205b87f0543"
Browser("Yahoo! Mail - The best").Page("Yahoo! Mail - The best").WebButton("Sign In").Click
Browser("Yahoo! Mail - The best").Page("Yahoo! Mail TestUser@").Link("Mail").Click
Browser("Yahoo! Mail - The best").Page("Yahoo! Mail TestUser").Link("Addresses").Click
Browser("Yahoo! Mail - The best").Page("Yahoo! Address Book -").Sync

Answer (to the changing-title-bar question): since the title bar is completely changed on the next page, use a regular expression in the object description, e.g. Browser("Yahoo! Mail .*").Page("Yahoo! Mail .*"), so that one description matches every page title.

Re: How can we use CreateObject("Wscript.Shell") in QTP, and what is its definition and purpose?
Answer #1
"Wscript.Shell" is useful for performing keyboard operations. E.g., for pressing "F5":
Set Object = CreateObject("Wscript.Shell")
Object.SendKeys "{F5}"

Re: How can we use CreateObject("Scripting.FileSystemObject") in QTP, and what is its definition and purpose?
Answer #1
Set fso = CreateObject("Scripting.FileSystemObject")
The purpose of this one is creating a FileSystemObject, which is used, for example, to get test data from a flat file (Notepad) in a data-driven framework.

Re: How will you send values to a cell in a webtable using QTP?
Answer
If you want to set a value in a cell, you cannot set it on the WebTable object directly; you do it on the cell's child object. Using the GetCellData method you can retrieve the cell data but cannot set a value:
val = Browser().Page().WebTable().GetCellData(row, column)
Using the ChildItem method you can fetch the child object of the cell and set a value on it:
Set ChildObj = Browser().Page().WebTable().ChildItem(row, col, ChildObjClassname, index of the child object)
ChildObj.Set "abcd"
E.g.
Set ChildObj = Browser("ab").Page("cd").WebTable("de").ChildItem(row, col, "WebEdit", 0)
ChildObj.Set "abcd"
So make sure what the child object of that cell is, i.e. WebEdit or WebCheckBox or WebRadioButton etc. If the cell has multiple similar child objects, then you need to take care of the object index.

Re: How can I open 5 multiple browsers at once through QTP VBScript, and log in with 5 different credentials? I already tried with this code, but it is entering the credentials only for the first browser, and I am using the datatable to parameterize.
Answer
Using data parameterization, you can do it either way:
Method1:
var_RowCount = DataTable.GetSheet("Action1").GetRowCount
For i = 1 To var_RowCount
    DataTable.SetCurrentRow(i)
    SystemUtil.Run "iexplore", "www.abcd.com"
    uname = DataTable("Username", dtLocalSheet)
    psd = DataTable("Password", dtLocalSheet)
    Browser("index:=0").Page("title:=.*").WebEdit("name:=Usname").Set uname
    Browser("index:=0").Page("title:=.*").WebEdit("name:=Pwd").Set psd
    Browser("index:=0").Page("title:=.*").WebButton("name:=Go").Click
Next
Method2:
For i = 1 To 5
    SystemUtil.Run "iexplore", "www.abcd.com"
Next
For i = 1 To 5
    Browser("index:=" & i).Page("title:=.*").WebEdit("name:=Usname").Set uname
    Browser("index:=" & i).Page("title:=.*").WebEdit("name:=Pwd").Set psd
    Browser("index:=" & i).Page("title:=.*").WebButton("name:=Go").Click
Next
Re: How can I open 5 multiple browsers at once through QTP VBScript?
Answer
Give the URLs in a URLS.txt document and save. Then:
Set a = CreateObject("Scripting.FileSystemObject")
Set b = a.OpenTextFile("c:\ray\URLS.txt", 1, true)
i = 1
Do While Not b.AtEndOfStream
    c = b.ReadLine
    SystemUtil.Run "iexplore"
    Browser("name:=Google").Navigate c
    Browser("text:=.*").WaitProperty "text", c, 10000
    i = i + 1
Loop
Or else try this, passing the URL straight to SystemUtil.Run:
Set a = CreateObject("Scripting.FileSystemObject")
Set b = a.OpenTextFile("c:\ray\URLS.txt", 1, true)
i = 1
Do While Not b.AtEndOfStream
    c = b.ReadLine
    SystemUtil.Run c
    Browser("text:=.*").WaitProperty "text", c, 10000
    i = i + 1
Loop

Re: What is the use of "Define New Test Object" in QTP 9.1? When should we use it? Explain.
Answer
In some situations we have to prepare the scripts before the application is delivered for testing; at that time you cannot capture the objects from the application. What you can do is create the test objects manually, based on what the test case says, and prepare the scripts against them. Another situation: when you are capturing objects into the Object Repository, some of the objects may be invisible (they become visible only in certain situations); you can create a user-defined test object even though the object is invisible. Once the application is ready, we debug the script to check whether QTP identifies the real objects with the user-defined test object descriptions; if it is not identifying them, you can change the properties of those objects.

Re: How can we test an XML using QTP? I have been to an interview where they gave me an XML which had 15 values, and they gave those values on a separate page. They asked me to write a QTP script to check that the XML has those particular 15 values.
Answer
My understanding is that the values are stored in the XML file in terms of "Tag" and "Value"; a particular value is assigned to a particular tag, e.g.:
<Environment>
 <Variable>
  <Name>servername</Name>
  <Value>10.22.33.22</Value>
 </Variable>
</Environment>
If the XML format is like this, then we can write the script:
Environment.LoadFromFile("c:\data.xml")
dataToCompare = Environment.Value("servername")
Now we can compare the above value with the required data on your separate page.
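A minimal sketch of how a manually defined test object might be used and later adjusted. The window and button names and the property value are hypothetical; SetTOProperty is QTP's method for changing a test object's description at run time:

```vbscript
' Hypothetical objects defined manually via Define New Test Object,
' written before the application was delivered
Window("wndOrders").WinButton("btnSubmit").Click

' If the delivered application exposes a different property value,
' adjust the test object's description at run time and retry:
Window("wndOrders").WinButton("btnSubmit").SetTOProperty "text", "Submit Order"
Window("wndOrders").WinButton("btnSubmit").Click
```

This lets the script survive small differences between the assumed and the delivered object descriptions without re-capturing the repository.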
Note: The above solution is also perfect. In my solution the constraint is that you have to know the names of all 15 tags.

Re: How can I delete the Excel process from the Task Manager using QTP?
Answer #2
SystemUtil.Run "taskmgr"
Dialog("Windows Task Manager").WinListView("SysListView32").Select "EXCEL.EXE"
Dialog("Windows Task Manager").WinButton("End Process").Click
Dialog("Task Manager Warning").WinButton("Yes").Click

Re: Please, anyone, tell me the script in QTP to find the number of links and images in a web page.
Answer #2
'For Links
Dim oLinkDes
Set oLinkDes = Description.Create()
oLinkDes("micclass").value = "Link"
a = Browser("title:=.*").Page("title:=.*").ChildObjects(oLinkDes).Count()
msgbox a
'For Images
Dim oImageDes
Set oImageDes = Description.Create()
oImageDes("micclass").value = "Image"
a = Browser("title:=.*").Page("title:=.*").ChildObjects(oImageDes).Count()
msgbox a

Re: Hi all, need one help. I want to scroll down and up, but the whole page is taken as a WebTable; I need to scroll down and up using the scroll tab. Anybody knows, please help. It's urgent. Regards, Balaji.
Answer #3
First confirm whether the scroll bar is the page scroll bar or not; you have to perform the operations on the page where the scrolling is needed. You can simulate the key presses:
Browser("...").Page("...").Refresh
Set oObject = CreateObject("Wscript.Shell")
oObject.SendKeys "{UP}"    'For pressing the up arrow
oObject.SendKeys "{DOWN}"  'For pressing the down arrow
Set oObject = Nothing

Re: Hi All. I am a beginner in QTP and have understood the basics of QTP by going through the tutorial. I am trying to automate an HRMIS application. What is the right approach to automate this application? For example, the application is developed with Java.
Answer
1. First identify the scenarios and test cases to be automated.
2. Identify and install the add-in that the application supports; in this case the add-in should be Java.
3. Prepare the test data.
4. Separate the reusable scenarios and the module-specific scenarios.
5. Do naming conventions for all scenarios.
6. Now come to QTP. Capture the required objects into the Object Repository (sometimes you have to use Descriptive programming also), and save that object repository as a shared repository which we can use across all modules (do this by using the Object Repository Manager window, not the repository window).
7. Associate the shared repository to the test.
8. If you are getting any unexpected pop-ups, use the Recovery Scenario Manager.
9. Paste the required test data into the Data Table (the Data Table is used only at a basic level; in general the test data will be in an external Excel sheet).
10. Use Environment variables to store global variables (global variables meaning those which can be used in all the scripts, by all the testers, and in all environments). These variables should be stored in an external XML file.
11. Take one scenario, identified for automation in step 1, and write the script for it step by step.
12. Do not use any checkpoints; for validation of every step use the Exist method, GetROProperty, and the different types of operators.
13. If you are using some lines of script many times, it is better to put that script into functions.
14. Make sure that each script contains only one test case or scenario.
15. If a scenario is reusable, make that automation script a reusable action; else do not.

Re: How can we close all web browsers which are opened on our desktop?
Answer #3
Please use the following method: SystemUtil.CloseProcessByName. With this method not only a web browser but any process can be closed. For example, write this code in the QTP editor:
' The 3 statements below will invoke 3 web browsers
SystemUtil.Run "iexplore.exe"
SystemUtil.Run "iexplore.exe"
SystemUtil.Run "iexplore.exe"
' The statement below will close all 3 opened web browsers
SystemUtil.CloseProcessByName "iexplore.exe"

Re: What is the method for maximizing the application while using descriptive programming in QTP?
Answer #2
Browser("PropertyName:=PropertyValue").FullScreen
For example: Browser("name:=Google").FullScreen

Re: What is the purpose of an action parameter, and how do we create it? Also, how do we use the output of one function as input to another function? Please give the script for the above two.
Answer #1
Action parameters can be created in this way: right-click on the Action in the Keyword View, click on Action Properties, and select the Parameters tab.
*********** Action input parameters *********
Click on "+" and add the input variables with their respective data types and their values.
*********** Action output parameters *********
Click on "+" and add the output variables (o/p var name) with their respective data types. Note: here there is no Default Value tab.
Once you have created the action output parameters, write in the Expert View of the same action:
Parameter("o/p var name") = "pass"   (you can give any value)
You can then call this action from any other action within the same or a different test, passing the parameters from one action to another. Here is an example.
'Action1 script
Dim var
var = Parameter("t")    'Input parameter
msgbox(var)
Parameter("a") = 89     'Output parameter
'Create another action, action2:
RunAction "Action1", oneIteration, "20", x
Msgbox(x)    'The first action's output parameter comes to action2 automatically
ExitRun

Re: Is it possible to return multiple values from a function? Then how?
Answer
One way of getting multiple values from a function is by passing values "ByRef" to the function and storing the return values in one or more of the arguments passed to the function. Ex:
mysum = 0
msgbox mysum   ' returns 0
mysub = 0
msgbox mysub   ' returns 0
Function fnmul(ByRef mysum, ByRef mysub, x, y)
    mysum = x + y
    msgbox mysum   ' returns 9
    mysub = x - y
    msgbox mysub   ' returns 1
End Function
Call fnmul(mysum, mysub, 5, 4)
msgbox mysum   ' returns 9
msgbox mysub   ' returns 1

Re: Is there a function to find the number of occurrences of sub strings within a string?
Answer
The below function returns the number of occurrences of a substring within a string:
Function SubStrOccur(Str, SubStr)
    Occr = Split(Str, SubStr)
    SubStrOccur = UBound(Occr)
End Function
Scenario 1:
Str = "SairamSairam Sairam"
SubStr = "Sai"
msgbox "Given SubString(" & SubStr & ") appeared " & SubStrOccur(Str, SubStr) & " time(s) in given String(" & Str & ")"
The function returns 3 (because "Sai" appears 3 times in the given string).
Scenario 2:
Str = "SairamSairam Sairam"
SubStr = "sai"
msgbox "Given SubString(" & SubStr & ") appeared " & SubStrOccur(Str, SubStr) & " time(s) in given String(" & Str & ")"
The function returns 0 (because "sai" in small letters does not appear in the given string).

Re: Hi, I have one issue while automating a script using QTP. I want to scroll down and cannot do it; even the scroll-down method is not working, and the object is taken as a WebElement.
Answer
I think this can be done by creating the shell object and then using the SendKeys option (I'm just guessing); maybe the code goes like this:
Set myshell = CreateObject("Wscript.Shell")
myshell.SendKeys "{DOWN}"    ' take the cursor down
myshell.SendKeys "+{F10}"    ' simulate a mouse right-click by sending Shift+F10
The other way of doing this would be "analog recording", which captures the mouse movements (like scroll, click, etc.). Note that this is not specific to QTP and can be done in pure VBScript.

Re: How many tables are created during recording in QTP?
Answer
We can create 255 local tables, and along with the one global sheet the total = global + local = 1 + 255 = 256.

Re: How do you call functions in QTP? Write a function to calculate the number of characters in a word.
Answer
calculatelength("this is to test")
Function calculatelength(string)
    calculatelength = Len(string)
End Function

Re: How can we redirect QTP results into an Excel sheet after the execution?
Answer
You have to create a function for that. For example, pass in the values strTCNo = 23 (the test case number) and strResult = "Pass" (the result):
Public Function FnTcStatus(strTCNo, strResult)
    DtRwCnt = DataTable.GetSheet("Result").GetRowCount
    For l = 1 To DtRwCnt
        DataTable.GetSheet("Result").SetCurrentRow(l)
        If DataTable("A", "Result") = strTCNo Then
            DataTable("TC_Status", "Result") = UCase(strResult) & "_" & Now()
        End If
    Next
    strTCNo = ""
    strResult = ""
End Function

Re: In the Excel sheet of QTP, suppose I entered 10 numbers randomly. I have to get the total of the ten numbers in the 11th row using QTP. Give the code for it using QTP.
Answer
Precondition: enter 10 random numbers in the datatable of QTP, then write down the following code:
If the data is in an Excel sheet, then you first have to import the data; this example uses the QTP data table:
total = 0
For i = 1 To 10
    DataTable.SetCurrentRow(i)
    total = CInt(DataTable.Value(1, 1)) + total
Next
DataTable.SetCurrentRow(11)
DataTable.Value(1, 1) = total
The total of the 10 random numbers gets posted in the run-time data table. You can view it through Automation -> Results -> Run-Time Data Table.

Re: How do we change the properties of an object during run time? Please give the code using the Flight application.
Answer
We can change the property value of an object in the Object Repository at run time by using SetTOProperty. For example, if we want to change the "text" property of the Submit button to "save" at run time, we should use this method to let the tool know that the property has changed, so that object identification issues do not occur.
Syntax:
Window("Window Name").Page("Page Name").WebButton("submit").SetTOProperty "text", "save"
The next time we execute this script, the statement
Window("Window Name").Page("Page Name").WebButton("submit").Click
faces no difficulty in clicking on the Save button, as the "text" property value of the submit button has been changed to "save". Similarly we can change the property values of other objects in the Object Repository using SetTOProperty.

Re: What is the (keyboard) command to switch from Expert View to Keyword View?
Answer
To move between the Keyword and Expert views, apply Ctrl+PageUp / Ctrl+PageDown.

Re: How are environment variables used in real-time projects (testing)? What answer should be given about this in an interview? Need a brief real-time explanation.
Answer
An environment variable is a global variable; you can use it wherever you want.
Defining an environment variable: Environment("variablename") = 34
Using an environment variable: msgbox Environment("variablename")
Note: 1) it is used when we want a value across different actions; 2) it is used when a value generated in one function is used in a different function. Ex:
Public Function First_Fun(a, b)
    b = a + b
    Environment("c") = b
    Second_Fun()
End Function
Public Function Second_Fun()
    msgbox Environment("c")
End Function
First_Fun a, b

Re: How do we generate numbers between two numbers, suppose between 500 and 1000, in sequence using VBScript?
Answer
Using loops in VBScript:
For i = 501 To 1000
    Reporter.ReportEvent micPass, "Value: " & i, "Number between 500 to 1000"
Next
'gives the output as 501, 502, 503, ...
If you want to generate a random number between 500 and 1000, use the function RandomNumber (an inbuilt function of QTP, not a VBScript function), e.g.: var_Num = RandomNumber(500, 1000)

Re: DIFFERENT RUN MODES IN QTP: 1 - verify mode, 2 - update mode, 3 - debug mode. When we run a test, QTP shows two options: 1) save the test result in a new folder; 2) save the test in a temporary folder. My question is, in which situation do we use these modes? Explain with an example.
Answer
Verify mode: this mode is used when we want to compare the result of our test in the future with the same test run. Option 1) above is the verify mode; option 2) is an example of debug mode.
Update mode: used when we want to update the value of a checkpoint in the test. Example: while recording the test the date was 11/1/09, and when we run the test later the date is different, so we should run it in update mode, otherwise the test run will throw an error.

Re: Can anyone please tell me all the shortcut keys used in QTP 9.2?
Answer
--- F8 to add a new step below the currently selected step.
--- SHIFT+F8 to add a new step after a conditional or loop block.
--- F7 to use the Step Generator to add a new step below the selected step.
--- The TAB and SHIFT+TAB keys move the focus left or right within a single row, unless you are in a cell that is in edit mode.
--- ENTER to exit edit mode.
--- When a cell containing a list is selected: you can press SHIFT+F4 to open the list for that cell; you can change the selected item by using the up and down arrow keys (the list must be open before you can use the arrow keys); and you can type a letter or sequence of letters to move to a value that starts with the typed letters (the typed sequence is highlighted in white).

Re: How to compare 2 Excel sheets in QTP? Please write the VBScript for the comparison.
Answer
Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = True
Set objWorkbook1 = objExcel.Workbooks.Open("C:\Documents and Settings\user\Desktop\sriya1.xls")
Set objWorkbook2 = objExcel.Workbooks.Open("C:\Documents and Settings\user\Desktop\sriya2.xls")
Set objWorksheet1 = objWorkbook1.Worksheets(1)
Set objWorksheet2 = objWorkbook2.Worksheets(1)
For Each cell In objWorksheet1.UsedRange
    If cell.Value <> objWorksheet2.Range(cell.Address).Value Then
        cell.Interior.ColorIndex = 3   'Highlights in red color if any cell differs
    Else
        cell.Interior.ColorIndex = 0
    End If
Next
Set objExcel = Nothing

Re: How to write a script in QTP for field validation? Example: a password field accepting a range of 8-20 characters only. How do we write the script?
Answer
Dim strTextValue
strTextValue = Browser("Browser").Page("Page").Frame("Frame").WebEdit("Password").GetROProperty("value")
If Len(strTextValue) < 8 Then
    Reporter.ReportEvent micFail, "Check for the Field validation", "The Field is accepting a value shorter than the lower boundary"
ElseIf Len(strTextValue) > 20 Then
    Reporter.ReportEvent micFail, "Check for the Field validation", "The Field is accepting a value longer than the upper boundary"
Else
    Reporter.ReportEvent micPass, "Check for the Field validation", "The Field is not accepting values shorter or longer than the boundary"
End If

Re: How to call script1 into script2?
Answer
1. Call script1 in script2 using "Call to Copy of Action" or "Call to Existing Action".
2. Make script1's action an external action and save the test; we can then call the external action in any action in any test.
Syntax: RunAction "Action Name [Reusable Action]"
Note: if we use a plain (non-reusable) action, we can use it within the test but not in other tests.

Re: How to connect to a database?
Answer
QTP supports interaction with a database using ADODB:
Set con = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
'Create a DSN (DataSourceName) for your database and specify the DSN here
'You can also establish the connection using a provider name
con.Open "DSN=MyDsn;UID=ravi;PWD=sample123"
'Insert data into the database
con.Execute "Insert into emp values(2500, 'Ravi')"
'Retrieve data from the database
rs.Open "select * from emp", con
While Not rs.EOF
    msgbox "ID = " & rs(0) & " Name = " & rs(1)
    rs.MoveNext
Wend

Re: How can you find the local host name by using QTP?
Answer
Hi, try this code to know your user name and local host name:
x = Environment.Value("UserName")
msgbox x
Set sys = CreateObject("ADSystemInfo")
msgbox "UserName: " & sys.UserName
msgbox "Computer: " & sys.ComputerName

Re: What is the ChildObjects method? When do we go for it, and how do we use it?
Answer
The ChildObjects method returns the collection of child objects contained within the object. Example of how to use this method:
Public Function CheckObjectDesription(parent, descr)
    Dim oDesc
    ' Create a description object from a string like "name:=Go,html tag:=INPUT"
    Set oDesc = Description.Create()
    arProps = Split(descr, ",")
    For i = 0 To UBound(arProps)
        arProp = Split(arProps(i), ":=")
        If UBound(arProp) = 1 Then
            PropName = Trim(arProp(0))
            PropValue = arProp(1)
            oDesc(PropName).Value = PropValue
        End If
    Next
    ' Get all child objects with the given description
    Set children = parent.ChildObjects(oDesc)
    If children.Count = 1 Then
        CheckObjectDesription = "Object Unique"
    ElseIf children.Count = 0 Then
        CheckObjectDesription = "Object Not Found"
    Else
        CheckObjectDesription = "Object Not Unique"
    End If
End Function

Re: How do we connect an Oracle or SQL Server database to QTP? I am using QTP 9.2 and a SQL Server database. I am new to QTP and this would be of great help. Thanks.
Answer
'Declaration for SQL
Dim objConnection, objRecordset, count1
'Connecting to the database
Set objConnection = CreateObject("ADODB.Connection")
objConnection.Open "Driver={SQL Server};" & "Server=YOUR SQL SERVER NAME;" & "Database=DB NAME;" & "UID=USERID;" & "PWD=DB PASSWORD;"
'Running the query
sql1 = "GIVE UR QUERY HERE"
'Create a record set for the query
Set objRecordset = CreateObject("ADODB.Recordset")
objRecordset.Open sql1, objConnection
'As this is a back-end process, you cannot view the result set, so you can place the result in the data table using the following steps:
'Retrieving data from the database to the data table
count1 = 1
While Not objRecordset.EOF
    DataTable.SetCurrentRow (count1)
    DataTable("C", 1) = objRecordset("nbr").Value
    objRecordset.MoveNext
    count1 = count1 + 1
Wend

Re: What is the TD plug-in? For what purpose is it used? I need to connect QC 9.2 with QTP 9.2 and run some scripts in QTP having the QC connectivity. Kindly let me know the detailed steps.
Answer
Steps to upload the test scripts into QC:
1) Open the test script to be uploaded in QTP.
2) Go to the File menu and click on "Quality Center Connection".
3) Enter the valid QC URL.
4) Enter a valid username/password and domain name/project.
5) Click on Connect.
6) Go to the File menu and click on the Save As button.
7) Click on the Save to Quality Center button.
8) In the wizard, select the QC folder where the scripts will be uploaded.
9) Click on the Save button.
The script can then be uploaded successfully.
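Once connected, the connection can also be checked from within a script via QTP's reserved QCUtil object; a small sketch (the event names in the report are just examples):

```vbscript
' Check the Quality Center connection from within a test
If QCUtil.IsConnected Then
    Reporter.ReportEvent micPass, "QC connection", _
        "Connected to server: " & QCUtil.QCConnection.ServerName
Else
    Reporter.ReportEvent micFail, "QC connection", "Not connected to Quality Center"
End If
```

This is useful as a guard at the start of tests that read or write QC data, so they fail fast with a clear message instead of erroring midway.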
Over and above features provided with QTP 8. but different test object descriptions. or application area) contains a resource that cannot be found. and so forth. QuickTest opens the Missing Resources pane and lists the missing resource(s). the content is copied to a new. central location. You can also store test objects in one or more shared object repositories that can be used in multiple actions and components. maintaining and organizing repositories.0 provides following features: Object Repository Manager:You can use the Object Repository Manager to manage all of the shared object repositories in your organization from one. which is unique for each action and component. parameterizing test object property values. modules. or a testing document may use a object repository parameter that does not have a default value. You can also use the Object Repository Merge Tool to merge objects from the local object repository of one or more actions or components into a shared object repository.2 to 9. QTP 9. and then call their functions from your test or component. you can use a combination of objects from the local object repository and one or more shared object repositories. You choose the combination that matches your testing needs. You can import and export files either from and to the file system or a Quality Center project (if QuickTest is connected to Quality Center). enabling you to map a missing resource to an existing one. component.0 . If any conflicts occur during the merge. as required Re: How to Display last item of a Combobox by using QTP? Answer . QTP 9. subroutines. When you merge objects from two source object repositories. XML Object Repository Format:QuickTest now enables you to import and export object repositories from and to XML format. or remove it from the testing document. Object Repository Merge Tool:You can use the Object Repository Merge Tool to merge the objects from two shared object repositories into a single shared object repository. 
if two objects have the same name and test object class. Dynamic Management of Object Repositories:QuickTest now has a new RepositoriesCollection reserved object that you can use to programmatically manage the set of object repositories that are associated with an action during a run session. Handling Missing Actions and Resources: Whenever a testing document (test. for example.0 surya Answer Over and above features provided with QTP 8. a testing document may use a shared object repository that cannot be found.2 . and then view your movie from the Test Results window. modifying objects and their descriptions. This enables you to compare the content of the repositories. In all of these cases. Each object repository opens in its own resizable document window. QuickTest indicates this in the Missing Resources pane. target object repository. a test may contain an action or a call to an action that cannot be found. and so forth. which enables you to create and edit function libraries containing VBScript functions. This includes adding and defining objects. Function Library Editor: QuickTest now has a built-in function library editor. You can open multiple object repositories at the same time. Multiple Object Repositories per Action or Component:QuickTest provides several options for storing and accessing test objects. ensuring that the information in the source repositories remains unchanged. You can store the test objects for each action or component in its corresponding local object repository. For example. to copy or move objects from one object repository to another. and importing and exporting repositories in XML format.2 provides following features: Mercury Screen Recorder :Capture your entire run session in a movie clip or capture only the segments with errors. the relevant objects are highlighted in the source repositories. Alternatively. This enables you to modify object repositories using the XML editor of your choice and then import them back into QuickTest. 
and the Resolution Options pane details the conflict and possible resolutions.Re: what are the difference between qtp8.
Calling User defined Functions:In QTP --> type Useer defined function name along with argument values --> file menu --> settings --> Resources tab --> click + icon -->browse the path of the file --> click apply --> click OK --> Resources menu in QTP --> Associate Repositories --> Click + icon --> browse the path of references file --> select current action --> click OK. First . this is an important factor in effort estimation. MS. Answer . how you are going to develop the qtp scripts.Getitem(lastitemindexnumber) count") msgbox y Re: how to create user defined functions in QTP? can any one expalin me with example.Notepad".Word etc. Creating User defined Functions:Open QTP and Application --> type function header along with Unique function name and arguments --> Record repatable operations in application as function body --> follow above navigation to create more then one functions --> Save that functions in a file using 3rd party s/w with Extension .page("micclass:=Page).winedit("Password").i think . What factors should be considered while calculating the estimation time for QTP.y) Dialog ("Login").. Example:( Creating User defined Function ) Public function login(x. Repository or mix of both .."Location:=1").are u going to use complete DP or Obj. 8)know the product(business) knowledge with us Re: Could some one help me the difference between GetTOProperty and SetTOProperty and when we will use these properties.vbs( Copy the script. Can anyone help me in esimating time for an application using QTP tool.Click End Function Re: If there are 10 notepads opened on desktop. and what is meant by SetToProperties.) --> now goto Resource menu in QTP --> object Repository --> file menu --> Export Local Objects --> Enter file name with Extension . genarated in QTP and paste in Notepad. how can we close a particular 2nd notepad If you want to close the second Notepad. of resources 2)No of complex test cases 3)No of medium test cases 4)No of small test cases 5)Aprox. 
Re: Hi.Close or by using Descriptive Programming you can use: Window("text:=Untitled . 1) No.tsr --> click Save.page("micclass:=Page).encrypt (y) dialog ("Login").set x Dialog ("Login"). time required for each type of test case 6)Identify a sample set of test cases of three types and see that how much time it takes for the automation 7)Identify the technologies used for the development and knowledge avialabe with us.few of the factors which i consider for estimation ..setsecure crypt. you can use the below script: Window("Notepad_2")..close Here the Location value keeps changing..winbutton("OK").Getroproperty("items count") msgbox x ' Here u knows how many items r there in combobox ' after that which item u want u can get it y=Browser("micclass:=Browser).winedit("User name").Example: x=Browser("micclass:=Browser).
Popup Text_MsgBox. button) If u wants to know any property of that "OK" button u can get this GetTOProperty method.. SetToProperties--for example the "OK" button 2 mandatory properties are stored in OR.e. by using the time out property of the vbscript. Failure to identify complex functionalities and time required to develop those functionalities. All resources like staff. Effective analysis of software risks will help to effective planning and assignments of work. if u want to add one more property at Run Time that is temporarily. Schedule risks mainly affect on project and finally on company economy and may lead to project failure.. Text_MsgBox = "Test" Timeout = "2" 'in seconds Set WshShell = CreateObject("WScript.Actually GETTOProperty stands for "Get Test Object Property". GetToProperty --i will one example one "OK" button is there that button contain some some properties i. Unexpected project scope expansions. you can set the property using SetToProperties method. if we want to change the property values during run time(Only). Re: Can we set a timeout for the msgbox. All the objects which are identified & stored in Object repository by QTP during design time are "Test Objects" In order to retrieve the above test object's properties --.. Risk identification and management are the main concerns in every software project. Answer Ya we can make the msg box disappear withou even clicking on the msgbox. 2).Shell") wshShell. Categories of risks: Schedule Risk:Project schedule get slip when project tasks and schedule release risks are not addressed properly. skills of individuals etc.then we will use this "setTOProperty" 1). 24 properties but some of mandatory properties are stored in to the OR. Schedules often slip due to following reasons: 1) 2) 3) 4) Wrong time estimation Resources are not tracked properly. systems. failed system or some external events risks. Budget Risk: 1) Wrong budget estimation. seTOProperty Stands for "Set Test Object Property". 
Causes of Operational risks: 1) 2) 3) 4) Failure to address priority conflicts Failure to resolve the responsibilities Insufficient resources No proper subject training .For the above test object.we will use GETTOProperty method. 2) Cost overruns 3) Project scope expansion Operational Risks:Risks of loss due to improper process implementation.I want the msgbox to disappear after 2 seconds during the script execution without clicking on the OK button manually. (Like native class. TimeOut 'Prompt Message box Re: What is the Risk Analysis? Waht types of risk analysis are there? Answer ³Risk are future uncertain events with a probability of occurrence and a potential for loss´.
5) No resource planning 6) No communication in team. These external events can be: 1) 2) 3) 4) Running out of fund. Programmatic Risks:These are the external risks beyond the operational limits.com ::"). type "hgfo" . Dialog ("Login"). Market development Changing customer product strategy and priority Government rule changes.WinEdit ("Agent Name:").sendkeys "dir *. pls give me solution how we can do this in QTP.run "cmd.create() oDesc("html tag"). MicTab is used to move the cursor to next tab.count() for i=0 to NumberOfLinks Print CollectionOfLinks(i). Causes of technical risks are: 1) 2) 3) 4) Continuous changing requirements No advanced technology available or the existing technology is in initial stages.shell") SystemUtil.sendkeys "cd\" app. MicReturn has the functionality of "Enter" key.childobjects(oDesc) NumberOfLinks=CollectionOfLinks.value="Link" set CollectionOfLinks=Browser("vBrowser").GetROProperty("openurl") msgbox x or Set oDesc=Description. Product is complex to implement.sendkeys "~" app.page("vPage").value="INPUT" oDesc("micclass").sendkeys "cd Foldername" app.These are all uncertain risks are outside the control of the program. Re: how to use command prompt using qtp? Answer dim app set app=createobject("wscript.GetRoProperty("href or url") Next Re: Can u please clarify my doubt Where are micTab and micReturn used. Difficult project modules integration. For Instance. Technical risks:Technical risks generally leads to failure of functionality and performance.exe" 'the command prompt open app. thanks Answer x=Browser("ALL Interview .xls" app.sendkeys "~" app.sendkeys "~" Re: i am trying to capture the URL from the open browser and store it.
vbs and .micreturn should be used to press the "OK" button after providing the password. WinButton ( "OK").vbs.txt . For Login . For this go to your application properites.set "hdsfajf" browser("XXXX")..txt on the other hand can be used by QTP and by any application other than QTP. go to Step Generator.where we can get this in qtp tool i. WinEdit ( "Password:"). If that is exist than what's the difference between them.qfl (not . But. . you can open notepad and write a function and associate it with your test as a .vbs file. The below Example will copy the text from PDF file to notepad file: .exe path for the application in the value column.page("XXXX"). mictab should be used to make the cursor move to the next tab (i. Re: write vbscript on veb applications in qtp with exapmles? Answer Do u want simple script. Either way.qfl and debug your function written in the .. (Or)Just press F7 key and select involkeapplicaion "pah" or systemuti. Re: How do you invoke an application using the step generator in qtp? Answer Click on Insert.page("XXXX").Type "mercury" Dialog ( "Login").qfl rather than in .run "path Re: What is the another extension name of library file..create objlink('html tag"). browser("XXXX").click to count No.clipboard" is method to create a clipboardobject. Step Generator dialog box opens. Of Links in WebPage. There is a difference between the .qfl and .vbs file.qfl is local to QTP and can only be called and used by QTP. Click on row below "Value" in the Arguments section. Enter the .webedit("dsadsagd").e Password).webImage("dasada"). set objlink = description.vbs and .value = "A" set n = browser("XXXX"). all three can be used as QTP function libraries which contain a function that you can use in your test.e tabs Answer "mercury.page("XXXX"). Because of this reason.set "dlskajdl" browser("XXXX").. you cannot do this in a . Type micTab Dialog ( "Login"). Select "Invoke Application" from Operation combo list. The difference is you can put a breakpoint in . 
.childobjects(objlink) c = n.. Type micReturn After typing the username.page("XXXX"). Answer QTP can use 3 different types of function library extensions: .webedit("hgshad").WinEdit ("Agent Name:").WinEdit("Password:"). and copy the path. I prefer writing function libraries in . and use to copy and paste the text. type mictab Dialog ("Login").Dialog ("Login").qfl file.clipboard in qtp. Another way to use functions is to directly type them in your respective action.qfi).txt file. So.count msgbox c Re: what is mercury. what I am trying to say is that...
LTrim(string) RTrim(string) Trim(string) The string argument is any valid string expression. or both leading and trailing spaces (Trim).WinMenu("Menu").filesytemobject") file_path = "c:\nanda.Select "Edit.WinMenu("Menu").writeline "this is created by Nanda" notepad. The following example uses the LTrim. Window("Adobe Reader").txt" for_reading = 1 for_appending = 8 'Please Open any pdf file in ur system.readline msgbox n Next Re: what is L-trim function will do? Answer Returns a copy of a string without leading spaces (LTrim).clipboard") 'Give notepad file path according to ur local system path.opentextfile(file_path.close Set notepad = fso. trailing spaces.note : the given files paths are in my local system. respectively: Dim MyVar .txt" forwriting =2 forreading = 1 set notepad = fso.atendofstream n = notepad. trailing spaces (RTrim). Set clipboard = createobject("mercury. Null is returned.write nanda notepad.for_appending) nanda = clipboard.<Item 8>" Window("Adobe Reader").gettext notepad. and both leading and trailing spaces.forwriting) notepad. at the time of running it should be opened and in maximize.readall msgbox reading Re: OPening of notedpad in QTP to write and execute the coding? Answer set fso = createobject("scripting. If string contains Null.writeline "this is created on "& now notepad.for_reading) reading=notepad. Please give paths in ur local system.createtextfile(file_path.opentextfile(filepath.createtextfile(filepath.Select "Edit.close set notepad = fso. filepath = "C:\Documents and Settings\Madhu Sudhan\Desktop\example.forreading) for notepad. and Trim functions to trim leading spaces.<Item 4>" Set fso = createobject("scripting.filesystemobject") Set notepad = fso. RTrim.
" 8: How to create a run time property for an object? for ex. suppose that we want to know what is the data in the text box while the application is running? FName = Browser("Browser").sourcesheet. Re: how to use import and export sheet methods in qtp? whenever we want to use the data from excel on the time we can import the data from excel to QTP Ex: datatable. Details [.winedit ("username"). "The user-defined step failed. color.MyVar = LTrim(" vbscript ") ' MyVar contains "vbscript ".ReportEvent EventStatus. Syntax Reporter. ReportStepName. Details:-Description of the report event.setToProperty "Property". Reporter] Event Status:-Status of the report step: 0 for MicPass 1 for MicFail 2 for MicDone 3 for Mic Warning ReporterStepName:-Name of the intended step in the report (object name).ReportEvent in QTP and also please give the brief description about Reporter.. Example The following examples use the ReportEvent method to report a failed step.Page("Page").GetROProperty("text") msgbox FName the runtime text which is present in FirstName field will be stored in FName variable and can be seen using msgbox.ReportEvent 1. Reporter. 2: What is the purpose of the Reporter. Value Property can be any physical property of the object such as Name. example : browser(" ")." or Reporter..importsheet "path". after completion of work we send to the result to the higher person on time we can use Export method .ReportEvent? The Main purpose of Reporter. type.ReportEvent in QTP is it is a Reporting Mechanism in Qtp.Webedit ("FirstName"). use 'SetToProperty' method of an object to change the property of object at run-time.page(" "). "The user-defined step failed. MyVar = RTrim(" vbscript ") ' MyVar contains " vbscript". destination sheet. "Custom Step".ReportEvent micFail. MyVar = Trim(" vbscript ") ' MyVar contains "vbscript". It means Reports an event to the test results.. "Custom Step". The string will be displayed in the step details frame in the report.
Add "e:\Utils. False qtApp. via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.vbs" another way we can add to our script is AOM(Automated object model) first of all create Quick Test object than we can add with that object.Webedit ("name").Visible = True qtApp. .. application screen shots for every step that highlight any discrepancies.Settings. Involved in writing test cases and executing test cases. the test data used. False.ex: datatable.vbs" Re: how to get font size of a "WebEdit"? 'We will use OUTERHTML Property and use split concept we will get font size Example Outerhtml=<input size=12 .Libraries. EX: Set qtApp = CreateObject("QuickTest. an expandable Tree View of the test specifying exactly where application failures occurred. tree view---> Once a tester has run a test.> dim a.page("yahoo").Test. test automation experts have full access to the underlying test and object properties. and detailed explanations of each checkpoint pass and failure.i a= Window("yahoo"). Developed and executing the automated test scripts using QTP. By combining Test Fusion reports with Quick Test Professional.getroproperty("outerhtml") i=split(a..Open "C:\Tests\Test1". What is your role and responsibilities in QTP with your current organization? Participated in Manual Testing. you can share reports across an entire QA and development team.exportsheet "path".. a Test Fusion report displays all aspects of the test run: a high-level results overview..localsheet no Re: Can we add the function library directly from scripting in qtp instead of adding from resource tab? through script we can do executefile "give the path" Ex: executefile "d:\functionlibrary.Resources."=") msgbox i(0) Definitions for keyword view and tree view? keyword view ---> Quick Test¶s Keyword Driven approach.Application") qtApp.Launch qtApp.
Re: Why we have to split actions in a test? To achieve more efficiency. or with compact mathematicalnotation. To achieve Modularity.then We can make any action as reusable at any time & we can copy or call that actions easily whenever required. RunAction Action3.Setting up impractical deadlines. The programming language is augmented with natural language descriptions of the details. where convenient. Conducted various types of testing like FunctionalityTesting. We can run each action rather than entire test. Select the "Action" you want to call 4. make sure the "Action" is marked as a "Reusable Action". Insert > Call to Existing Action 2. but typically omits details that are not essential for the understanding of the algorithm. 1-3 (For iterations from 1 to 3) Note: To Call an "Action" multiple times or to Call an "Action" from Other Tests. AllIterations (For All Iterations) 3. Click "OK" Or you direct write the script as below in the line where you want to call the action: "RunAction <ActionName>. "<ActionName>" is the Action you want to call and the "(Iteration)" is optional where you can define number of iterations you want to execute. The purpose of using pseudo code is that it may be easier for humans to read than conventional programming languages. and that it may be a compact and environment-independent description of the . variable declarations and system-specific code. (Iteration)" Here. RunAction Action4. Select the "Test From" (If "Action" is being called from another Test) 3. Then select the Location. Re: Hi. where the "Action" needs to be called in the current script 5. To decrease code complexity. such as subroutines. RunAction Action2. by using "Run from Step" in any action. Re: What is meant by Pseudo Code? Pseudo code (derived from pseudo and code) is a compact and informal high-level description of a computer programming algorithm that uses the structural conventions of some programming language. Analyzing test results and prepared bug reports. 
GUI Testing. Eg: 1. It is HR question. Regression Testing and System Testing. What are the qualities you like and dislike in your Project Manager? Likes:Very dynamic and shows great leadership qualities Openness to listen to others and take input from team members Updates about the strength n weakness of each individual on a regular basis that helps an individual to grow backs the team in difficult situation Dislikes:Demanding all the time for perfection wants team members to follow in his footsteps . Experience in bug tracking and solving with project teams. if we splitted. Re: How to call actions in QTP? You can call a action in QTP by the following: 1. OneIteration (For single Iteration) 2. Extensively performed Manual Testing process to ensure the quality of the application.
No standard for pseudo code syntax exists. as a program in pseudo code is not an executable program. Re: After running scripts how you report results .stepname.details Argument-1 : event status are 4 types 1)micpass 2)micfail 3)micdone 4)micwarning Argument-2 : The status of object we are going to report Argument-3 : The details of the object.reportevent event status.there is any specific report form? Using QTP utility statement we can send report to Test Result Window. .key principles of an algorithm.Here the utility statement is syntax is report. | https://www.scribd.com/document/47810130/New-Microsoft-Office-Word-Document | CC-MAIN-2017-09 | refinedweb | 28,932 | 60.61 |
Utility wrapper around Kubectl
Project description
Python library to simplify kubernetes scripting. Minimal test coverage.
TODO: The current plan is to rebuild this around <>.
Quickstart
Import Kubelib and config:
import kubelib
kube = kubelib.KubeConfig(context='dev-seb', namespace='myspace')
List all namespaces:
for ns in kubelib.Namespace(kube).get_list():
    print(ns.metadata.name)
List all resource controllers:
for ns in kubelib.ReplicationController(kube).get_list():
    print(ns.metadata.name)
(you get the idea)
Get a specific pod:
pod = kubelib.Pod(kube).get(podname)
print(pod.toJSON())
Upgrading Kubernetes
Upgrade kubernetes based on a directory of yaml files:
import kubelib
kube = kubelib.KubeConfig(context='dev-seb', namespace='myspace')
kube.apply_path("./kubernetes", recursive=True)
This will look at every yaml file and act based on the “Kind” field. Deployments are replaced, replication controllers are deleted and re-created. Other “Kind” resources are created if a resource with that “Kind” and “Name” is not already present.
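The dispatch-on-"Kind" behavior described above can be sketched as a small pure function over parsed resources. This is a hypothetical illustration of the rules just stated, not kubelib's actual internals; the function name and the `existing` argument are inventions for the sketch:

```python
def plan_action(resource, existing):
    """Decide what an apply-by-Kind pass would do for one parsed resource.

    `existing` is the set of (kind, name) pairs already present.
    Hypothetical sketch of the rules described above, not kubelib internals.
    """
    kind = resource["kind"]
    name = resource["metadata"]["name"]
    if kind == "Deployment":
        return "replace"              # Deployments are replaced
    if kind == "ReplicationController":
        return "delete-and-recreate"  # RCs are deleted and re-created
    # any other Kind: created only if no resource with that Kind and name exists
    return "skip" if (kind, name) in existing else "create"

print(plan_action({"kind": "ConfigMap", "metadata": {"name": "app"}}, set()))
# create
```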
Command Line Utilities
This package provides a few command-line utilities; the most helpful (to me) is see_limits, which displays the resource limits for all pods and namespaces within a context.
Initial package setup borrowed from
A reasonable approach to getting sphinx output into github pages from
Hot questions for Using Neural networks in random forest
Question:
I want to run some experiments with neural networks using PyTorch, so I tried a simple one as a warm-up exercise, and I cannot quite make sense of the results.
The exercise attempts to predict the rating of 1000 TPTP problems from various statistics about the problems, such as number of variables, maximum clause length etc. The data file is quite straightforward: 1000 rows, with the final column being the rating. I started off with some tens of input columns, with all the numbers scaled to the range 0-1. I progressively deleted features to see if the result still held, and it does, all the way down to one input column; the others are in previous versions in the Git history.
I started off using separate training and test sets, but have set aside the test set for the moment, because the question about whether training performance generalizes to testing, doesn't arise until training performance has been obtained in the first place.
Simple linear regression on this data set has a mean squared error of about 0.14.
I implemented a simple feedforward neural network, code in and copied below, that after a couple hundred training epochs, also has an mean squared error of 0.14.
So I tried changing the number of hidden layers from 1 to 2 to 3, using a few different optimizers, tweaking the learning rate, switching the activation functions from relu to tanh to a mixture of both, increasing the number of epochs to 5000, increasing the number of hidden units to 1000. At this point, it should easily have had the ability to just memorize the entire data set. (At this point I'm not concerned about overfitting. I'm just trying to get the mean squared error on training data to be something other than 0.14.) Nothing made any difference. Still 0.14. I would say it must be stuck in a local optimum, but that's not supposed to happen when you've got a couple million weights; it's supposed to be practically impossible to be in a local optimum for all parameters simultaneously. And I do get slightly different sequences of numbers on each run. But it always converges to 0.14.
Now the obvious conclusion would be that 0.14 is as good as it gets for this problem, except that it stays the same even when the network has enough memory to just memorize all the data. But the clincher is that I also tried a random forest,
... and the random forest has a mean squared error of 0.01 on the original data set, degrading gracefully as features are deleted, still 0.05 on the data with just one feature.
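The random forest code is elided above; a minimal baseline of that kind would look something like the following scikit-learn sketch. This is a hypothetical reconstruction with synthetic stand-in data in place of the actual CSV, not the asker's code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((1000, 10))                           # stand-in for the problem statistics
y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(1000)  # stand-in for the rating

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
mse = mean_squared_error(y, forest.predict(X))
print(mse)  # training MSE; a fitted forest drives this well below the target variance
```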
Nowhere in the lore of machine learning is it said 'random forests vastly outperform neural nets', so I'm presumably doing something wrong, but I can't see what it is. Maybe it's something as simple as just missing a flag or something you need to set in PyTorch. I would appreciate it if someone could take a look.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

# data
df = pd.read_csv("test.csv")
print(df)
print()

# separate the output column
y_name = df.columns[-1]
y_df = df[y_name]
X_df = df.drop(y_name, axis=1)

# numpy arrays
X_ar = np.array(X_df, dtype=np.float32)
y_ar = np.array(y_df, dtype=np.float32)

# torch tensors
X_tensor = torch.from_numpy(X_ar)
y_tensor = torch.from_numpy(y_ar)

# hyperparameters
in_features = X_ar.shape[1]
hidden_size = 100
out_features = 1
epochs = 500

# model
class Net(nn.Module):
    def __init__(self, hidden_size):
        super(Net, self).__init__()
        self.L0 = nn.Linear(in_features, hidden_size)
        self.N0 = nn.ReLU()
        self.L1 = nn.Linear(hidden_size, hidden_size)
        self.N1 = nn.Tanh()
        self.L2 = nn.Linear(hidden_size, hidden_size)
        self.N2 = nn.ReLU()
        self.L3 = nn.Linear(hidden_size, 1)

    def forward(self, x):
        x = self.L0(x)
        x = self.N0(x)
        x = self.L1(x)
        x = self.N1(x)
        x = self.L2(x)
        x = self.N2(x)
        x = self.L3(x)
        return x

model = Net(hidden_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# train
print("training")
for epoch in range(1, epochs + 1):
    # forward
    output = model(X_tensor)
    cost = criterion(output, y_tensor)

    # backward
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

    # print progress
    if epoch % (epochs // 10) == 0:
        print(f"{epoch:6d} {cost.item():10f}")
print()

output = model(X_tensor)
cost = criterion(output, y_tensor)
print("mean squared error:", cost.item())
Answer:
Can you please print the shape of your input? I would say check these things first:
- that your target y has the shape (-1, 1). I don't know if pytorch throws an error in this case; you can use y.reshape(-1, 1) if it isn't 2-dim
- your learning rate is high. Usually when using Adam the default value is good enough, or simply try lowering your learning rate; 0.1 is a high value for a learning rate to start with
- place optimizer.zero_grad() at the first line inside the for loop
- normalize/standardize your data (this is usually good for NNs)
- remove outliers in your data (my opinion: I think this can't affect a random forest much, but it can affect NNs badly)
- use cross-validation (maybe skorch can help you here; it's a scikit-learn wrapper for PyTorch and easy to use if you know Keras)
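The target-shape point is the one most likely to explain a loss stuck at exactly the linear-regression level: in the question's script, y_tensor has shape (N,) while the model outputs (N, 1), so nn.MSELoss broadcasts the difference to an (N, N) matrix (recent PyTorch versions emit a UserWarning about this mismatch), and the best the network can then do is predict a constant. The same arithmetic is easy to demonstrate with plain NumPy:

```python
import numpy as np

output = np.array([[0.0], [0.0], [0.0], [10.0]])  # model output, shape (4, 1)
target = np.array([1.0, 2.0, 3.0, 4.0])           # target, shape (4,)

# (4, 1) minus (4,) broadcasts to a (4, 4) matrix, so the "MSE"
# averages over 16 cross-pairs instead of the 4 intended pairs.
loss_bad = ((output - target) ** 2).mean()

# Reshaping the target to (4, 1) restores the element-wise loss.
loss_good = ((output - target.reshape(-1, 1)) ** 2).mean()

print(loss_bad, loss_good)  # 20.0 12.5
```

In the script above, `y_tensor = y_tensor.reshape(-1, 1)` (or `unsqueeze(1)`) before training is the fix this answer is pointing at.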
Notice that a random forest regressor or any other regressor can outperform neural nets in some cases. There are some fields where neural nets are the heroes, like image classification or NLP, but you need to be aware that a simple regression algorithm can outperform them, usually when your data is not big enough.
Question:
I’m analyzing a medical dataset containing 15 variables and 1.5 million data points. I would like to predict hospitalization and, more importantly, which type of medication may be responsible. The medicine variable has around 700 types of drugs. Does anyone know how to calculate the importance of a "value" (a type of drug, in this case) within a variable for boosting? I need to know if ‘drug A’ is better for prediction than ‘drug B’, both in a variable called ‘medicine’. The logistic regression model is able to give such information in terms of p-values for each drug, but I would like to use a more complex method. Of course you can create a binary variable for each type of drug, but this gives 700 extra variables and does not seem to work very well. I’m currently using R. I really hope you can help me solve this problem. Thanks in advance! Kind regards Peter
Answer:
See varImp() in the caret library, which supports all the ML algorithms you referenced.
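For readers working in Python rather than R, the analogous per-drug importance comes from one-hot encoding the medicine variable and reading a tree ensemble's feature_importances_. The sketch below uses synthetic three-drug data purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
drug = rng.integers(0, 3, size=500)   # pretend medicine column with 3 drugs
X = np.eye(3)[drug]                   # one-hot columns: drug_A, drug_B, drug_C
y = (drug == 0).astype(int)           # hospitalization driven by drug_A
y[rng.random(500) < 0.1] ^= 1         # 10% label noise

# max_features=None so every split sees all columns; with one-hot columns the
# importances would otherwise spread across the redundant encodings.
model = RandomForestClassifier(n_estimators=100, max_features=None,
                               random_state=0).fit(X, y)
for name, imp in zip(["drug_A", "drug_B", "drug_C"], model.feature_importances_):
    print(name, round(imp, 3))        # drug_A should dominate
```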
Question:
I'm currently making a machine learning model for a student project, and I'm still deciding what model I should use. Here's the brief I was given:
The data frame has:
- 134 columns, about 100,000 rows
- many of the columns have missing values
- I've only been given 5 days to submit my final work, so I can't spend a prolonged period training the model
I'm leaning towards using a backpropagation neural network, as I believe it can handle the missing values, though a random forest might also be viable given the limited amount of time I have to train it. I've done a lot of research on the various pros and cons of common ML models, but any additional advice would be greatly appreciated.
Answer:
It would be easier to answer this question if you tried several candidate methods and described why they don't suffice, but here's one place to start... If you didn't have access to a computer and someone gave you this table and asked you to qualitatively describe how terrorism works, you might notice very quickly, say, that the Irish Republican Army doesn't operate in Afghanistan and only ISIS is involved in attacks that kill more than 1000 people (let's stipulate). This observation is akin to how a random forest operates on categorical and continuous data, respectively.
The point is that your brain gravitates towards a random forest when trying to qualitatively describe the fundamental reality behind data like this. (Multiple splits would look like... well there was no terrorism in America before 1991 and after 1991 most terrorist attacks in America have involved groups X, Y, and Z -- and so forth) A corollary of this is that you will have a lot to say about what your trained random forest is telling you, where it fails, and why it fails for where it fails.
If you use a neural network, without knowing a lot about the details of how it works, you might end up mindlessly tuning things until something seems to work and have no idea what to say about how well it works for various situations or which features are informative.
why not use a random forest, find out where it does and does not work, contemplate this result, and iterate on that? | https://thetopsites.net/projects/neural-network/random-forest.shtml | CC-MAIN-2021-31 | refinedweb | 1,526 | 62.88 |
Key Takeaways
- Kotlin brings compile time null checks, functional aspects and expressive syntax to the JVM platform
- Kotlin is interoperable with Java and can be introduced incrementally to an existing Java project
- Projects heavy on boilerplate and logic are good candidates adopting Kotlin
- Kotlin integrates well with popular frameworks including Spring and HIbernate
- Kotlin can significantly reduce the size of a Java code base by eliminating boilerplate
Introducing Kotlin
Kotlin is one of the newer languages on the JVM from JetBrains, the makers of IntelliJ. It is a statically typed language which aims to provide a blend of OO and FP programming styles. Kotlin compiler creates bytecode compatible with the JVM, allowing it to run on the JVM and interoperate with existing libraries. It got its big break when it was endorsed by Google in 2017 for Android development.
JetBrains has a stated goal of making Kotlin a multi-platform language and provide 100% Java interoperability. Kotlin’s recent successes and maturity level put in a good place for it to make inroads into the server side.
The case for Kotlin
A number of languages have attempted to better Java. Kotlin gets a lot of things right, both in terms of the language and the ecosystem. It is a long overdue evolution to a better Java while conserving the JVM and the vast library space. This approach combined with the backing from JetBrains and Google, make it a serious contender. Let’s take a look at some of the features Kotlin brings.
Type inference – Type inference is a first class feature. Kotlin infers the type of variable without the need to explicitly specify it. In instances where types are needed for clarity, they can still be specified.
Java 10 has moved in a similar direction by introducing the var keyword. While this looks similar on the surface, its limited in scope to local variables. It cannot be used for fields and method signatures.
Strict null checks – Kotlin treats nullable code flow as a compile time error. It provides additional syntax to handle null checks. Notably it provides protection against NPEs in chained calls.
Interoperability with Java – Kotlin is significantly better than other JVM languages in this area. It interoperates seamlessly with Java. Java classes from frameworks can be imported and used in Kotlin and vice versa. Notably, Kotlin collections can interoperate with Java collections.
Immutability – Kotlin encourages immutable data structures. The commonly used data structures (Set/ List/ Map) are immutable, unless explicitly declared as mutable. Variables are also designated as immutable (val) and mutable (var). These changes add up and the impact on manageability of state is noticeable.
Clean and expressive syntax – Kotlin introduces a number of improvements which make a significant impact on the readability of the code. To mention a few:
- Semicolons are optional
- Curly braces are optional in instances where they are not useful
- Getter/Setters are optional
- Everything is an object - primitives are used behind the scenes automatically if needed
- Expressions: An expression returns a result when its evaluated.
In Kotlin all functions are expressions, since they return Unit at least. Control flow statements like if, try and when (similar to switch) are also expressions. For example:
String result = null; try { result = callFn(); } catch (Exception ex) { result = “”; } becomes: val result = try { callFn() } catch (ex: Exception) { “” }
- Loops support ranges. For example:
for (i in 1..100) { println(i) }
There are several other improvements which we shall discuss as we go.
Introducing Kotlin to your Java project
Small steps
Given the Java interoperability, it is recommended to add Kotlin to an existing Java project in small steps. Supporting projects for the main product are typically good candidates. Once the team is comfortable, they can evaluate whether they prefer switching completely.
What sort of a project is a good candidate?
All Java projects can benefit from Kotlin. However, projects with the following characteristics can make the decision easier.
Project containing lots of DTO or model/entity object – This is typical for project dealing with a CRUD or data translation. These tend to be cluttered with getters/setters. Kotlin properties can be leveraged here and simplifies the classes significantly.
Projects heavy on utility classes – Utility classes in Java typically exist to compensate for the lack for top level functions in Java. In many instances these contain global stateless functions via public static. These can be factored out into pure functions. Further. Kotlin’s support for FP constructs like Function types and higher order functions can be leveraged to make code more maintainable and testable.
Projects with logic heavy classes – These tend to susceptible to null pointer exceptions (NPE) which are one of the problem areas Kotlin solves well. Developers get support here by letting the language analyze code paths leading to potential NPE(s). Kotlin’s `when` construct (a better `switch`) is useful here to break up nested logic trees into manageable functions. Immutability support for variables and collections helps simplify the logic and avoid hard to find bugs arising from leaky references. While some of the above can be accomplished with Java, Kotlin’s strength is in promoting these paradigms and making in clean and consistent.
Let's take a pause here to look at a typical Java logic snippet and it’s Kotlin counterpart:
public class Sample { public String logic(String paramA, String paramB) { String result = null; try { if (paramA.length() > 10) { throw new InvalidArgumentException(new String[]{"Unknown"}); } else if ("AB".equals(paramA) && paramB == null) { result = subLogicA(paramA + "A", "DEFAULT"); } else if ("XX".equals(paramA) && "YY".equals(paramB)) { result = subLogicA(paramA + "X", paramB + "Y"); } else if (paramB != null) { result = subLogicA(paramA, paramB); } else { result = subLogicA(paramA, "DEFAULT"); } } catch (Exception ex) { result = ex.getMessage(); } return result; } private String subLogicA(String paramA, String paramB) { return paramA + "|" + paramB; } }
Kotlin counterpart:
fun logic(paramA: String, paramB: String?): String { return try { when { (paramA.length > 10) -> throw InvalidArgumentException(arrayOf("Unknown")) (paramA == "AB" && paramB == null) -> subLogicA(paramA + "A") (paramA == "XX" && paramB == "YY") -> subLogicA(paramA + "X", paramB + "X") else -> if (paramB != null) subLogicA(paramA, paramB) else subLogicA(paramA) } } catch (ex: Exception) { ex.message ?: "UNKNOWN" } } private fun subLogicA(paramA: String, paramB: String = "DEFAULT"): String { return "$paramA|$paramB" }
While these snippets are functionally equivalent, there are some distinct differences.
The logic() function does not need to be in a class. Kotlin has top level functions. This opens up a big space and encourages us to think if something really needs to be an object. Stand alone, pure functions are easier to test. This gives the team options for adopting cleaner functional approaches.
Kotlin introduces `when`, a powerful construct for organizing conditional flow. It's a lot more capable that `if` or `switch` statements. Arbitrary logic can be cleanly organized using `when`.
Notice that in the Kotlin version we never declared a return variable. This is possible since Kotlin allows us to use `when` and `try` as expressions.
In the subLogicA function we were able to assign a default value to paramB in the function declaration.
private fun subLogicA(paramA: String, paramB: String = "DEFAULT"): String {
Now we have the ability to invoke either function signature:
subLogicA(paramA, paramB)
or
subLogicA(paramA) # In this case the paramB used the default value in the function declaration
The logic is now easier to follow and the line count is reduced by ~35%
Adding Kotlin to your Java build
Maven and Gradle support Kotlin via plugins. The Kotlin code is compiled to Java classes and included in the build process. Newer build tools like Kobalt also look promising. Kobalt is inspired by Maven/Gradle but written purely in Kotlin
To get started, add the Kotlin plugin dependencies to your Maven or Gradle build file.
If you are using Spring and JPA, you should add the kotlin-spring and kotlin-jpa compiler plugins too. The project should compile and build without any noticeable differences.
The plugin is required to generate JavaDoc for Kotlin codebase.
IDE plugins are available for IntelliJ and Eclipse studio but as might be expected, Kotlin’s development and build tooling benefits greatly from the IntelliJ association. The IDE has first class support for Kotlin, starting from the community edition. One of the notable features is the support for automatically converting existing Java code to Kotlin. The conversion is accurate and a good learning tool for writing idiomatic Kotlin.
Integration with popular frameworks
Since we are introducing Kotlin to an existing project, framework compatibility is a concern. Kotlin fits seamlessly into the Java ecosystem, since it compiles down to Java bytecode. Several popular frameworks have announced Kotlin support – including Spring, Vert.x, Spark and others. Let's take a look at what it's like to use Kotlin with Spring and Hibernate.
Spring
Spring has been one of the early supporters of Kotlin, first adding support in 2016. Spring 5 leverages Kotlin for proving cleaner DSLs. You can expect existing Java Spring code to continue working without any changes.
Spring annotations in Kotlin
Spring annotations and AOP work as out of the box. You can annotate Kotlin classes just the same as you would annotate Java. Consider the service declaration snippet below.
@Service @CacheConfig(cacheNames = [TOKEN_CACHE_NAME], cacheResolver = "envCacheResolver") open class TokenCache @Autowired constructor(private val repo: TokenRepository) {
These are standard Spring annotations
@Service: org.springframework.stereotype.Service
@CacheConfig: org.springframework.cache
Notice the constructor is a part of the class declaration.
@Autowired constructor(private val tokenRepo: TokenRepository)
Kotlin refers to this as the primary constructor and it can be a part of the class declaration. In this instance tokenRepo is a property which is declared inline.
Compile time constants can be used in annotations and these generally help with avoiding typos.
Handling final classes
Kotlin classes are final by default. It advocates the approach of allowing inheritance as a conscious design choice. This doesn’t work will with Spring AOP but is not hard to compensate for. We need to mark the relevant classes as open – Kotlin’s keyword for non-final.
IntelliJ gives you a friendly warning.
You can get around this by using the ‘all open’ maven plugin. This plugin makes the classes with specific annotations open. The simpler option is to mark the class as ‘open’.
Auto wiring and null checks
Kotlin enforces null checking strictly. It requires all properties marked as not nullable to be initialized. These can be initialized at the declaration site or in the constructor. This runs contrary to the Dependency Injection – which populates properties at runtime.
The lateinit modifier allows you to specify that the property will be initialized before use. In the following snippet, Kotlin trusts that the config object will be initialized before first use.
@Component class MyService { @Autowired lateinit var config: SessionConfig }
While lateinit is useful for auto wiring, I would recommend using it sparingly. On the flip side, it turns off the compile time null checks on the property. You still get a runtime error if its null at first use but you lose a lot of the compile time null checking.
Constructor injection can be used as an alternative. This works well with Spring DI and removes a lot of clutter. Example:
@Component class MyService constructor(val config: SessionConfig)
This is a good example of Kotlin coaxing you to follow best practices.
Hibernate
Hibernate works well with Kotlin out of the box, and no major changes are required. A typical entity class would look like:
@Entity @Table(name = "device_model") class Device { @Id @Column(name = "deviceId") var deviceId: String? = null @Column(unique = true) @Type(type = "encryptedString") var modelNumber = "AC-100" override fun toString(): String = "Device(id=$id, channelId=$modelNumber)" override fun equals(other: Any?) = other is Device && other.deviceId?.length == this.deviceId?.length && other.modelNumber == this.modelNumber override fun hashCode(): Int { var result = deviceId?.hashCode() ?: 0 result = 31 * result + modelNumber.hashCode() return result } }
In the above snippet we leveraged several Kotlin features.
Properties
By using the properties syntax we do not have to define getters and setters explicitly.
This cuts down on clutter and allows us to focus on the data model.
Type inference
In instances where we can provide an initial value, we can skip the type specification since that can be inferred. For example:
var modelNumber = "AC-100"
modelNumber property is inferred to be of type String.
Expressions
If we take a closer look at the toString() method, there are a few difference from Java
override fun toString(): String = "Device(id=$id, channelId=$modelNumber)"
The return statement is missing. We are using Kotlin’s expressions here. For function returning a single expression, we can skip the curly braces and assign via ‘=’.
String templates
"Device(id=$id, channelId=$modelNumber)"
Here, we can use templating more naturally. Kotlin allows embedding ${expression} in any String. This eliminated the need for awkward concatenation or external helpers like String.format
Equality testing
In the equals method you might have noticed this expression.
other.deviceId?.length == this.deviceId?.length
Its comparing two Strings with an == sign. This has been a long standing gotcha in Java, which treats String as a special case for equality testing. Kotlin finally fixes it by using == consistently for structural equality (equals() in Java). Referential equality is checked with ===
Data class
Kotlin also offers a special type of class known as Data class. These are especially suited to scenarios where the primary purpose of the class is to hold data. Data classes automatically generate equals(), hashCode() and toString() methods, further reducing boilerplate.
A Data class would change our last example to this:
@Entity @Table(name = "device_model") data class Device2( @Id @Column(name = "deviceId") var deviceId: String? = null, @Column(unique = true) @Type(type = "encryptedString") var modelNumber: String = "AC-100" )
Both the attributes are passed in as constructor parameters. The equals, hashCode and toString are provided by the data class.
However, Data classes do not provide a default constructor. This is a problem for Hibernate, which uses default constructors to create the entity objects. We can leverage the kotlin-jpa plugin here, which generates an additional zero-argument constructor for JPA entity classes.
One of the things which set Kotlin apart in the JVM language space is that it's not just about engineering elegance but deals with problems in real world.
Practical benefits of adopting Kotlin
Reduction in Null Pointer Exceptions
Addressing NPEs in Java is one of the major objectives for Kotlin. Explicit null checking is the most visible change when Kotlin is introduced to a project.
Kotlin tackles null safety by introducing some new operators. The ? operator Kotlin offers null safe calls. For example:
val model: Model? = car?.model
The model attribute will be read only if the car object is not null. If car is null model evaluates to null. Note that the type of model is Model? - indicating that the result can be null. At this point flow analysis kicks in and we get a compile time check for NPE in any code consuming the model variable.
This can be used in chained calls too
val year = car?.model?.year
The following is the equivalent Java code:
Integer year = null; if (car != null && car.model != null) { year = car.model.year; }
On a large codebase, a lot of these null checks would be missed. It saves considerable development time to have these checks done automatically with compile time safety.
In cases where the expression evaluates to null the Elvis operator ( ?: ) can be used to provide a default value
val year = car?.model?.year ?: 1990
In the above snippet if year ultimately is null, the value 1990 is used instead. The ?: operator takes the value on the right is the expression on the left is null.
Functional programming options
Kotlin build on top of Java 8 capabilities and provides first class functions. First class functions can be stored in variables / data structures and passed around. For example, in Java we can return functions:
@FunctionalInterface interface CalcStrategy { Double calc(Double principal); } class StrategyFactory { public static CalcStrategy getStrategy(Double taxRate) { return (principal) -> (taxRate / 100) * principal; } }
Kotlin makes this a lot more natural, allowing us to clearly express intent:
// Function as a type typealias CalcStrategy = (principal: Double) -> Double fun getStrategy(taxRate: Double): CalcStrategy = { principal -> (taxRate / 100) * principal } Things change when we move to deeper usage of functions. The following snippet in Kotlin defines a function generating another function: val fn1 = { principal: Double -> { taxRate: Double -> (taxRate / 100) * principal } }
We easily can invoke
fn1 and the resulting function:
fn1(1000.0) (2.5)
Output
25.0
Although the above is achievable in Java, it’s not straight forward and involves boilerplate code.
Having these capabilities available encourages the team(s) to experiment with FP concepts. This leads to a better fit for purpose code ultimately resulting in more stable products.
Note that the lambda syntax is subtly different in Kotlin and Java. This can be a source of annoyance in the early days.
Java
( Integer first, Integer second ) -> first * second
Equivalent Kotlin
{ first: Int, second: Int -> first * second }
Over time it becomes apparent that the altered syntax is needed for the use cases Kotlin is supporting.
Reduce project footprint
One of the most understated advantages of Kotlin is that it can reduce the file count in you project. A Kotlin file can contain multiple/mix of class declarations, functions and other constructs like enum classes. This opens up a lot of possibilities not available in Java. On the flip side it presents a new choice – what's the right way to organize classes and functions?
In his book Clean Code, Robert C Martin introduces the newspaper metaphor. Good code should read like a newspaper - high level constructs near the top with detail increasing as you move down the file. The file should tell a cohesive story. Code layout in Kotlin can take a cue from this metaphor.
The recommendation is – to group similar things together – within the larger context
While Kotlin won’t stop you from abandoning structure, doing so can make it difficult to navigate code at a later time. Organize things by their ‘relation and order of usage’. For example:
enum class Topic { AUTHORIZE_REQUEST, CANCEL_REQUEST, DEREG_REQUEST, CACHE_ENTRY_EXPIRED } enum class AuthTopicAttribute {APP_ID, DEVICE_ID} enum class ExpiryTopicAttribute {APP_ID, REQ_ID} typealias onPublish = (data: Map<String, String?>) -> Unit interface IPubSub { fun publish(topic: Topic, data: Map<String, String?>) fun addSubscriber(topic: Topic, onPublish: onPublish): Long fun unSubscribe(topic: Topic, subscriberId: Long) } class RedisPubSub constructor(internal val redis: RedissonClient): IPubSub { ...}
In practice this reduces mental overhead significantly by reducing the number of files you have to jump to form the complete picture.
A common case is the Spring JPA repositories, which clutter up the package. These can be re-organized in the same file:
@Repository @Transactional interface DeviceRepository : CrudRepository<DeviceModel, String> { fun findFirstByDeviceId(deviceId: String): DeviceModel? } @Repository @Transactional interface MachineRepository : CrudRepository<MachineModel, String> { fun findFirstByMachinePK(pk: MachinePKModel): MachineModel? } @Repository @Transactional interface UserRepository : CrudRepository<UserModel, String> { fun findFirstByUserPK(pk: UserPKModel): UserModel? }
The end result of the above is that the number of Lines Of Code (LOC) in shrinks significantly. This has a direct impact on speed of delivery and maintainability.
We measured the number of files and lines of code in a Java project which was ported to Kotlin. This is a typical REST service containing data model, some logic and caching. In the Kotlin version, the LOC shrunk by ~50%. Developers spent significantly less time navigating between files and writing boilerplate code.
Enable clear, expressive code
Writing clean code is a wide topic and it depends on a combination of language, design and technique. However, Kotlin sets the team up for success by providing a good toolset. Below are some examples.
Type inference
Type inference ultimately reduces noise in the code. This helps the developers focus on the objective of the code.
It is a commonly voiced concern that type inference might make it harder to track the object we are dealing with. From practical experience, this concern is valid only for a small number of scenarios, typically less that 5%. In a vast majority of the scenarios the type is obvious.
Example:
LocalDate date = LocalDate.now(); String text = "Banner";
Becomes
val date = LocalDate.now() val text = "Banner"
Kotlin is also fine with the type being specified.
val date: LocalDate = LocalDate.now() val text: String = "Banner"
It is worth noting that Kotlin offers a comprehensive solution. For example, we can define a function type in Kotlin as:
val sq = { num: Int -> num * num }
Java 10 on the other hand, infers type by looking at the type of the expression on the right. This introduces some limitations. If we tried to do the above operation in Java, we get an error:
Typealias
This is a handy feature in Kotlin which lets us assign an alias to an existing type. It does not introduce a new type but allows us to refer to an existing type with an alternate name. For example:
typealias SerialNumber = String
SerialNumber is now an alias for the String type and can be used interchangeably with the String type. For example:
val serial: SerialNumber = "FC-100-AC"
is the equivalent of
val serial: String = "FC-100-AC"
A lot of times typealias can act as an ‘explaining variable’, to introduce clarity. Consider the following declaration:
val myMap: Map<String, String> = HashMap()
We know that the myMap holds Strings but we have no information on what those Strings represent. We could clarify this code by introducing typealiases for the String type:
typealias ProductId = String typealias SerialNumber = String
Now the map declaration above can be changed to:
val myMap: Map<ProductId, SerialNumber> = HashMap()
The above two definitions of myMap are equivalent but in the latter we can easily identify the contents of the map.
Kotlin compiler replace the typealias with the underlying type. Hence, the runtime behavior of myMap is unaffected, for example:
myMap.put(“MyKey”, “MyValue”)
The cumulative effect of such calcifications is a reduction in number of subtle bugs. On large distributed teams, bugs often a result of failure to communicate intent.
Early adoption.
Things to come
Co-routines have been available in Kotlin since version 1.1. Conceptually they are similar to async/await in JavaScript. They reduce complication in async programming by allowing us to suspend flow without blocking the thread.
They have marked as experimental, until now. Co-routines will graduate from experimental status in 1.3 release. This opens up more exciting opportunities.
Kotlin roadmap is guided via the Kotlin Evolution and Enhancement Process (KEEP). Keep an eye on this for discussions and upcoming features.
About the Author.
Community comments
Brief, Concise Intro Article
by Doug Ly /
Interesting read
by Kailash Singh /
Advantages of Kotlin
by Aleksandra Bessalitskykh /
Brief, Concise Intro Article
by Doug Ly /.
Interesting read
by Kailash Singh /
Your message is awaiting moderation. Thank you for participating in the discussion..
Advantages of Kotlin
by Aleksandra Bessalitskykh /
Your message is awaiting moderation. Thank you for participating in the discussion.
Thank you so much for this essential article!
In spite of the fact that Kotlin is a new language, it has already shown a lot of benefits in comparison with Java.
Kotlin is proven to be more concise than Java. It has been estimated that about 40% in the number of lines of code are cut..com/blog/kotlin-vs-java to find out additional information about the comparison of Kotlin vs. Java. | https://www.infoq.com/articles/intro-kotlin-java-developers?utm_source=articles_about_kotlin&utm_medium=link&utm_campaign=kotlin | CC-MAIN-2019-18 | refinedweb | 3,863 | 56.45 |
On Thu, 11 Mar 2004 21:42:04 -0800, "H. J. Lu" <hjl@lucon.org> wrote:

>which means it will be discarded if the driver is builtin. If
>xxx_remove may be discarded, please make sure there is
>
>#ifdef MODULE
>    remove: xxx_remove,
>#endif
>
>so that xxx_remove won't be included when the driver is builtin.

    remove: __devexit_p(xxx_remove),

is the correct method. The pointer is required for CONFIG_MODULE _or_ CONFIG_HOTPLUG, otherwise it must be set to NULL. __devexit_p() does all that.

Received on Fri Mar 12 01:05:48 2004
This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:24 EST | http://www.gelato.unsw.edu.au/archives/linux-ia64/0403/8788.html | CC-MAIN-2020-16 | refinedweb | 130 | 67.35 |
Sorry for the video quality, but the SLR was tied up :)
Finally finished the universal IR camera shutter release device.
The requirements were:
1. Remotely trigger the camera to take 3 photos in a row using a switch on the transmitter.
2. A solution that doesnt involve a physical connection to the camera, as my shutter release port on my Nikon is taken up with a GPS geotagger cable.
3. A universal trigger that will work with most cameras that have IR shutter releases.
An Arduino Uno based MCU, running Sebastian Setz's libraries for IR communication with cameras.
The code is super simple, involving nothing more than a shutterNow(); call to trigger the shutter and delays in between shots.
One of the best things is that this system is almost universal; it works with the following:
Canon
Olympus
Pentax
Minolta
Sony
Now getting the Arduino to take pictures via an IR connection to the camera is easy.
The next part was triggering the Arduino using the RC transmitter. No easy feat unless you get a RelaySwitch from DIYDrones. It converts the signal from the RX to a simple on/off relay.
Details are here:
If you own an APM1 you could set up channel 7 to switch the relay, then just substitute the RelaySwitch for that.
The reason I haven't used my APM1 is because I plan on using an APM2, and they no longer have a relay on-board.
Below is my code:
Each time I flick a switch or push a button on the TX, the camera takes 3 photos. Job done. AF is integral.
#include <multiCameraIrControl.h>

const int buttonPin = 2;
int buttonState = 0;

Nikon D5000(9);

void setup(){
    pinMode(13, OUTPUT);
    pinMode(buttonPin, INPUT);
}

void loop(){
    buttonState = digitalRead(buttonPin);
    if (buttonState == HIGH) {
        // turn LED on:
        digitalWrite(13, HIGH);
        D5000.shutterNow();
        delay(800);
        D5000.shutterNow();
        delay(800);
        D5000.shutterNow();
        delay(800);
        digitalWrite(13, LOW);
        delay(1000);
    }
    else {
        // turn LED off:
        digitalWrite(13, LOW);
    }
}
Below is the library you need:
Arduino multiCameraControl Library
If you want to use this on an aircraft or multirotor, just use the Arduino Pro Mini instead of the Uno.
The Mini is 35mm x 18mm, small enough to swallow... Also you can power the board via the RX, so no extra wires.
Just a small box plugged into your RX and facing your camera.
Hopefully this helps someone like it helped me.
G:)
UPDATE
There's no need for the relay switch! Thanks to Greg for the idea and code!
Just plug the signal cable from the RX directly into pin 2 of the Arduino.
#include <multiCameraIrControl.h>

const int buttonPin = 2;
unsigned long buttonState;

Nikon D5000(9);

void setup(){
    pinMode(13, OUTPUT);
    pinMode(buttonPin, INPUT);
    Serial.begin(9600);
}

void loop(){
    // Read the pulse width (in microseconds) coming straight from the RX channel
    buttonState = pulseIn(buttonPin, HIGH);
    Serial.println(buttonState);
    if(buttonState < 1500){
        // turn LED on and take 3 photos:
        digitalWrite(13, HIGH);
        D5000.shutterNow();
        delay(800);
        D5000.shutterNow();
        delay(800);
        D5000.shutterNow();
        delay(800);
        digitalWrite(13, LOW);
        delay(1000);
    }
    else {
        // turn LED off:
        digitalWrite(13, LOW);
    }
}
Project completed using Arduino Pro Mini
A little bit of heat shrink around the board and we're in business :)
Hi, could you share the library again? The link is dead... thanks
Hi,
Probably a silly question, but now with the APM 2.5 board, is it possible to use just one of the optional I/O ports to control the IR LED for the shutter control, uploading the appropriate code to the board of course?
Thanks
@Ahmed, it should work with all Canons, Minoltas, Nikons, Olympus', Pentaxs', and Sonys provided they have IR ports... :)
Good work! Can you please post a list of camera models which can be controlled?
@Andy, the best is either to buy one from flytron or build your own. $40 from flytron or DIY for $18.
The nice thing about the one I built is it's universal; it should work with most cameras that have an IR port.
An awesome light and tough camera is the new Nikon AW100. It's waterproof, shock resistant and has a built-in GPS for geotagging. It's the next camera I'm going to buy.
@ Andy, very cool :)
@Melih, Thanks, I like making my own shields :)
Or you can also use this shield for programming ATtiny ICs.
If you want it even smaller and lighter, take a look at this video about putting Arduino code onto a single small chip. Super small, super light, super cheap...
Do any of you have a suggestion for a camera to use this with (preferably kind of small)? I have been looking for a shutter release system just like this, but don't know what requirements I need to look for in the camera.
Job done :)
But thanks for the help :) | https://diydrones.com/profiles/blogs/arducsr-universal-remote-arduino-ir-camera-shutter-release-system | CC-MAIN-2022-05 | refinedweb | 786 | 72.97 |
How to set different border width for top,bottom?
Do I need to use BorderImage and make an image containing a border to specify the left, top, right and bottom border widths?
I would prefer to just set the borders with something like border.top.width: 2 or border.top.color: "blue", and then change the state of the button to make a down/up button state.
Is it possible to do something like this without the use of another image?
You could perhaps create a "myStyle.qss" sheet file and then launch your application with the --stylesheet myStyle.qss option? Something like:

@QWidget {
border: 0px solid blue;
border-top: 2px solid red;
border-bottom: 4px solid yellow;
}@

Edit: sorry, you're right, it's not a solution for Qt Quick.
Hi,
I am really new to this, but I don't think that works on QML elements; at least I don't know how to make it work.
You are right, ngrosjean is referring to the CSS styling system for widgets. I don't know how this would work in the Quick world.
Okay, thank you.
Hope someone know of a neat way of doing it :)
I don't know about the left and right side, but you could use a gradient for this inside of the rectangle. It isn't an image, but you could set a very small gradient at the top and bottom which would be conditional on the Onpressed option. If you need a fuller code example let me know. This would easily pull off what you are looking for I believe.
On further reading I saw you don't want this on just pressed, so what you would have to do with the rectangle is make a bool variable. Then set it to true or false when the button is clicked or active, if you will. Then you could set the color on the true and false of that variable. Say you name the variable active in a rectangle with id: Rect. Your gradient color code would look something like this.
@
color: Rect.active ? "blue" : "white"
@
See the Gradient documentation for the rest of the gradient help.
That sounds interesting, not perfect but at least it will do for now. I would like a perfect 3d pushButton effect. =)
Can you please give me the full code example? I am not good with gradients. I never manage to make them look the way I want, not even close =)
I have fixed the gradient part but I read that it is not a good thing to change a gradient with a state change. So I don't think this is a working solution for me.
shullw: I see you did not mention states... =) I have found an example that I have changed to look like something you tried to explain to me :)
@
import Qt 4.7

Rectangle {
    id: container
    //property variant text
    signal clicked

    height: text.height + 10; width: text.width + 20
    //border.width: 1
    radius: 4
    smooth: true

    gradient: Gradient {
        GradientStop { position: 0.0; color: !mouseArea.pressed ? "gray" : "darkgray" }
        GradientStop { position: 0.1; color: "black" }
        GradientStop { position: 0.9; color: "black" }
        GradientStop { position: 1.0; color: !mouseArea.pressed ? "darkgray" : "gray" }
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        onClicked: container.clicked()
    }

    Text {
        id: text
        //anchors.centerIn: parent
        font.pointSize: 10
        y: !mouseArea.pressed ? parent.height/2 - text.height/2 - 2 + 2 : parent.height/2 - text.height/2 + 1
        x: parent.width/2 - text.width/2
        text: "Click Me"
        color: "white"
    }
}
@
Yes, it is safe and highly reliable to hire dissertation help online. There is a massive advantage when you opt to hire a tutor or an online dissertation service: you get a 24/7 customer service option that will be with you until you are satisfied with the work. The experts writing your dissertation will provide you with a thoroughly researched paper that will fetch you nothing less than an A+ grade.
Tag: helper
magento2 – Magento 2 Model file Const get in Helper
Magento 2 Model file Const get in Helper.
I want to do it this way:
File Path: GetSomeMojo\CategoryLandingPage\Model\Entity\Attribute\Source\Landingpageproducts.php

<?php
namespace GetSomeMojo\CategoryLandingPage\Model\Entity\Attribute\Source;

use Magento\Framework\Option\ArrayInterface;

class Landingpageproducts extends \Magento\Eav\Model\Entity\Attribute\Source\Boolean implements ArrayInterface
{
    const VALUE_NO = 'lpage_no';
    const VALUE_NEW = 'lpage_new';
    const VALUE_FEATURED = 'lpage_featured';
    const VALUE_SALE = 'lpage_sale';

    protected $_options;

    public function getAllOptions()
    {
        return [
            ['value' => self::VALUE_NO, 'label' => __('No')],
            ['value' => self::VALUE_NEW, 'label' => __('New Products')],
            ['value' => self::VALUE_FEATURED, 'label' => __('Featured Products')],
            ['value' => self::VALUE_SALE, 'label' => __('Sale Products')]
        ];
    }
}
I want to get the above const values in a helper file.
So please help me: how can I get the const in a helper?
THANKS.
magento2 – Override a helper using Plugin
I have been trying to use a plugin to override
Magento\Sales\Helper\Reorder.php, but I am not sure how, and I have been stuck with this problem. Basically I am trying to override a function in this helper file. I have added a comment inside the code indicating what I would like to change.
/**
 * Check is it possible to reorder
 *
 * @param int $orderId
 * @return bool
 */
public function canReorder($orderId)
{
    $order = $this->orderRepository->get($orderId);
    if (!$this->isAllowed($order->getStore())) {
        return false;
    }
    $currentOrder = $this->registry->registry('current_order');
    if ($this->customerSession->isLoggedIn() || isset($currentOrder)) {
        // WHAT I AM TRYING TO DO - change canReorder() to canReorderIgnoreSalable()
        return $order->canReorderIgnoreSalable();
    } else {
        return false;
    }
}
How can I do this through plugin?
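In Magento 2, a plugin is declared in the module's etc/di.xml and implemented in a separate class; an around plugin can replace the original method body. A sketch of the declaration (the Vendor\Module names are placeholders, not from the question):

```xml
<!-- etc/di.xml; Vendor\Module names are placeholders -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">
    <type name="Magento\Sales\Helper\Reorder">
        <plugin name="vendor_reorder_override"
                type="Vendor\Module\Plugin\ReorderPlugin"
                sortOrder="10"/>
    </type>
</config>
```

The plugin class would then define aroundCanReorder(\Magento\Sales\Helper\Reorder $subject, callable $proceed, $orderId) and can return its own result instead of calling $proceed($orderId).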
list manipulation – KenKen Puzzle Helper – Dropping order-less sequences
In the following example, I am generating all the variants of a $9 \times 9$ KenKen puzzle that come in groups of three using addition that result in $18$.
data = Select[Tuples[Range[9], 3], Plus @@ # == 18 &]
This generates
${{1,8,9},{1,9,8},{2,7,9},{2,8,8},{2,9,7},{3,6,9},{3,7,8},{3,8,7},{3,9,6},{4,5,9},{4,6,8},{4,7,7},{4,8,6},{4,9,5},{5,4,9},{5,5,8},{5,6,7},{5,7,6},{5,8,5},{5,9,4},{6,3,9},{6,4,8},{6,5,7},{6,6,6},{6,7,5},{6,8,4},{6,9,3},{7,2,9},{7,3,8},{7,4,7},{7,5,6},{7,6,5},{7,7,4},{7,8,3},{7,9,2},{8,1,9},{8,2,8},{8,3,7},{8,4,6},{8,5,5},{8,6,4},{8,7,3},{8,8,2},{8,9,1},{9,1,8},{9,2,7},{9,3,6},{9,4,5},{9,5,4},{9,6,3},{9,7,2},{9,8,1}}$
I can then do something to search for repeated cases without order
Cases[data, {OrderlessPatternSequence[1, 8, 9]}]
This generates the following (I want to delete everything after $\{1,8,9\}$ from data, but to do it for each unique set of three digits).
$${{1,8,9},{1,9,8},{8,1,9},{8,9,1},{9,1,8},{9,8,1}}$$
This approach has two drawbacks: I had to know the sequence to test for before I could use it to drop all the repeats from data, and I would have to repeat this for every unique sequence.
Is there a simple way to create
data2 = some_fancy_command(data)
It produces data2 (note – I don’t care about the commas either), which only has unique 3-digit numbers regardless of order
$${{189},{279},{288},{369},{378},…}$$
What is the easiest way to do that?
Note that I am familiar with existing solvers, but I only want a helper as opposed to a solver.
Aside: My goal is to have a tool, maybe as a CDF or just a Mathematica notebook, where I enter all the cages, their type, and the size of the puzzle, and it provides hints on all the numbers that can go into each cage.
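For what it's worth, one way to get there in Mathematica is to sort each triple and drop duplicates, e.g. data2 = Union[Sort /@ data]. The same de-duplication can be sanity-checked outside Mathematica; a small Python sketch (the variable names just mirror the question):

```python
from itertools import product

# All ordered triples from 1..9 summing to 18 (the analogue of the Select/Tuples call).
data = [t for t in product(range(1, 10), repeat=3) if sum(t) == 18]

# Keep one representative per order-less triple by sorting each and de-duplicating.
data2 = sorted(set(tuple(sorted(t)) for t in data))

print(len(data), len(data2))   # 52 11  (52 ordered triples collapse to 11 unique ones)
print(data2[:3])               # [(1, 8, 9), (2, 7, 9), (2, 8, 8)]
```

The first unique triples match the desired {189}, {279}, {288}, … output above.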
magento2 – Get userid of the admin logged in magento 2 helper
Below is a helper with a dependency; you can see within one of its functions how the backend user id is read.
<?php
namespace Mbs\BackendScreen\Helper;

use Magento\Framework\App\Helper\AbstractHelper;
use Magento\Framework\App\Helper\Context;

class AdminUserReaderHelper extends AbstractHelper
{
    /**
     * @var \Magento\Backend\Model\Auth\Session
     */
    private $authSession;

    public function __construct(
        Context $context,
        \Magento\Backend\Model\Auth\Session $authSession
    ) {
        parent::__construct($context);
        $this->authSession = $authSession;
    }

    public function doSomethingWithMyAdminUserId()
    {
        // ...
        $adminUserId = $this->authSession->getUser()->getId();
        // ...
    }
}
dependency injection – How to solve the dilemma of helper functions relying on an object?
My issue can be concisely described as: is there any way (in PHP, or a pattern) to force default parameters at the time a class is called with the keyword
use so that people can just…well, use it, instead of having to deal with setup?
I usually have my functions that don’t really depend on much and don’t have side-effects inside a
Wrappers class.
Inside that
Wrapper class, I found myself to be needing an object, a dependency. Now, of course, I have a service provider that I can retrieve it from. It looks like this:
use Services;

class Wrappers {
    public function helpMeDoSomething() {
        // an object implementing MyServiceInterface
        $service_i_need = Services::get( 'name' );
        // do something with the service.
    }
}
But, of course, that’s heavily problematic. Even if I do very granular & proper checking on
$service_i_need, I’m still hiding a dependency. When someone looks at the function, it’s not directly clear that it relies on a
MyServiceInterface and so, given I was out of ideas, I simply decided to go up one level and make the
Wrappers depend on the
MyServiceInterface, then wrap it all inside a service itself, so:
class Wrappers {
    public function __construct( MyServiceInterface $service_i_need ) {
        $this->service_i_need = $service_i_need;
    }

    public function helpMeDoSomething() {
        // We can use $this->service_i_need here!
    }
}
Great! Now I just register my service:
Services::register( 'wrappers', new Wrappers( new ServiceINeed ) ) and all’s good.
Well, except now my
Wrappers class needs instantiation, which means I can’t
use Wrappers; at the top of my document, so, as a developer that wants to use
Wrappers, I’d now have to do:
use Services;

$wrappers = Services::get( 'wrappers' );
$wrappers->helpMeDoSomething();
To me, use is an absolute (good) word: it shows whoever's reading the document that it's a dependency that's needed 100% of the time and doesn't need any form of instantiation or setup.

You see, my Services package can be brought into a document with use Services, and up until that point we're roughly on the same page as doing use Wrappers. However, the problem with this is that my Wrappers are supposed to be only no-side-effects functions that you can use on your data, whereas a service is an object that you should definitely type-hint, so the expected outcome of use Services is wildly different from use Wrappers.
I’m stuck.
All I want is to globally expose a default implementation of a class such that its functions can be used statically. In short, I want to initialize Wrappers on my own as the creator of this package, then for others to just do use Wrappers; and then use Wrappers::helpMeDoSomething()… and as I implemented every solution, although basic, it dawned on me: whatever uses Wrappers::helpMeDoSomething() also indirectly depends on the service that Wrappers depends on.

Basically, the "blindness" level of whoever's using Wrappers is as deep as the function call tree: if you're calling my helpMeDoSomething() inside function A, then A gets called inside B, then B inside C, you are depending on a MyServiceInterface object… you just don't know it, or it's very deeply hidden.
How to solve this?
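One common resolution is the facade pattern: a class exposing static methods that delegate to a default instance which the package author wires up exactly once. A language-agnostic sketch of the idea, in Python rather than PHP for brevity (all names are illustrative):

```python
class ServiceINeed:
    """Stand-in for an object implementing MyServiceInterface."""
    def do_something(self):
        return "did something"

class Wrappers:
    _impl = None  # the default instance, configured once by the package author

    @classmethod
    def configure(cls, impl):
        cls._impl = impl

    @classmethod
    def help_me_do_something(cls):
        # Callers use Wrappers statically; the dependency stays swappable for tests.
        return cls._impl.do_something()

# Package author wires the default once; consumers never deal with setup.
Wrappers.configure(ServiceINeed())
print(Wrappers.help_me_do_something())  # did something
```

The hidden-dependency objection still applies, since anything calling Wrappers transitively depends on the configured service, but at least the wiring happens in exactly one documented place, and tests can swap the instance.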
How to merge two descending singly linkedlist into one ascending linkedlist using Recursion without any helper method?
Here is my solution:
I want to know if my solution fits the requirements 100%, or if there is a better solution.
Constraint of the question:
must be singly linkedlist
must use recursion, and no helper methods allowed.
The method must return the head node of the merged list.
class Solution {
    Node dummy = new Node(0, null); // the dummy node to store the merged result

    public Node mergeAscend(Node a, Node b) {
        if (a == null && b == null) { // base case
            return null;
        } else {
            if ((a != null && b == null) || a.value >= b.value) { // insert "a" after dummy
                // store the next node of the current a, before pointing a.next to dummy.next
                Node store_a_next_node = a.next;
                // insert Node "a" between dummy and dummy.next
                a.next = dummy.next;
                dummy.next = a;
                mergeAscend(store_a_next_node, b);
            } else if ((a == null && b != null) || a.value < b.value) { // insert "b" after dummy
                Node store_b_next_node = b.next;
                b.next = dummy.next;
                dummy.next = b;
                mergeAscend(a, store_b_next_node);
            }
        }
        return dummy.next;
    }
}
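One thing worth double-checking in the code above: when a is null and b is not, the first if still evaluates a.value >= b.value (the || short-circuits only when its left side is true), which would throw a NullPointerException. The same prepend-the-larger idea, with the null checks ordered to avoid that, in Python for illustration (the accumulator default argument stands in for the dummy node):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def merge_ascend(a, b, acc=None):
    """a and b are heads of descending lists; returns the merged ascending list."""
    if a is None and b is None:
        return acc
    # Take whichever current head is larger and prepend it to the accumulator;
    # values are consumed in descending order, so the result ends up ascending.
    if b is None or (a is not None and a.value >= b.value):
        rest = a.next
        a.next = acc
        return merge_ascend(rest, b, a)
    rest = b.next
    b.next = acc
    return merge_ascend(a, rest, b)

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = Node(x, head)
    return head

def to_list(node):
    out = []
    while node:
        out.append(node.value)
        node = node.next
    return out

print(to_list(merge_ascend(from_list([5, 3, 1]), from_list([6, 4, 2]))))
# [1, 2, 3, 4, 5, 6]
```

from_list/to_list are only test scaffolding; the merge itself uses no helper besides the accumulator argument.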
Need Helper for FunHomeBiz.com
Post edited: 6/14/20
context sensitive help – What is the name for a pattern featuring helper text in a semi-transparent overlay?
I am seeing an emerging design pattern in web apps that is used for helping new users get oriented to a page or application.
It consists of showing a diagram with succinct helper text over a semi-transparent overlay, sometimes with arrows pointing to specific controls on the page. One of the best examples of this I have seen is in UX Pin, an online wireframing/design tool.
Has anyone ever utilized this pattern – and if so, what is it called? Or how did you refer to it?
I am also interested in learning how it is accomplished. Is there a tool or plug-in that might be useful for achieving this effect, and is it possible to do this in a reusable fashion without placing static text in a transparent png?
magento2 – How to Use Helper Function in checkout_cart_index.xml?
I’ve to override checkout_cart_index.xml file from vendor/Magento for removing estimate shipping and discount code block!
It’s working successfully but my requirement is…it only should run when an admin or user select Yes from config part.I’ve tried ifconfig part but it’s not working. Can anyone suggest any better idea to fulfil this requirement!? Here’s my code of checkout_cart_index.xml. In this code set true in checkout.cart.coupon part is for removing Discount code and the above part is for for removing estimated shipping block and also override shipping.phtml
<referenceBlock name="checkout.cart.shipping">
    <action method="setTemplate">
        <argument name="template" xsi:type="string">vender_extensionname::shipping.phtml</argument>
    </action>
    <arguments>
        <argument name="jsLayout" xsi:type="array">
            <item name="components" xsi:type="array">
                <item name="block-summary" xsi:type="array">
                    <item name="config" xsi:type="array">
                        <item name="componentDisabled" xsi:type="boolean">true</item>
                    </item>
                </item>
            </item>
        </argument>
    </arguments>
</referenceBlock>
<referenceBlock name="checkout.cart.coupon" remove="true"/>
Here’s my config part
in which, if the user sets Discount Code to Yes, then it should be displayed, but if one selects No, then it should not be displayed. I've created a helper Data.php for fetching that yes/no value, but I'm confused about how to use the helper function in my XML part.
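For reference, Magento 2 layout XML can call a helper directly when passing a block argument via xsi:type="helper". A hedged sketch (the helper class and method names are placeholders, not from the question):

```xml
<referenceBlock name="checkout.cart.coupon">
    <arguments>
        <!-- Vendor\Module\Helper\Data::isCouponEnabled is a placeholder -->
        <argument name="is_enabled" xsi:type="helper"
                  helper="Vendor\Module\Helper\Data::isCouponEnabled"/>
    </arguments>
</referenceBlock>
```

The template can then read that argument and decide what to render. Alternatively, an ifconfig="section/group/field" attribute on a block ties its visibility directly to a system configuration flag, though support on referenceBlock varies by Magento version.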
The Tree Construction is described in "XSL Transformations" [XSLT].
The provisions in "XSL Transformations" form an integral part of this Recommendation and are considered normative.
The XSL namespace has the URI http://www.w3.org/1999/XSL/Format.
NOTE: The 1999 in the URI indicates the year in which the URI was allocated by the W3C. It does not indicate the version of XSL being used.
XSL processors must use the XML namespaces mechanism [XML Names] to recognize elements and attributes from this namespace. Elements from the XSL namespace are recognized only in the stylesheet, not in the source document. Implementors must not extend the XSL namespace with additional elements or attributes. Instead, any extension must be in a separate namespace. An element from the XSL namespace may have any attribute not from the XSL namespace, provided that the expanded-name of the attribute has a non-null namespace URI. The presence of such attributes must not change the behavior of XSL elements and functions defined in this document. Thus, an XSL processor is always free to ignore such attributes, and must ignore such attributes without giving an error if it does not recognize the namespace URI. Such attributes can provide, for example, unique identifiers, optimization hints, or documentation.
It is an error for an element from the XSL namespace to have attributes with expanded-names that have null namespace URIs (i.e., attributes with unprefixed names) other than attributes defined for the element in this document.
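As an illustration of the attribute rule (the element content and the extension namespace are invented for the example), a stylesheet may attach a non-XSL attribute to an XSL element as long as the attribute lives in its own namespace:

```xml
<fo:block xmlns:fo="http://www.w3.org/1999/XSL/Format"
          xmlns:ex="http://example.com/extensions"
          ex:
  A processor that does not recognize the http://example.com/extensions
  namespace must ignore ex:render-hint without giving an error.
</fo:block>
```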
What is your best advice on when to take advantage of the Section 179 deduction on business property versus depreciation? I am trying to finish up my 2007 return (oops) and trying to determine how to handle the purchase of a computer and an industrial sewing machine for a new business started in 2007. I also attended some training and had tuition and travel expenses prior to starting the business. Can I treat those as "start-up costs" and take the $5,000 deduction? If so, what is the election statement that must be attached to the return?
Why can't I see your reply???
Thanks, XXXXX XXXXX need some clarification. New in 2008 is that you don't have to send a statement to elect to use the $5k for start-up costs, but this return is for 2007. Do I need to send an attached statement and, if so, what does it need to say?
Thanks so much.
Just one last question. Since I am filing this late, will the IRS allow me to deduct the start-up costs, or, because of the late filing, will it force me to amortize them? It seems to me that I did see that mentioned somewhere in my "research". Should I even bother trying, or should I be okay?
Again, thanks so much. | http://www.justanswer.com/tax/1oz3z-best-advice-when-advantage-section.html | CC-MAIN-2014-35 | refinedweb | 213 | 81.53 |
public class Solution {
    public List<Integer> findAnagrams(String s, String p) {
        List<Integer> res = new ArrayList<>();
        if (s == null || p == null || s.length() == 0 || p.length() == 0) {
            return res;
        }
        int len1 = s.length(), len2 = p.length();
        if (len2 > len1) return res;
        int[] anagram = new int[128];
        for (int i = 0; i < len2; i++) {
            anagram[s.charAt(i)]++;
            anagram[p.charAt(i)]--;
        }
        int diff = 0;
        for (int i : anagram) {
            if (i != 0) diff++;
        }
        for (int i = len2; i < len1; i++) {
            if (diff == 0) res.add(i - len2);
            char c1 = s.charAt(i);
            char c2 = s.charAt(i - len2);
            if (c1 == c2) continue;
            anagram[c1]++;
            anagram[c2]--;
            if (anagram[c1] == 1) diff++;
            else if (anagram[c1] == 0) diff--;
            if (anagram[c2] == -1) diff++;
            else if (anagram[c2] == 0) diff--;
        }
        if (diff == 0) {
            res.add(len1 - len2);
        }
        return res;
    }
}
@linyuan1212 The diff means how many characters differ between the sliding window of size len2 and p. For example, with s: "cbaebabacd" and p: "abc", in the first sliding window "cba" the diff is 0. Then moving to the next window "bae", compared with "cba" the diff is 2, because there are two differing characters: "e" in "bae" and "c" in "cba". So while the window moves, we can just maintain the diff to know whether it is matched or not.
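The same sliding window can be expressed more compactly in Python, replacing the explicit diff counter with Counter equality (a sketch for illustration; it trades the O(1)-per-step diff bookkeeping above for a Counter comparison):

```python
from collections import Counter

def find_anagrams(s, p):
    if len(p) > len(s):
        return []
    need = Counter(p)
    window = Counter(s[:len(p)])          # first window of size len(p)
    res = [0] if window == need else []
    for i in range(len(p), len(s)):
        window[s[i]] += 1                 # character entering the window
        window[s[i - len(p)]] -= 1        # character leaving the window
        if window[s[i - len(p)]] == 0:
            del window[s[i - len(p)]]     # drop zero entries so == stays clean
        if window == need:
            res.append(i - len(p) + 1)
    return res

print(find_anagrams("cbaebabacd", "abc"))  # [0, 6]
```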
Introduction to Tkinter Scrollbar
The Tkinter Scrollbar widget provides a slide controller that is used to add vertical scrolling to widgets such as the Listbox, Canvas, and Text widgets. The Scrollbar widget can also be used to create a horizontal scrollbar, for example on Entry widgets. A vertical scrollbar is used to scroll through content to see all of it when it does not fit vertically, and a horizontal scrollbar is used to scroll over the content horizontally. The Scrollbar() constructor is used to get a scrollbar with the attributes: master and option/options.
Syntax:
w = Scrollbar( master, option/options, … )
Attributes:
- Master: This master attribute of the Tkinter scrollbar represents the parent window.
- Option/options: A list of commonly used options for the scrollbar widget, given as key-value pairs separated by commas.
Tkinter Scrollbar Widget
The most commonly used options for the widget are:
- activebackground: The color of the slider and arrowheads when the mouse pointer is over them.
- bg: The background color of the slider and arrowheads when the mouse pointer is not over them.
- bd: The width of the 3-d border around the whole perimeter of the trough, and also the width of the 3-d effects on the slider and arrowheads. By default there is no border around the trough, and there is a 2-pixel border around the slider and the arrowheads.
- command: A procedure to be called every time the scrollbar is moved.
- cursor: The cursor that appears when the mouse pointer is over the scrollbar.
- elementborderwidth: The width of the borders around the arrowheads and slider. By default the elementborderwidth is 1; you can set any border width as required.
- highlightbackground: The color of the focus highlight when the scrollbar does not have focus.
- highlightcolor: The color of the focus highlight when the scrollbar has focus.
- highlightthickness: The thickness of the focus highlight. By default the thickness is 1; set it to 0 to suppress the display of the focus highlight.
- jump: Controls what happens when the user drags the slider. By default (jump=0) the callback command is called on every small drag of the slider. If set to 1, the callback is not called until the user releases the mouse button.
- orient: Sets the orientation, as either orient=HORIZONTAL or orient=VERTICAL.
- repeatDelay: Controls how long button 1 must be held down in the trough before the slider starts moving in that direction repeatedly. By default repeatdelay=300, and the units are milliseconds.
- repeatInterval: The interval at which the slider movement repeats.
- takefocus: Normally you can tab the focus through the scrollbar widget; set this option to 0 if you don't want this behavior.
- troughcolor: The color of the trough.
- width: The width of the scrollbar (its x dimension for a vertical scrollbar and its y dimension for a horizontal one). By default the width is 16 (width=16).
Methods of Tkinter Scrollbar
These are the methods used with Tkinter Scrollbar objects:
- get(): The get() method returns two numbers a and b which describe the slider's current position. The value a gives the position of the left or top edge of the slider, for horizontal and vertical scrollbars respectively; the value b gives the position of the right or bottom edge.
- set ( first, last ): The set() method is used to connect the scrollbar to another widget w: set w's xscrollcommand or yscrollcommand to the scrollbar's set() method. The arguments have the same meaning as the values returned by the get() method.
- pack(): This method sets the alignment of the scrollbar.
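The first/last values exchanged through set() and get() are fractions of the total content. A minimal pure-Python sketch of that protocol (no Tk window needed; the function name and numbers are made up for illustration):

```python
def scroll_fractions(offset, visible, total):
    """Compute the (first, last) fractions a scrollable widget passes to Scrollbar.set()."""
    first = offset / total              # fraction of content above/left of the view
    last = (offset + visible) / total   # fraction up to the bottom/right of the view
    return first, last

# A 1000-line document currently showing lines 200..299:
print(scroll_fractions(200, 100, 1000))  # (0.2, 0.3)
```

When the user moves the slider, Tk calls the command (e.g. a Listbox's yview) with the new first fraction, closing the loop between the two widgets.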
Example
The example below implements a Tkinter scrollbar (a vertical scrollbar, aligned to the right) together with a loop that inserts 100 lines of text (0–99) into a Listbox, adding a checkbutton whenever the loop value is exactly divisible by 10.

First, all the Tkinter functions are imported from the Python library using import *. The "master1" and "top1" variables are created as the root windows for the scrollbar and the checkbuttons. The scrollbar1 variable is then created with the Scrollbar() constructor, with master1 as the master attribute and bg as the option attribute. The option attributes and variables can be changed based on our requirements from the list of available options. Then, for alignment, the pack() method is used just like with the Tkinter Checkbutton: pack(side=RIGHT, fill=Y) aligns the scrollbar to the right. The pavanlist1 variable is created with the Listbox() widget, passing the master and the yscrollcommand option wired to scrollbar1.set; Listbox() is helpful for displaying a list of items, and its items are stored in the pavanlist1 variable. Next, a loop with range(100) runs 100 times to insert the values 0–99 into pavanlist1, and prints a checkbutton whenever the loop value is exactly divisible by 10. Finally, the config method is used on the scrollbar widget to set its command to pavanlist1.yview, connecting the scrollbar to the listbox at runtime.
Code:
from tkinter import *
import tkinter

top1 = tkinter.Tk()
CheckVar11 = IntVar()
master1 = Tk()
scrollbar1 = Scrollbar(master1, bg="green")
scrollbar1.pack( side = RIGHT, fill = Y )
pavanlist1 = Listbox(master1, yscrollcommand = scrollbar1.set )
for pavanline1 in range(100):
    pavanlist1.insert(END, "Line Numbers are " + str(pavanline1) + " now in online ")
    if pavanline1 % 10 == 0:
        C11 = Checkbutton(top1, text="Line Number : " + str(pavanline1), variable = CheckVar11)
        C11.pack()
pavanlist1.pack( side = LEFT, fill = BOTH )
scrollbar1.config( command = pavanlist1.yview )
mainloop()
top1.mainloop()
Output:
package org.apache.activemq.systest;

/**
 * A helper class used to stop a bunch of services, catching and logging any
 * exceptions and then throwing the first exception when everything is stopped.
 *
 * @version $Revision: 1.1 $
 */
public class AgentStopper {
    private Exception firstException;

    /**
     * Stops the given service, catching any exceptions that are thrown.
     */
    public void stop(Agent service) {
        if (service != null) {
            try {
                service.stop(this);
            } catch (Exception e) {
                onException(service, e);
            }
        }
    }

    public void onException(Object owner, Exception e) {
        logError(owner, e);
        if (firstException == null) {
            firstException = e;
        }
    }

    /**
     * Throws the first exception that was thrown if there was one.
     */
    public void throwFirstException() throws Exception {
        if (firstException != null) {
            throw firstException;
        }
    }

    protected void logError(Object service, Exception e) {
        System.err.println("Could not stop service: " + service + ". Reason: " + e);
        e.printStackTrace(System.err);
    }
}
I borrowed another lab's MATLAB script which stored 3D location data of two objects (i.e. hands). Now I wish to combine this data with a video of the actual event and create a better picture of the motion of the objects. What is the best way of doing this (if I can do this)?
I'm new to Datavyu so go easy on me!
Thanks in advance :)
asked 07 Sep '14, 15:07 by wiwa
edited 07 Sep '14, 15:13
We're happy to help you get started with Datavyu! If I understand correctly, you want to code a video that you also have motion-tracking data for, and combine the two for later data analysis. If so, I would start off by determining when your video starts compared to the motion-tracking data. Once you've determined that, you can match the times during analysis by subtracting the difference between the two start times. After you've coded the video using Datavyu you can export your data by using a print script or the export file function. Either method can export your data as a .csv file, which you can import into your statistical software along with your MATLAB data.
Let us know if this answers your question and/or you have additional questions.
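The offset arithmetic described above can be sketched in a few lines (the timestamps and variable names are hypothetical):

```python
def align_to_motion_clock(video_events, video_start, motion_start):
    """Map (time, code) pairs from the video clock onto the motion-tracking clock."""
    offset = motion_start - video_start
    return [(t + offset, code) for t, code in video_events]

# Suppose the video recording started 2.5 s after the motion capture began:
events = [(0.0, "reach"), (1.5, "grasp")]
print(align_to_motion_clock(events, video_start=0.0, motion_start=2.5))
# [(2.5, 'reach'), (3.7 if you prefer; here: 4.0, 'grasp')]
```

With both streams on one clock, the coded events and the motion samples can be joined in the statistical software.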
answered 11 Sep '14, 21:49
Hacking NetBeans #2 - Executing Ant Tasks from the IDE
By Roman Strobl on July 26, 2005
In my new plug-in I need to call an Ant task from the IDE. Thanks to Zajo for telling me the secret of how to do that via the NetBeans API. It's quite simple; the hardest thing is (as usual) to find the right classes and methods. Here is the source code:
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import org.apache.tools.ant.module.api.support.ActionUtils;
import org.openide.filesystems.FileObject;
import org.openide.filesystems.FileUtil;
...
String script =
        "<project name=\"ant-copy\" default=\"copy\">" // opening tag reconstructed; attributes illustrative
        + "<target name=\"copy\">"
        + "<copy todir=\"e:\\test2\">"
        + "<fileset dir=\"e:\\test\" includes=\"**/*.txt\"/>"
        + "</copy>"
        + "</target>"
        + "</project>";
try {
    File zf = File.createTempFile("ant-copy", "xml");
    BufferedWriter out = new BufferedWriter(new FileWriter(zf.getAbsoluteFile()));
    out.write(script);
    out.close();
    FileObject zfo = FileUtil.toFileObject(FileUtil.normalizeFile(zf));
    ActionUtils.runTarget(zfo, new String[] {"copy"}, null);
    zf.deleteOnExit();
} catch (IOException e) {
    System.out.println("IO error: " + e);
}
What do I do in the code? First I create an XML file which will serve as the Ant script. I create it as a temporary file in the default temp dir and write the XML into it. To execute the Ant task I call the ActionUtils.runTarget method with these parameters: a FileObject (created from a normalized File), one target to run, and no properties. At the end I mark the file to be deleted when the VM exits. Easy, isn't it?
To be able to compile this code you need to add two dependencies to your plug-in: Ant and the Execution API. When you run this code, the output window opens and shows the results:
Now I can do with my plug-in anything Ant is capable of. Time to generate some powerful Ant scripts.
Posted by Lucian Pintilie on July 27, 2005 at 12:08 AM CEST

ad 1. Sure, I'll fix that.

ad 2. Well, my plug-in needs a generated Ant script depending on the inputs - the build script will be different every time. Maybe I can do it by passing parameters to the Ant script. Thanks for your comments.

Posted by Roman Strobl on July 27, 2005 at 03:55 AM CEST

Posted by Roman Strobl on July 27, 2005 at 04:00 AM CEST

Posted by Lucian Pintilie on July 27, 2005 at 08:09 AM CEST
City Bikes and Elasticsearch Facets
UPDATE: This article refers to our hosted Elasticsearch offering by an older name, Found. Please note that Found is now known as Elastic Cloud.
A small demo of using facets in Elasticsearch on time series.
Introduction
In this article we will implement a solution with Elasticsearch from scratch. No prior knowledge of Elasticsearch is required. The installation of Elasticsearch is however skipped as I will be using a cluster hosted at Found. A local installation is described at elastic.co/guide or you can sign up for a free cluster. Where appropriate I will try to rather do gradual refinement than presenting an advanced, and possibly overengineered, solution right away. This implies that not all examples are the best practice way of doing things and the experience reader probably will spot a few optimizations right away, but as time elapses and more data is accumulated I plan to address the issues in further articles, one low hanging fruit at a time.
The Case
Oslo is one of many cities that have so called city bikes, or bikes for hire. The system works like this: a user walks to a nearby bike rack and unlocks a bike using his card. He may return the bike at any rack in Oslo. To assist users finding a nearby rack with free bikes or locks, the system provides the status of every rack through a webpage or custom smartphone app. In a city like Oslo there is one curious problem with this system: people tend to prefer using bikes downhill. This results in a congestion of bikes in the city centre (literally downtown!). Anyway, this is my theory. I know for a fact that the system operator use trucks for picking up and dropping off bikes. What I don’t know is whether the purpose is to transport the bikes to and from the workshop or to alleviate congestion. My objective for this case study is to explore this using Elasticsearch. Mainly there are two reasons for exploring this. Firstly to satisfy my curiosity and secondly to take a crack at estimating when racks get congested or depleted. The current applications are able to find the nearest available bike or lock, but they are not able to estimate the likelihood of the bike or lock remaining available by the time you get there.
The Data
The operator’s city bike webpage contains a map of all of their bike racks and their statuses. Every bike rack has a name, a position, a bike count and a free locks count. In Scala I express this as a case class:
case class BikeRack(name: String, longitude: Double, lattitude: Double, bicycles: Int, locks: Int, time : Date)
Note the addition of a date. This refers to the time at which the given status was observed.
Indexing
Using TagSoup it’s pretty straightforward to extract the required information from the webpage. TagSoup is a lenient html-parser that uses a best-effort approach for treating any html as xhtml. Combining TagSoup and Scala’s xml-support we can write a parser like this:
def getData() = { val parserFactory = new org.ccil.cowan.tagsoup.jaxp.SAXFactoryImpl val parser = parserFactory.newSAXParser() val source = new org.xml.sax.InputSource("") source.setEncoding("utf-8") val adapter = new scala.xml.parsing.NoBindingFactoryAdapter adapter.loadXML(source, parser) } def parse(data: Node): Seq[BikeRack] = { val timestamp = new Date() val onlinePattern = """.*Ledige sykler: (\d+).*Ledige låser: (\d+).*""".r val disabledPattern = """.*Ikke operativt.*""".r for { divTag <- data \\ "div" if (divTag \ "@class").toString() == "mapMarker" val status = getString(divTag, "data-content") val result = status match { case onlinePattern(bicycles, locks) => Some( BikeRack( getString(divTag, "data-name"), getDouble(divTag, "data-poslng"), getDouble(divTag, "data-poslat"), Integer.parseInt(bicycles), Integer.parseInt(locks), timestamp)) case disabledPattern() => None case x: String => println("Unrecognised format: " + x); None } if result.isDefined } yield result.get } def getString(node: Node, attr: String): String = { (node \ ("@" + attr)).toString() } def getDouble(node: Node, attr: String): Double = { java.lang.Double.valueOf(getString(node, attr)) }
In order to get this sequence of bike racks into Elasticsearch we need a client. Today’s option is Wabisabi. Wabisabi is an easy to use and is a Scala native client for Elasticsearch’s REST API.
def push(racks: Seq[BikeRack]) { val client = new Client("http://
.foundcluster.com:9200/") import ExecutionContext.Implicits.global val futures = for { rack <- racks val future = client.index( index = "oslo", `type` = "bikerack", data = rack.toJson, refresh = false) } yield (rack, future) for ((rack, future) <- futures) { future.onSuccess { case result => { println(s"Indexing: [$rack]. Got response: [${result.getResponseBody()}]") } } } val seq = Future.sequence(futures.map(_._2)); seq.onComplete { case _ => Client.shutdown() } seq.onSuccess{ case l => println("Indexed: " + l.size + " racks") } }
Elasticsearch expects the data to be JSON. With Scala’s string interpolation introduced in 2.10 its pretty straightforward to create JSON and redefine the BikeRack class like this:
case class BikeRack(name: String, longitude: Double, lattitude: Double, bicycles: Int, locks: Int, time : Date) { def toJson = { val utc = TimeZone.getTimeZone("UTC") val weekDayFormat = new SimpleDateFormat("u") weekDayFormat.setTimeZone(utc); val hourOfDayFormat = new SimpleDateFormat("H") hourOfDayFormat.setTimeZone(utc) val timeStampFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'") timeStampFormat.setTimeZone(utc) s""" { "name" : "$name", "location" : { "lat" : $lattitude, "lon" : $longitude }, "bicycles" : $bicycles, "locks" : $locks, "timestamp" : "${timeStampFormat.format(time)}", "weekday" : ${weekDayFormat.format(time)}, "hourOfDay" : ${hourOfDayFormat.format(time)} }""" } }
For strings and numbers the formatting is native to JSON, but dates and geopositions are a bit trickier. Elasticsearch carefully tries to detect the contents of json strings when new fields are processed. If one formats dates according to one of the standard formats then no specific mapping is required.
Note the extraction of weekday and hour of day into separate fields. This is redundant, but it allows for greater flexibility when building queries that treat time as recurring events rather than a straight continuum.
Elasticsearch is now ready to receive data from our parser. All we have to do is to invoke it regularly over a period of time so we can start trend analysis.
Queries
Now the fun begins. Don’t worry if you don’t have much data in your cluster yet, we will start with some basic queries. Our first example is to calculate the average number of bikes for a given rack. We can do this by using the statistical facet and a simple match query. The match query retrieves all the documents that are named: “92-Blindernveien 5” and the statistical facet calculates the average of the bicycle field for all the documents retrieved.
The Statistical Facet
Simple average for all observations:
curl http://
.foundcluster.com:9200/oslo/bikerack/_search?pretty=true -XPOST -d '{ "query": { "match": {"name": "92-Blindernveien 5"} }, "facets" : { "stat1" : { "statistical" : { "field" : "bicycles" } } } }'
For this demo I have been running the parser every five minutes for more than two weeks, and the total number of bike rack observations in the index is around 54 000. Executing the above query produces a result like this:
{ "took" : 2229, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "failed" : 0 }, "hits" : { "total" : 4878, "max_score" : 4.871404, "hits" : [
] }, "facets" : { "stat1" : { "_type" : "statistical", "count" : 4878, "total" : 85934.0, "min" : 0.0, "max" : 28.0, "mean" : 17.616646166461663, "sum_of_squares" : 1710654.0, "variance" : 40.3413547214603, "std_deviation" : 6.351484450225813 } } }
In essence, the result consists of three parts. A metadata section, a hits section and a facets section. In the hits section we get the total number of documents that matched the query section of our query. In this case a total of 4878 bike rack observations. Unless you specify otherwise, Elasticsearch will include the ten highest ranked documents in the nested hits key. For this query the interesting part is in the facets section. The stat1 key holds the result of our stat1 facet. From this section we see that the average is 17,62 bikes. Combining the average with a standard deviation of 6,35 we can deduce that this bike rack was not empty in 95% of the observations, but the minimum value of 0 tells us that this rack have been observed as depleted in at least one observation.
Using the Hour of Day
When I leave home for work in the morning I’m often running a bit late, so every second counts. I have the option of walking to the closest rack or taking the bus. As it happens, it’s actually faster to go by bike than taking the bus, but the bike rack and the bus stop are located in opposite directions. I therefore use my mobile phone to check the status of the rack. As a matter of fact, I cannot remember the last time the rack was empty in the morning. This begs the question: can Elasticsearch prove that the probability of the rack being empty when I leave for work is very little? With that information at hand I could save those precious seconds it takes to check the current status of the rack. Let’s further refine our query and only consider observations between 09:00 and 10:00.
curl http://
.foundcluster.com:9200/oslo/bikerack/_search?pretty=true -XPOST -d '{ "query": { "bool" : { "must" : [ { "match": { "name": "92-Blindernveien 5"} }, { "match": { "hourOfDay": "9"} } ] } }, "facets" : { "stat1" : { "statistical" : { "field" : "bicycles" } } } }'
The bool query allows us to define several queries and how Elasticsearch should join their results. In this case we require a match in both queries. This time the statistical facet gave us the following result:
"stat1" : { "_type" : "statistical", "count" : 213, "total" : 4716.0, "min" : 10.0, "max" : 28.0, "mean" : 22.140845070422536, "sum_of_squares" : 107494.0, "variance" : 14.44964623421282, "std_deviation" : 3.8012690294443536 }
The minimum observed number of bikes is 10, the mean is 22.14 and the standard deviation is 3,80. Based on these figures we can conclude that the rack has never been depleted between 9:00 and 10:00 in the observed time and is not likely to become depleted at that time in the near future.
To better understand the trends of this particular bike rack we can use the terms_stats facet. The terms_stats facet is similar to the statistical facet, but requires specification of a key field, which it uses to group the documents by, calculate and calculate statistics for every term in that field. Using the terms_stats facet our query looks like this:
curl http://
.foundcluster.com:9200/oslo/bikerack/_search?pretty=true -XPOST -d '{ "query": { "match": {"name": "92-Blindernveien 5"} }, "facets" : { "stat1" : { "terms_stats" : { "key_field" : "hourOfDay", "value_field" : "bicycles", "size": 24, "order": "term" } } } }'
The result shows the statistics for every hour of the day.
"stat1" : { "_type" : "terms_stats", "missing" : 0, "terms" : [ { "term" : "0", "count" : 203, "total_count" : 203, "min" : 10.0, "max" : 27.0, "total" : 3933.0, "mean" : 19.374384236453203 }, { "term" : "1", "count" : 203, "total_count" : 203, "min" : 10.0, "max" : 27.0, "total" : 3939.0, "mean" : 19.40394088669951 }, { "term" : "10", "count" : 218, "total_count" : 218, "min" : 6.0, "max" : 28.0, "total" : 4330.0, "mean" : 19.862385321100916 }, { "term" : "11", "count" : 214, "total_count" : 214, "min" : 3.0, "max" : 27.0, "total" : 3901.0, "mean" : 18.22897196261682 }, { "term" : "12", "count" : 211, "total_count" : 211, "min" : 0.0, "max" : 27.0, "total" : 3003.0, "mean" : 14.23222748815166 }, { "term" : "13", "count" : 211, "total_count" : 211, "min" : 0.0, "max" : 25.0, "total" : 2530.0, "mean" : 11.990521327014218 }, { "term" : "14", "count" : 203, "total_count" : 203, "min" : 0.0, "max" : 25.0, "total" : 1800.0, "mean" : 8.866995073891626 }, { "term" : "15", "count" : 203, "total_count" : 203, "min" : 0.0, "max" : 25.0, "total" : 2112.0, "mean" : 10.403940886699507 }, { "term" : "16", "count" : 204, "total_count" : 204, "min" : 0.0, "max" : 28.0, "total" : 3259.0, "mean" : 15.97549019607843 }, { "term" : "17", "count" : 202, "total_count" : 202, "min" : 4.0, "max" : 27.0, "total" : 3275.0, "mean" : 16.212871287128714 }, { "term" : "18", "count" : 202, "total_count" : 202, "min" : 0.0, "max" : 26.0, "total" : 3276.0, "mean" : 16.217821782178216 }, { "term" : "19", "count" : 202, "total_count" : 202, "min" : 1.0, "max" : 26.0, "total" : 3748.0, "mean" : 18.554455445544555 }, { "term" : "2", "count" : 203, "total_count" : 203, "min" : 10.0, "max" : 27.0, "total" : 3936.0, "mean" : 19.389162561576356 }, { "term" : "20", "count" : 204, "total_count" : 204, "min" : 6.0, "max" : 28.0, "total" : 3548.0, "mean" : 17.392156862745097 }, { "term" : "21", "count" : 202, "total_count" : 202, "min" : 10.0, "max" : 28.0, "total" : 3775.0, "mean" : 18.68811881188119 }, { 
"term" : "22", "count" : 200, "total_count" : 200, "min" : 9.0, "max" : 27.0, "total" : 3848.0, "mean" : 19.24 }, { "term" : "23", "count" : 202, "total_count" : 202, "min" : 10.0, "max" : 27.0, "total" : 3923.0, "mean" : 19.42079207920792 }, { "term" : "3", "count" : 201, "total_count" : 201, "min" : 10.0, "max" : 27.0, "total" : 3892.0, "mean" : 19.36318407960199 }, { "term" : "4", "count" : 203, "total_count" : 203, "min" : 10.0, "max" : 27.0, "total" : 3935.0, "mean" : 19.38423645320197 }, { "term" : "5", "count" : 202, "total_count" : 202, "min" : 10.0, "max" : 27.0, "total" : 3898.0, "mean" : 19.297029702970296 }, { "term" : "6", "count" : 203, "total_count" : 203, "min" : 10.0, "max" : 27.0, "total" : 3734.0, "mean" : 18.39408866995074 }, { "term" : "7", "count" : 203, "total_count" : 203, "min" : 9.0, "max" : 27.0, "total" : 3664.0, "mean" : 18.049261083743843 }, { "term" : "8", "count" : 203, "total_count" : 203, "min" : 12.0, "max" : 28.0, "total" : 4443.0, "mean" : 21.886699507389164 }, { "term" : "9", "count" : 213, "total_count" : 213, "min" : 10.0, "max" : 28.0, "total" : 4716.0, "mean" : 22.140845070422536 } ] }
Histogram Facet
The above result is great, but isn’t it a bit odd that a numeric field is ordered alphabetically? The explanation is this: the terms_stats facet works on strings, and a bug in the first version of my parser ended up with Elasticsearch mapping hourOfDay and dayOfWeek as strings. Of course, the best solution for cleaning up the mess is to reindex the data, but what if reindexing is not feasible and the fields were not indexed in the first place? What we want a more granular resolution? The histogram facet is designed to work on numerals and allows for specification of bucket size at query time.
By using the key_script and value_script attributes in the histogram facet, it allows us to extract proper numerals at query time:
{ "query": { "match": {"name": "92-Blindernveien 5"} }, "facets" : { "histo1" : { "histogram" : { "key_script" : "doc['timestamp'].date.dayOfWeek * 100 + doc['timestamp'].date.hourOfDay", "value_script" : "doc['bicycles'].value" } } } }
And of course, this also allows us to tune the bucket size at query time:
{ "query": { "match": {"name": "92-Blindernveien 5"} }, "facets" : { "histo1" : { "histogram" : { "key_script" : "doc['timestamp'].date.dayOfWeek * 24 * 60 + doc['timestamp'].date.hourOfDay * 60 + doc['timestamp'].date.minuteOfHour", "value_script" : "doc['bicycles'].value", "interval" : 20 } } } }
You might have noticed that I omitted the curl command in the histogram examples. This has nothing to do with the histogram facet in particular, but the fact they use single quotes (’) in the scripts which would have to be escaped. To run such queries I recommend saving them to a file and using the following command:
curl http://
.foundcluster.com:9200/oslo/bikerack/_search?pretty=true -XPOST -d @query.json
Conclusion
In this article we have seen the flexibility of Elasticsearch as a data analysis tool. Go ahead and create a small script and start shoving in some JSON documents, then take it from there. You will probably soon find the need to do some mapping and tweaking of your indexer, and yes, Elasticsearch lets you do that. If there are breaking changes in your mappings, simply create a new index and when it looks good you can create a script to reindex the documents from the old index. The facets are not as flexible as traditional SQL, but they sure are fast, and once you get your head around them they can actually deliver a lot.
Next: City Bikes Part Two - Reindexing and Query Optimization with Filters | https://www.elastic.co/kr/blog/found-city-bikes-and-elasticsearch-facets | CC-MAIN-2017-17 | refinedweb | 2,586 | 70.09 |
import "github.com/ncw/gotemplate/heap"
Package heap provides heap operations for a type A and a comparison function Less. A heap is a tree with the property that each node is the minimum-valued node in its subtree.
The minimum element in the tree is the root, at index 0.
This provides a min-heap with the following invariants (established after Init has been called or if the data is empty or sorted):
!Less(h[j], h[i]) for 0 <= i < len(h) and j = 2*i+1 or 2*i+2 and j < len(h)
A heap is a common way to implement a priority queue. To build a priority queue, use the (negative) priority as the ordering for the Less method, so Push adds items while Pop removes the highest-priority item from the queue. The Examples include such an implementation; the file example_pq_test.go has the complete source.
This example inserts several ints into an IntHeap, checks the minimum, and removes them in order of priority.
Code:
h := &heap.Heap{2, 1, 5} h.Init() h.Push(3) fmt.Printf("minimum: %d\n", (*h)[0]) for len(*h) > 0 { fmt.Printf("%d ", h.Pop()) }
Output:
minimum: 1 1 2 3 5
Less is a function to compare two As
An A is the element in the slice []A we are keeping as a heap
template type Heap(A, Less)
Heap stored in an slice
Fix re-establishes the heap ordering after the element at index i has changed its value. Changing the value of the element at index i and then calling Fix is equivalent to, but less expensive than, calling h.Remove(i) followed by a Push of the new value. The complexity is O(log(n)) where n = len(h).
Init is compulsory before any of the heap operations can be used. Init is idempotent with respect to the heap invariants and may be called whenever the heap invariants may have been invalidated. Its complexity is O(n) where n = len(h).
Pop removes the minimum element (according to Less) from the heap and returns it. The complexity is O(log(n)) where n = len(h). It is equivalent to h.Remove(0).
Push pushes the element x onto the heap. The complexity is O(log(n)) where n = len(h).
Remove removes the element at index i from the heap. The complexity is O(log(n)) where n = len(h).
Updated 2017-05-11. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years). | https://godoc.org/github.com/ncw/gotemplate/heap | CC-MAIN-2019-35 | refinedweb | 432 | 66.13 |
Hi Lauren On Fri, Jul 20, 2012 at 10:11:17AM +0200, Laurens Van Houtven wrote: > Hi, > > > Apparently AMPBoxes aren't Arguments. However, I kind of want an AMPBox (like an AMPList, but only one). Yes it's a pretty common use-case. Right now you may use an AMPList with a single element. This is perfectly fine. Perhaps a wrapper class that restricts it to a single value would suffice? It would be easy to commit to Twisted probably. just a thought > :) I do not necessarily agree. I think namespace prefixing, if well defined, is fine on AMP arguments appearing in the top-level packet. Alternatively the AMPList approach is there. (JSON in AMP Arguments works pretty well I hear, too). -- Cheers, -E > > cheers > lvh > > > > > _______________________________________________ > Twisted-Python mailing list > Twisted-Python at twistedmatrix.com > | http://twistedmatrix.com/pipermail/twisted-python/2012-August/025966.html | CC-MAIN-2015-32 | refinedweb | 136 | 69.18 |
While many frontend developers are familiar with Angular, React or Vue, they are missing out on the next level of what is possible inside modern Browsers.
Content
- Introduction
- How can WebWorkers help?
- Multi Screen Apps
- Multi Screen Apps on mobile?
- How can we include our App code into a worker?
- What are remote methods?
- What is the problem with templates?
- Reducing the DOM by 80%+ in average
- ES8+ code directly inside the Browser
- Get your App documentation out of the box
- Got curious? What is neo.mjs?
- How to get up to speed?
- What is coming next?
1. Introduction
The Web is moving forward at a fast speed. Are you too?
When Angular and React got introduced, Browsers had a poor support for ES6+ features. As a result, the entire UI development got moved into nodejs.
Browsers catched up and can handle JS modules as well as several ESnext features on their own, so it is time for a change.
While it might be convenient to use custom files like .vue, your Browser can not understand them. You need a build process, transpilation or at least a hot module replacement to get your code changes into the Browser. This takes time and you can not debug your real code.
Most Apps today still use the following setup:
Some Apps are using WebWorkers, to move out single expensive tasks, but this is by far not enough. The main thread is responsible for DOM manipulations → the rendering queue. Getting it as idle and light-weight as possible needs to be a main goal.
2. How can WebWorkers help?
What we want to get out of this dilemma are the following setups:
A framework and your App logic can and should run inside a Worker.
With this setup, there are literally zero background tasks inside the main thread. Nothing can slow down your UI transitions or even freeze your UIs.
With a simple switch to use SharedWorkers instead of normal workers, we can even connect multiple main threads.
3. Multi Screen Apps
While this is a niche, it can create an amazing UX to expand Single Page Apps into multiple Browser Windows.
Please watch this 95s demo video:
All Browser Windows share the backend data.
All Windows can communicate directly (without a backend connection).
You can move entire Component Trees around, even with keeping the same JS instances.
4. Multi Screen Apps on mobile?
This will be a big deal. Many mobile Apps are using a native shell, containing multiple WebViews. The SharedWorkers setup can work here as well, resulting in loading framework & App related code only once and more importantly to move Component Trees around WebViews as well.
The Webkit Team (Safari) is thinking about picking this topic up again.
GitHub has added weight to the ticket:
149850 – Reinstate support for SharedWorkers
Bug 149850: Reinstate support for SharedWorkers
bugs.webkit.org
You really should do the same!
5. How can we include our App code into a worker?
Your index.html file will look like this (dev mode):
You just pull in the main thread code of the framework (40KB).
You don’t need to manually create this file.
All you need to do is:
npx neo-app
or clone the repo and use the create-app program.
This will import the WorkerManager & generate the Workers for you, register remote methods and load your App code into the App worker.
If you take a closer look into the files, you will notice that all virtual dom updates get queued into requestAnimationFrame.
You can create a similar setup on your own or let neo.mjs take care of this part for you.
6. What are remote methods?
In case you want to communicate between Workers or to the main thread, you might need an abstraction layer around the required postMessages.
This can be a lot of work, especially in case you want to optionally support SharedWorkers as well.
If you run your own code inside a Worker (inside the neo.mjs context the App Worker), you will notice that:
- window is undefined
- window.document is undefined
You simply can not access the real DOM.
This makes using virtual DOM mandatory.
Still, there are edge use cases where you want to directly access the DOM. Scrolling is a good one.
As the file name suggests, Neo.main.DomAccess is only available inside the main thread. It does not get imported into the App Worker.
All you need to do is adding the methods you want to expose to different workers or the main thread.
Now, inside your scope (the App Worker), you can call these remote methods as promises. They will get mapped into the Neo namespace out of the box.
As easy as this.
7. What is the problem with templates?
Angular, React & Vue are using pseudo XML string based templates.
These templates need to get parsed, which is expensive. You can do this at build time (e.g. Svelte), but then you can no longer easily modify them at build time. It is possible (e.g. manually manipulating a JSX output), but at this point no longer consistent to use.
Templates are a mix of dom markup, loops, if-statements and variables. It can slow down your productivity (e.g. scoping) and limit you.
While this topic is controversial for sure, neo.mjs got rid of templates.
Instead, persistent JSON-like structures are in place.
JSON-like means: nested JS objects & arrays, which you can change any way you want during the full component life cycle.
They do not contain variables, if-statements or loops.
You might agree, that JS is just perfect to work with these structures.
You never need to parse them.
Mapping config changes to the vdom is fairly trivial.
You can add flags to specific nodes and use the VdomUtil to fetch them.
You can add the removeDom flag to any node, which will remove the node (or tree) from the real DOM while keeping your vdom structure in place.
8. Reducing the DOM by 80%+ in average
This just mentioned removeDom attribute is incredibly powerful. I just enhanced card layouts to remove all inactive cards from the DOM by default.
You can change it using a config, in case you like to.
While nodes which have the style display:’none’ will get excluded from Browser layout calculations, they are still around.
Removing them reduces the the DOM, which then reduces the memory footprint of your main thread. A lot.
This is an easy way to further boost the performance.
As you can see, the calendar has 4 main views (Day, Week, Month, Year), but only one is inside the DOM.
The SideBar will get removed after collapsing it.
The SettingsContainer will get removed after collapsing it.
Settings contain a TabPanel with 5 tabs, only 1 tab body is inside the DOM at any time.
Your JS instances of all views still exist. You can still map changes to the virtual dom and put it back at any given time. In different spots of your App, if you like to.
Your state will keep in place, meaning in this use case: you can change settings for views, which are no longer inside the DOM.
9. ES8+ code directly inside the Browser
The main item of this article.
Take a look at the following screenshot:
The important things are:
- You can see the threads (Main, App, Data, Vdom)
- The WeekComponent.mjs file is located inside the App Worker
- You can see the real code (this is not a source map)
- You can see the custom class system enhancements: A full blown config system.
This leads to an unmatched debugging experience:
Change your code, reload the Browser and this is it.
No builds or transpilations are needed.
While JS modules are fully supported inside all major Browsers, they are still not supported inside the Worker Scope for Firefox & Safari. The dev teams are working on it.
For neo.mjs, there are Webpack based dist versions in place, which do run fine in Firefox & Safari.
The important part is, that Chrome fully supports it, so you can use the dev mode there and once it is bug free, test the dist/development version in other Browsers.
10. Get your App documentation out of the box
While many libs or frameworks provide a Docs App, this one will only provide documentation views for the framework files.
Using neo.mjs, you will also get documentation views for your own App related code. All you need to do is adding doc comments to your configs, methods and events.
11. Got curious? What is neo.mjs?
neo.mjs is an Open Source project (the entire code base as well as all examples & demo apps are using the MIT license).
Website App:
Repository:
neomjs/neo
neo.mjs enables you to create scalable & high performant Apps using more than just one CPU, without the need to take…
github.com
Meaning: you can use it for free.
It will stay like this.
However, the project is in need for more contributors as well as sponsors.
A lot(!) more items & ideas are on the roadmap.
If you want to contribute to a lovely Open Source project, this would be highly appreciated.
In case the project has or will have business value for your company: signing up as a sponsor will allow me to put more time into it, resulting in a faster delivery time for new things.
12. How to get up to speed?
The probably easiest way to learn neo.mjs, is following the first 2 tutorials on how to create the Covid Dashboard App from scratch.
12. What is coming next?
Right now, I am focussing on the new Calendar Component for the v1.4 release. The goal is to create an excellent UX, ahead of the GMail or native MacOS calendars.
You can take a look at the current progress here:
As the next step, I will polish the events a bit more and start with the drag & drop implementation afterwards.
Once DD is in place, the next item are in app dialogs. We have to grab them by the header to move them around or resize them.
A demo to move dialogs around different Browser Windows is possible and should be stunning.
Once this is done, I will further enhance the Grid / Table implementations. An easier access to column filtering, moving columns around, hiding columns etc.
You can definitely influence the roadmap!
Feel free to use the issues tracker:
Feedback appreciated!
Best regards & happy coding,
Tobias | https://www.coodingdessign.com/javascript/create-blazing-fast-multithreading-user-interfaces-outside-of-nodejs/ | CC-MAIN-2020-50 | refinedweb | 1,761 | 75.2 |
Hey, we showed off some new data types, and we utilized caching on our computed fields to improve performance. We also added queries to narrow down our data.
There are still a few things we should wrap up before we finish:
- Add some automatic refreshing to our Arrivals screen
- Customize the “Stops” screen so that it can provide us with a map of the Stops location
- Hook up the Incidents entity so that we can pull in data regarding all the busted escalators (and DC has a lot of them)
Let’s kick this off first by adding some automatic refreshing to our Arrivals List and Details screen.
Adding Auto Refresh
We should have an Arrivals screen that looks something like this:
This data is updated in real time, but our screen is not updated real time. It’d be nice to not have to hit the “refresh” button every time we want to see new train information.
We can add our own Auto Refresh functionality pretty easily as you will see.
- Open up the Arrivals screen in the screen designer (should be called MetroByArrivalTimeListDetail)
- Right click the screen in the Solution Explorer, and select select “View Screen Code”
- We should now be in the code editor
- Add a using statement for the System.Threading namespace, since we will need to use some of its classes
- Copy and paste the below code into your screen’s class
private Timer myTimer;

// Stop the timer when the screen closes so callbacks don't keep firing
partial void MetroByArrivalTimeListDetail_Closing(ref bool cancel)
{
    this.myTimer.Dispose();
}

// Start the timer when the screen is created: first tick after 2 minutes,
// then every 2 minutes after that
partial void MetroByArrivalTimeListDetail_Created()
{
    TimerCallback tcb = MyCallBack;
    this.myTimer = new Timer(tcb);
    this.myTimer.Change(120000, 120000);
}

public void MyCallBack(Object stateInfo)
{
    // The callback runs on a background thread, so switch to the
    // logic dispatcher before refreshing the screen
    this.Details.Dispatcher.BeginInvoke(() =>
    {
        this.Refresh();
    });
}
Let’s go through what we’re doing with this code.
When our Arrival List and Details screen is created for the first time the Created() method gets invoked. Inside of this method we start a Timer. When we construct the Timer object we pass in a TimerCallBack object which specifies the method to be invoked by the timer – in this case the method is “MyCallBack”. The MyCallback method does the actual call to “Refresh” the screen. Keep in mind that we have to switch to the logic dispatcher before we invoke the Refresh() method. This is because Refresh() can only be invoked from the logic dispatcher, so we are simply making sure here that we are indeed on the logic dispatcher by doing a this.Details.Dispatcher.BeginInvoke() call.
Our Timer is set to fire after 120,000 milliseconds (2 minutes), and then again every 2 minutes after that, for as long as this screen is open. When the screen is closed, the Closing() method is invoked, which makes sure we clean up after ourselves and dispose of the Timer.
If you F5 now, open up the Arrivals List and Details screen and wait 2 minutes you will see the auto refresh in action.
Adding Bing Maps
For our next trick we are going to do something with the latitude and longitude properties that are displayed on the Stops List and Details screen. We can beautify this a bit and show a map instead.
Beth Massi has an awesome demo of her own on consuming OData, using the Bing Maps extension, and a dozen other things. She also reminds us that if we are going to be using Bing Maps in our project we need to get a Bing Maps Key first.
Hang onto your Bing Maps Key, we’ll use it later.
The Bing Maps extension is a VSIX which you can find in the zip file here.
Double click the VSIX file after you download the zip file to install the extension. After installing the VSIX open up the Properties designer for our application and go to the “Extensions” tab. We need to enable the Bing Maps extension now like this:
Let’s go back to our Stops entity now in the entity designer.
Add a new string property to the Stops entity which will be a computed property. Call it “Location”.
The Stops entity should look something like this now:
In the Properties for Location, click the “Edit Method” button. We should now be in the generated computed property method. Copy the below code into the method:
partial void Location_Compute(ref string result)
{
    result = this.Lat + " " + this.Lon;
}
In this method we are setting up our Location computed property so that its value is the Latitude and Longitude coordinates of our Metro stop. The value has to be Latitude followed by Longitude (if you invert it and make it Longitude followed by Latitude you will end up in Antarctica and will completely miss your train).
Now open up our StopsDCMetroListDetail screen. We’ll add the Location property to this screen (I dragged and dropped it onto the screen). Then click on the drop down arrow for the property and switch it to “Bing Map Control”.
On the property sheet for the Location field you should see an option for the Bing Maps Keys. You’ll need to paste in the value here that you got from the site.
At this point we should be ready to F5.
Check it out
We should now have an Arrivals List and Details screen that auto refreshes every couple minutes, and a Stops List and Details screen that shows us a map of where our Metro stop is located.
Our Stops List and Details screen now:
I still want to pull in information for the “Incidents” entity so that we can keep track of all the busted escalators.
Doing this will involve a blog post all by itself, since we will need to do some low level technical things to pull in all that information. But after we do that I think we’ll be able to navigate the DC Metro system with confidence.
Thanks for reading and let me know if you have any questions.
– Matt Sampson
Cool, Thanks for this.
Don't stop, keep em coming 🙂
Excellent example
Great Example Matt.
Matt, please suggest or help me with how to produce OData JSON format in LightSwitch 11 beta.
@Rama – Hey Rama, check out: social.msdn.microsoft.com/…/98bbf743-1bf7-4421-b3ff-a3004509f86f for some information on how to enable JSON format.
You basically need to request the JSON format in the HTTP Web Request that your non-LightSwitch client creates – like so: msdn.microsoft.com/…/system.net.httpwebrequest.accept.aspx
Thank You Matt.
Could you please suggest how we can do two-way OData sync in LightSwitch?
Any sample would be great. I am looking for an easy solution in LS for data sync using OData from SQL Azure to 200 LS client systems.
Regards
Rama
rama@shoppersstop.com.au
@Rama Dwarapudi – I'm not sure quite what you mean. If you connect to an OData Service that allows for Read AND Write operations, then you'll be able to save data changes through the LS Client which will then be persisted on the OData service end.
Sorry for the long delay – Matt S | https://blogs.msdn.microsoft.com/rmattsampson/2012/04/02/odata-apps-in-lightswitch-part-2/ | CC-MAIN-2019-35 | refinedweb | 1,192 | 70.63 |
Monitoring Camel-K applications on Openshift using the Fuse Console — Part 2 (Kamelets)
In an earlier article, we discussed how we can add the Jolokia trait to monitor Camel-K integrations. In this article, we will see how this can be done for Kamelets.
Kamelets (Kamel route snippets) is a new concept introduced in Camel K that allows users to connect to external systems via a simplified interface, hiding all the low-level details about how those connections are implemented. There is a community-driven catalog of reusable Kamelets (Camel route snippets, i.e. connectors) that can be used to stream data from/to external systems into any platform powered by Apache Camel. The catalog definition can be found here.
Let us assume, that we have an integration to read a message from an AMQ broker and push it to a Kafka topic. We can very easily reuse the kamelet definitions from the catalog and create this integration.
Each Kamelet is basically a route template with configuration properties. You need to know which component you want to get data from (a source) and which component you want to send data to (a sink). You connect the source and sink components by adding kamelets in a Kamelet binding as illustrated.
As a prerequisite, we will set up an AMQ broker with the AMQP protocol exposed and create a queue named myqueue.
We will also set up a Kafka broker and create a topic called process-topic.
We will also install the Red Hat Fuse console so that we can monitor the integration. Check out my previous post on instructions to set up the console using operators.
Let us now define the binding for the AMQ source. For this, we will use the AMQP Source Kamelet and the Kafka Sink Kamelet. The Kamelet binding for this integration would look like this.
We will now need to add the Jolokia trait to the Kamelet binding. The Jolokia trait activates and configures the Jolokia Java agent. By enabling this trait, we can register the integration with the fuse console. You can easily tune your KameletBinding with traits configuration adding .metadata.annotations.
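A sketch of such a KameletBinding, with the Jolokia trait enabled through an annotation (the connection property names and endpoints below are illustrative assumptions; check the amqp-source and kafka-sink entries in the Kamelet catalog for the exact spec):

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: amqp-to-kafka
  annotations:
    # registers the resulting integration pod with the Fuse Console
    trait.camel.apache.org/jolokia.enabled: "true"
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: amqp-source
    properties:
      remoteURI: "amqp://broker-amqp-svc:5672"   # illustrative broker endpoint
      destinationName: "myqueue"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-sink
    properties:
      bootstrapServers: "my-cluster-kafka-bootstrap:9092"   # illustrative
      topic: "process-topic"
```

The trait annotation follows the general trait.camel.apache.org/&lt;trait&gt;.&lt;property&gt; pattern, so no change to the integration code itself is needed.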
Let us install the Kamelet binding on the namespace.
kamel install
oc create -f amqp-to-kafka.yaml
Once the integration is built and deployed, you should see it appear on the Fuse Console.
We can now push a message to the AMQ queue and monitor the routes.
Notice how the source, sink, and route1 definitions show up on the left palette. The route definition for route1 is fairly simple with a source and destination.
Let us now explore the source route.
You can see the route definition of the Kamelet along with the exchanges completed. A quick look at the sink route.
Notice how the Kamelet definition hides a lot of complexities and exposes a clean interface to use, and the fuse console still provides an easy way to monitor these.
References:
Monitoring Camel-K applications on Openshift using the Fuse Console | https://snandaku.medium.com/monitoring-camel-k-applications-on-openshift-using-the-fuse-console-part-2-kamelets-39bde66f7a7f?source=read_next_recirc---------3---------------------d093a0e0_49cc_4b21_afbb_492980575f99------- | CC-MAIN-2022-27 | refinedweb | 502 | 65.32 |
greetings;
i get this error when i compile, how do i correct it? C:\Program Files\Microsoft Visual Studio\MyProjects\enum\enum.cpp(11) : error C2676: binary '++' : 'enum game_result' does not define this operator or a conversion to a type acceptable to the predefined operator
here is my code;
Code:
#include <iostream>
using namespace std;

enum game_result {win, lose, tie, cancel};

main()
{
    game_result result;
    enum game_result omit = cancel;

    for (result = win; result <= cancel; result++)
    {
        if (result == omit)
            cout << "The game was cancelled\n";
        else
        {
            cout << "The game was played ";
            if (result == win)
                cout << "and we won!";
            if (result == lose)
                cout << "and we lost.";
            cout << "\n";
        }
    }
    return 0;
}
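The error means a plain enum does not define operator++, so result++ is rejected. One common fix (just a sketch, and the helper name here is made up) is to loop over an int and cast back to the enum:

```cpp
#include <string>

enum game_result { win, lose, tie, cancel };

// Builds the message for one result; `omit` marks the cancelled game.
std::string describe(game_result result, game_result omit)
{
    if (result == omit)
        return "The game was cancelled";
    std::string msg = "The game was played ";
    if (result == win)
        msg += "and we won!";
    if (result == lose)
        msg += "and we lost.";
    return msg;
}

// Instead of `result++`, iterate with an int and cast explicitly:
//   for (int i = win; i <= cancel; ++i)
//       std::cout << describe(static_cast<game_result>(i), cancel) << '\n';
```

The cast is needed because going from int back to an enum type is never implicit in C++.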
73 Reader Comments
Double-tap to zoom is much smarter than previous versions too, at least in my limited testing. With some good addons this might replace Chrome on my phone.
Any word on when the tablet update would hit the beta channel?
I'm finally upgrading to a new phone (with ICS). I currently use Dolphin on my old phone, but I'm willing to try something new.
I remember seeing a comparision of different Android browers a while ago, but I don't know if there's something current. It'd certainly be something I'd be interested in reading.
*Yes I am aware that that's not really their fault, but that doesn't help me either, does it?
Scrolling is very jerky. Tested on the ars mobile page with a us galaxy note.
No text reflow options.
The browser has tendency to resize certain elements. I cannot figure out why some text is really large and some is really small. I think it may be trying to increase the size of clickable elements?
This is still not ready. I like the desktop version but this beta still needs work.
There's probably some heuristics to determine what the "content" of the page is (as opposed to navigation stuff or secondary/tertiary content) and use a larger font for that by default to make it easier to read.
Why is Google screwed up anyway? It's showing a mobile version of the site, but a broken one. Gmail and Reader are showing me stuff I haven't seen since I last used my BlackBerry.
If you're like 99.999% of Android users in the US and still on Gingerbread, skip.
From the reddit post yesterday: Lots of sites, including google, are using webkit specific tags that then don't work for firefox.
Now that's clearly not an optimal solution and reminds us all very much of earlier times, but still: What do I do with a browser that can't render a large part of popular websites correctly? If it can't render things like reddit or google correctly, it's worse than the 2.3 default android browser..
On ICS (I have a Nexus S dev phone) I prefer Chrome, mostly because it tends to pop up a loupe when a touch target is ambiguous. On less mobile friendly sites like the ars forums, jumping to the correct page is a big win. I also feel like I get more consistent page load perf on JS heavy sites, namely one site I work on which is a page based site with 300k of minified+gzip JS.
The final problem is web devs who didn't live through the last browser monoculture writing webkit only mobile sites. I blame this on the CSS WG.
Also, I don't really want all the bookmarks I have on the PC synced with my phone. The phone is for content consumption and not snort signature info, etc. Having all those bookmarks would slow navigation and loading.
Although the new version is a huge improvement, there is still no compelling reason to switch.
It's still Chrome > Opera > Stock > Dolphin > Firefox
I use opera since chrome still has bugs that they need to iron out.
Overall I find the new UI much more intuitive and the tab interface is way better than the default browser's.
There is only XUL.
1. The ability to view websites in my phone exactly as they appear on a desktop computer screen
2. The ability to reflow text at any zoom level by e.g. double tapping
Firefox for Android fulfills neither criteria. At the default zoom level, it already distorts websites by making the text bigger. Then there's only one level of zoom at which it will reflow the text (double-tapping zooms in by a fixed amount and reflows the text, rather than just reflowing the text at the current zoom level).
Unlike Chrome, Firefox runs on pre-ICS devices, supports Adobe Flash, and is customizable with browser extensions. Both browsers are quite good, though. Firefox is faster at some things and Chrome at others.
The html5test.com score is 311 + 9 bonus points. (In nightly builds the score is 334 + 9.)
This is because Google chooses to serve different content to different browsers, and their "nice" mobile sites are often coded without standard styles that work across browsers.
We're hoping to include it in the next major update, which will be on the beta channel in about four weeks. You can test it today with a development snapshot from
Lame sauce.
When you open the tab menu there is a button to view tabs from other computers. Unfortunately the tab menu can only be opened if you have at least two tabs open already. This UI has been improved in the latest nightly builds to make it always accessible.
I'll try sideloading it.
Chrome for ICS is too buggy.
Still using stock for my primary.
@mbrubeck, thanks for the work that you do and for entering the fray to offer feedback.
The state of the mobile web is dismal, and I am thankful that Firefox is there to fight the good fight for a standards based web. When Firefox was released for the desktop, many sites were coded for IE. Folks made the same rendering complaints then as they do now regarding mobile. On the desktop, standards based websites have become the norm, thanks (in large part) to Firefox.
To me, browsers are pretty interchangeable these days; I use Firefox as much for what it represents as I do for what it is.
The html5test.com score is 311 + 9 bonus points. (In nightly builds the score is 334 + 9.)
Actually the new nightly now scores a 383 plus 9 bonus points.
The biggest gains:
* "Microdata":
* Access to the webcam
* DataView in WebGL
There is only XUL.
Ha! Made me smile. Looks like everyone else missed the joke.
Smooth and no glitches here, and since the last few builds never a crash.
There is only XUL.
What a lovely singing voice you must have.
The Google Play store says "This app is incompatible with your Verizon Motorola DROID2." I'm running system version 4.5.621.A955.Verizon.en.US / Android version 2.3.4 (Gingerbread). However, the platforms list clearly includes the Droid 2. I'm guessing the Play store is in error?
Edit: I grabbed Firefox mobile v14.0 via FTP and it seems to run fine.
On a semi-related note, I missed being able to use my Volume buttons for page-up/down. Wish more mobile browsers had that feature.
YAY!
This is the second time I have tried it. For the amount of web browsing I do on the phone, the native browser is 'good enough', but I might be not using it much because it sucks compared to a real desktop browser.
As a bonus, installing Firefox 14 and letting it sync bookmarks and stuff with my desktop browser led to some kind of sync loop and the phone drained its battery to about 12% in about 5 hours.
Luckily I noticed it in time to plug it in to charge a good bit before the end of the work day.
With this update they've given themselves a nice base to work from.
Problems it still has are: 1. no swipe to quit tabs (instead you have to touch a tiny X), 2. does very little to reformat sites other than zoom; the very best mobile browser, bar NONE, for reformatting is Opera. Opera will let you either double tap or pinch to zoom AND reformat (the only browser I've seen do that). FF refuses to reformat beyond very slight zoom of text. It just won't break lines beyond certain points. Frankly, if Opera wasn't so unreliable I'd use it far more often.
Come on, Moz., fix dynamic formatting. After stability and correctness, that is the most important part of a mobile browser.
Is there a way to sync my desktop Chrome bookmarks with Android Firefox?
iandisme wrote:
"There are no widgets.
There is only XUL."
Ha! Made me smile. Looks like everyone else missed the joke.
@namespace url("");
Not all
#include <modelAPI.h>
UsdModelAPI is an API schema that provides an interface to a prim's model qualities, if it does, in fact, represent the root prim of a model.
The first and foremost model quality is its kind, i.e. the metadata that establishes it as a model (See KindRegistry). UsdModelAPI provides various methods for setting and querying the prim's kind, as well as queries (also available on UsdPrim) for asking what category of model the prim is. See Kind and Model-ness.
UsdModelAPI also provides access to a prim's assetInfo data. While any prim can host assetInfo, it is common that published (referenced) assets are packaged as models, therefore it is convenient to provide access to the one from the other.
establish an _IsCompatible() override that returns IsModel()
GetModelInstanceName()
Definition at line 72 of file modelAPI.h.
Option for validating queries to a prim's kind metadata.
Definition at line 161 of file modelAPI.h.
Construct a UsdModelAPI on UsdPrim prim. Equivalent to UsdModelAPI::Get(prim.GetStage(), prim.GetPath()) for a valid prim, but will not immediately throw an error for an invalid prim.
Definition at line 84 of file modelAPI.h.
Construct a UsdModelAPI on the prim held by schemaObj. Should be preferred over UsdModelAPI(schemaObj.GetPrim()), as it preserves SchemaBase state.
Definition at line 92 of file modelAPI.h.
Destructor.
Definition at line 322 of file modelAPI.h.
Returns the type of schema this class belongs to.
Reimplemented from UsdAPISchemaBase.
Return a UsdModelAPI holding the prim adhering to this schema at path on stage. If no prim exists at path on stage, or if the prim at that path does not adhere to this schema, return an invalid schema object. This is shorthand for the following:

UsdModelAPI(stage->GetPrimAtPath(path));
Returns the model's asset identifier as authored in the composed assetInfo dictionary.
The asset identifier can be used to resolve the model's root layer via the asset resolver plugin.
Returns the model's composed assetInfo dictionary.
The asset info dictionary is used to annotate models with various data related to asset management. For example, asset name, identifier, version etc.
The elements of this dictionary are composed element-wise, and are nestable.
Returns the model's asset name from the composed assetInfo dictionary.
The asset name is the name of the asset, as would be used in a database query.
Returns the model's resolved asset version.
If you publish assets with an embedded version, then you may receive that version string. You may, however, cause your authoring tools to record the resolved version at the time at which a reference to the asset was added to an aggregate, at the referencing site. In such a pipeline, this API will always return that stronger opinion, even if the asset is republished with a newer version, and even though that newer version may be the one that is resolved when the UsdStage is opened.
Retrieve the authored kind for this prim.

To test whether the returned kind matches a particular known "clientKind":
Returns the list of asset dependencies referenced inside the payload of the model.
This typically contains identifiers of external assets that are referenced inside the model's payload. When the model is created, this list is compiled and set at the root of the model. This enables efficient dependency analysis without the need to include the model's payload.
Return a vector of names of all pre-declared attributes for this schema class and all its ancestor classes. Does not include attributes that may be authored by custom/extended methods of the schemas involved.
Return true if this prim represents a model group, based on its kind metadata.
Return true if the prim's kind metadata is or inherits from baseKind as defined by the Kind Registry.

If validation is KindValidationModelHierarchy (the default), then this also ensures that if baseKind is a model, the prim conforms to the rules of model hierarchy, as defined by IsModel. If set to KindValidationNone, no additional validation is done.

IsModel and IsGroup are preferable to IsKind("model") as they are optimized for fast traversal.
Return true if this prim represents a model, based on its kind metadata.
Sets the model's asset identifier to the given asset path, identifier.
Sets the model's assetInfo dictionary to info in the current edit target.
Sets the model's asset name to assetName.
Sets the model's asset version string.
Author a kind for this prim, at the current UsdEditTarget. Returns true if kind was successfully authored, otherwise false.
Sets the list of external asset dependencies referenced inside the payload of a model.
Definition at line 131 of file modelAPI.h.
Compile time constant representing what kind of schema this class is.
Definition at line 78 of file modelAPI.h. | https://www.sidefx.com/docs/hdk/class_usd_model_a_p_i.html | CC-MAIN-2021-21 | refinedweb | 793 | 57.98 |
.
Yes, that's a well-focused article. I came across the same problems months back. After reading this, I started recalling them.
I've been using VisualStudio on windows with no problems. Now I'm trying to use Code::Blocks and the Hello World program does this in the command terminal:
---
Process returned 32761 (0x7FF9) execution time : 1.106 s
Press any key to continue.
---
And google isn't helping.
I think we can use "using namespace std;" below the "#include<iostream>" instead of using "std::" as a prefix for every usage of cin, cout and endl.
One thing I've noticed is that this tutorial uses std::
If you put
using namespace std;
at the top of your code
You won't have to put std:: before everything.
I saw the exact same thing, but then after coding for a few weeks (and asking questions), people have been pointing me to this question on Stack Overflow:
To sum up, its good practice to use it!
Uhm, are you sure that's what the Stack Overflow thread says? It says "BAD PRACTICE"
A namespace is basically like a container that is used to avoid naming conflicts in a program, if you have say a "cout" with some other functionality in another namespace you will not be able to use it in your program since the compiler will always use the "std" version of it if you've used the "using namespace std". This may not matter when you are learning and will save some time but its a bad practice in the long run.
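To illustrate the point (all names here are invented for the example): with explicit std:: qualification, your own identifier can safely coexist with a same-named one in the standard library:

```cpp
#include <algorithm>
#include <string>

// Our own file-scope `count`, with no `using namespace std;` in sight.
static int count = 0;

// Counts occurrences of `c` in `s`. std::count stays explicitly
// qualified, so there is no ambiguity with our own `count` above.
int howMany(const std::string& s, char c)
{
    ++count; // unambiguously the variable defined above
    return static_cast<int>(std::count(s.begin(), s.end(), c));
}
```

With a `using namespace std;` at file scope, the bare name `count` could instead become ambiguous once &lt;algorithm&gt; is included.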
Hello, I'm using Visual Studio and the debugger shows me this:
'HelloWorld.exe' (Win32): 'C:\Users\Lenovo\source\repos\HelloWorld\Debug\HelloWorld.exe' loaded. Symbols loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\ntdll.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\kernel32.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\KernelBase.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\msvcp140d.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\vcruntime140d.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\ucrtbased.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\kernel.appcore.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\msvcrt.dll' loaded.
'HelloWorld.exe' (Win32): 'C:\Windows\SysWOW64\rpcrt4.dll' loaded.
The thread 0x232c has exited with code 0 (0x0).
The thread 0x2368 has exited with code 0 (0x0).
The thread 0xa50 has exited with code 0 (0x0).
The program '[2432] HelloWorld.exe' has exited with code 0 (0x0).
When I execute the program without debugging it works fine, but when I open the .exe in its directory the window opens and immediately closes.
what
fatal error: array (iostream etc.): No such file or directory
The above compiler error may be caused by giving .c extension instead of .cpp for your file... :) Worth to mention.
Hi, when I compile this code, it gives me: error: unused variable 'x' [-Werror=unused-variable]

Here is the code:

#include <iostream>

int main()
{
    int x{ 2 + 3 };

    return 0;
}
Someone please help me out
What can I do?
You have declared a variable `x`, but you're not using it. This doesn't make sense so the compiler is warning you.
If you use `x`, the warning goes away. For example, you can print `x`.
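For instance (a minimal sketch, with an invented function name), printing or returning the value counts as a use and silences the warning:

```cpp
#include <iostream>

int compute()
{
    int x{ 2 + 3 };
    std::cout << x << '\n'; // x is now used, so -Werror=unused-variable is satisfied
    return x;
}
```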
When I run the program I get my directory added to the console that outputs "Hello World!". How do I remove this from future projects?
here's a screenshot
thanks btw for the guide! it is very very well done!
The output is from your IDE, not from your program. If you run your program directory, you won't see this message.
Thank you for providing such detailed information.
Dear sir,
How can I learn about your other websites, like learnopengl.com? Can I get a list of all your websites?
Also, is knowledge of artificial intelligence & data structures necessary for a programmer? And is there any website about web development or design like this one?
Thanks for this tutorial
please reply
Hi everyone!!
I'm new here. I just registered for a BSc in Computer Science with one of the universities here in SA, majoring in Maths & Computer Science. Currently busy with the basics... quite interesting. But I am honestly clueless about what I can do/create at this level, or at what level I will be able to create projects using C++.
I wanted to bring up a potential problem Code Blocks users might face when trying to run their executables outside of the IDE.
The exe it creates in a debug build relies on DLL files inside the Code::Blocks MinGW folder, but when running them outside of Code::Blocks the DLLs cannot be found.
The solution is to add the MinGW\Bin to System Path
C:\Program Files\CodeBlocks\MinGW\bin
If you need help on how to edit system path you'll have to google that, apparently windows 10 made it easier to do and there are a few dedicated programs to make it less of a hassle.
I am using Particle IDE to run the "Hello World" code but when I try to compile the code I receive the same note which is:
Error: Could not compile. Please review your code
Thank you for your help.
Hey I'm not very experienced with IDEs but the error seems to be an IDE problem. Solve the problem by trying another IDE. However, I think the author mentioned that you should look up errors online (more specific: thread pages like stack or for your case on particle community). I picked up something for you that might help:
If that didn't help search for yourself! I'm sure you'll find something, I bet you are not the only one with that problem (Searching for a solution online is just like testing and debugging. Give your error message and modify your search input until you'll get a result that helps you).
I followed the guide of the previous version when I created my project, and I still get the error LNK2019
I would have been memey as all hell and given people the lmgtfy link instead of just telling people to look it up on google.
Hey, I am revisiting this website from scratch after slightly over a year of being here.
I have now learned Java and built some Android projects as well!!
I just felt the need to revisit this marvelous website that was my entering point into programming, and to be honest: I MISS POINTERS and operator overloads!!! I miss c++'s complexity!!
Hello everyone, I am glad to be back!!
Hello again! I'm glad to hear you're still programming :)
If Android development is something you want to do, but you don't like Java, you can use C++. Have a look at Qt. There's also the NDK, but I never used that so I can't tell you about it.
FYI if you go to run your program in Code::Blocks in Linux, you'll need to make sure under Settings -> Environment that you have the right Terminal selected for the program to run in. By default in Arch KDE it pulls an error; if you're using the KDE Desktop environment change the drop down to "konsole -e".
Which terminal you end up using is up to you of course. :)
good
My suggestion for common problems would be a tutorial about #include guards. When beginning, I wasted countless hours chasing out linking/naming errors before I finally figured out what include guards were.
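For anyone else hitting this, a minimal include guard looks like the following (the header and macro names are arbitrary examples; the macro just has to be unique per header):

```cpp
// my_math.h -- the guard ensures the contents are seen at most once
// per translation unit, even if the header is included repeatedly.
#ifndef MY_MATH_H
#define MY_MATH_H

inline int add(int a, int b) { return a + b; }

#endif // MY_MATH_H
```

Without the guard, a second inclusion of the header in the same file would redefine its contents and trigger exactly the kind of confusing redefinition/linking errors described above.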
"Debug it! There are tips on how to diagnose and debug your programs later in chapter 1 or 2."
"in chapter 3" (Debugging C++ Programs)?
here: "... before your operating system closes the console window."
0.7: "before your IDE"
here:
0.7: "// .. we find a newline" (but no word "character")
here: "Second, add the following code at the end of your main() function (right before the return statement):"
0.7: Second, add the following code at the end of the main() function (just before the return statement):
here: "(right ...", "your main()"
0.7: "(just ..." , "the main()"
"... before the return statement)"
"(Visual Studio users, make sure these appear after ..."
Redundancy in 0.7?
In 0.7 there is word "lines" after "these":
"First, add or ensure the following lines are near the top of your program (Visual Studio users, make sure these lines appear after #include "pch.h" or #include "stdafx.h", if those exist):"
"Try running in Start Without Debugging (ctrl-F5) mode."
As you requested: no capitalization in "ctrl".
<a href="">Stack Overflow</a>
Can you please change to https (it redirects to it anyway)?
Is it possible to play a sound and the moment it ends do something?
I have a button, and when you press it, it plays a sound and says hello.
@if (system("canberra-gtk-play -f /usr/share/Sound.ogg&"))
    cout << "Couldn't play /usr/share/Sound.ogg. Check file existence.\n";
cout << "Hello!" << endl;@
But the problem is that the moment you press the button it plays the sound and says hello before the sound ends... So how is it possible to say hello the moment the sound ends?
I don't see any signal in the "docs": , so I suspect that you have to track it yourself. You can set a timer connected to a function which checks if isFinished() == true (on Windows it always returns true :( ), and if so, do anything you like.
How to check if isfinished == true? Can u be more precisely?
I am using Ubuntu so no problem, i think, with the true or false.
Some not tested pseudo code:
@
In ".h" put QSound *sound;
void "constructor"
{
QTimer *t = new QTimer(this); //create timer
sound("path to sound");
connect(t,SIGNAL(timeout()),this,SLOT(testFinished()));
t->start(1000); //1 second timer
}
void testFinished()
{
if (sound->isFinished) {
cout << "Finished" << endl;
return;
}
}
@
Errors in your code:
‘((ArbylaPlayer*)this)->ArbylaPlayer::sound’ cannot be used as a function
‘((ArbylaPlayer*)this)->ArbylaPlayer::sound->QSound::isFinished’ to ‘bool’
"Here": is the QSound class reference but the
@QSound.play("mysounds/bells.wav");@
and the
@QSound bells("mysounds/bells.wav");
bells.play();@
doesn't work. I think it is playing in the background but there is no sound. I have included QSound in both the .h and .cpp files, but still no sound.
As i said, it was a pseudo not tested code. It was only to gave you an idea. :)
Please post your code.
Note: Use the "docs": provided by Qt.
Here is my code in cpp file
@
#include "arbylaplayer.h"
#include "ui_arbylaplayer.h"
#include "QSound"
ArbylaPlayer::ArbylaPlayer(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::ArbylaPlayer)
{
ui->setupUi(this);
QSound::play("/usr/share/Sound.ogg");
}
ArbylaPlayer::~ArbylaPlayer()
{
delete ui;
}@
No errors but u do not hear any sound.. Any ideas?
from the docs:
"X11
The Network Audio System is used if available, otherwise all operations work silently. NAS supports WAVE and AU files."
Yes and what should i do for that? At the class reference about Qt for Embedded Linux it says: A built-in mixing sound server is used, accessing /dev/dsp directly. Only the WAVE format is supported.
I have a .wav file but still no sound.. :/
I don't think that the sound really plays.. It must be something else.. I putted
@if (!QSound::isAvailable()) {
cout << "No sound" << endl;
}@
and the outpout is "No sound"... So something is missing.. I installed from Synaptic: nas-local server, nas-bin and libaudio-dev but i think somehow i must reconfigure Qt with NAS sound support but i do not know how to do this.. Any ideas? | https://forum.qt.io/topic/5355/is-it-possible-to-play-a-sound-and-the-moment-it-ends-do-something/ | CC-MAIN-2019-22 | refinedweb | 497 | 78.14 |
Cross-build for QNX ARM smashes stack when using FunPtr wrappers
I have built an unregistered LLVM cross-compiler for arm-unknown-nto-qnx8.0.0eabi, which I finally got to build using the attached patch. Simple programs no longer crash like they do in registered ARM cross-compilers (as reported on mailing list at and other places), however the following code does crash:
{-# LANGUAGE ForeignFunctionInterface #-} module Main (main) where import Foreign.Ptr foreign import ccall "wrapper" wrap_refresh :: ( IO ()) -> IO (FunPtr ( IO ())) main :: IO () main = do wrap_refresh (return ()) return ()
It seems, from experiments, that any code using the "wrapper" imports causes this error:
$ ./Main *** stack smashing detected ***: Main terminated Abort (core dumped) | https://gitlab.haskell.org/ghc/ghc/issues/7621 | CC-MAIN-2019-39 | refinedweb | 112 | 50.57 |
Full Stack Web Development Internship Program
- 3k Enrolled Learners
- Weekend/Weekday
- Live Class
Searching Algorithms are very important as they help search data in patterns which is otherwise very difficult. In this article we will take take a look at Binary Search in C with practical implementation. Following Pointers will be covered in this article:
Let us get started with article on Binary Search in C,
A Binary Search is a sorting algorithm, that is used to search an element in a sorted array. A binary search technique works only on a sorted array, so an array must be sorted to apply binary search on the array. It is a searching technique that is better then the liner search technique as the number of iterations decreases in the binary search.
The logic behind the binary search is that there is a key. This key holds the value to be searched. The highest and the lowest value are added and divided by 2. Highest and lowest and the first and last element in the array. The mid value is then compared with the key. If mid is equal to the key, then we get the output directly. Else if the key is greater then mid then the mid+1 becomes the lowest value and the process is repeated on the shortened array. Else if the key value is less then mid, mid-1 becomes the highest value and the process is repeated on the shortened array. If it is not found anywhere, an error message is displayed.
Let us move further with this Binary Search In C article and see an example,
Let’s look at the code:
#include <stdio.h> int main() { int i, low, high, mid, n, key, array[100]; printf("Enter number of elementsn"); scanf("%d",&n); printf("Enter %d integersn", n); for(i = 0; i < n; i++) scanf("%d",&array[i]); printf("Enter value to findn"); scanf("%d", &key); low = 0; high = n - 1; mid = (low+high)/2; while (low <= high) { if(array[mid] < key) low = mid + 1; else if (array[mid] == key) { printf("%d found at location %d.n", key, mid+1); break; } else high = mid - 1; mid = (low + high)/2; } if(low > high) printf("Not found! %d isn't present in the list.n", key); return 0; }
Output:
If the key is present:
If Key is not present:
In the program above, We declare i, low, high, mid, n, key, array[100].
We first, take in the number of elements the user array needs and store it in n. Next, we take the elements from the user. A for loop is used for this process. Then, we take the number to be searched from the array and store it in the key.
Next, we assign 0 to the low variable which is the first index of an array and n-1 to the high element, which is the last element in the array. We then calculate the mid value. mid = (low+high)/2 to get the middle index of the array.
There is a while loop which checks if low is less then high to make sure that the array still has elements in it. If low is greater then, high then the array is empty. Inside the while loop, we check whether the element at the mid is less than the key value(array[mid] < key). If yes, then we assign low the value of mid +1 because the key value is greater then mid and is more towards the higher side. If this is false, then we check if mid is equal to key. If yes, we print and break out of the loop. If these conditions don’t match then we assign high the value of mid-1, which means that the key is smaller than mid.
The last part checks if low is greater then high, which means there are no more elements left in the array.
Remember, this algorithm won’t work if the array is not sorted.
Let us move to the final bit of this Binary Search In C article
Binary Search Using Recursive Function:
Code:
#include <stdio.h> int binaryScr(int a[], int low, int high, int m) { if (high >= low) { int mid = low + (high - low) / 2; if (a[mid] == m) return mid; if(a[mid] > m) return binaryScr(a, low, mid - 1, m); return binaryScr(a, mid + 1, high, m); } return -1; } int main(void) { int a[] = { 12, 13, 21, 36, 40 }; int i,m; for(i=0;i<5;i++) { printf(" %d",a[i]); } printf(" n"); int n = sizeof(a) / sizeof(a[0]); printf("Enter the number to be searchedn"); scanf("%d", &m); int result = binaryScr(a, 0, n - 1, m); (result == -1) ? printf("The element is not present in array") printf("The element is present at index %d", result); return 0; }
OUTPUT:
In the above program, we use a function BinaryScr to search the element in the array. The function is recursively called to search the element.
We accept the input from the user and store it in m. We have declared and initialized the array. We send the array, the lower index, higher index and the number to be searched to the BinaryScr function and assign it to result. In the function, it performs binary search recursively. If not found, then it returns -1. In the main function, the ternary operator is used. If result=-1 then not found else found is displayed.
With this we come to the end of this blog on ‘Binary Search blog and we will get back to you. | https://www.edureka.co/blog/binary-search-in-c/ | CC-MAIN-2022-21 | refinedweb | 931 | 78.08 |
For a deeper look into our Eikon Data API, look into:
Overview | Quickstart | Documentation | Downloads | Tutorials | Articles
Hi there, is there a way to retrieve contract names of a futures product available on a date in the past using Eikon Python API? For example, crude oil on March 1, 2002 there were CLH02,CLJ02,CLK02,....CLZ04... etc on exchanges. Would the API get these names? Need to get these names first and then to check historical info like close prices, first notice date of these contracts, thanks.
With Eikon Data API only, for a date in the past, I don't believe you will be able to have the complete solution.
For chain expansion in history, history product, such as Refinitiv Tick History, would work. We have several materials available, for instance, see article How to expand Chain RIC using the Tick History REST API in Python, but the approach described requires Tick History product.
Otherwise, you can expand today and request at a point in time, which will not result in the complete retrieval at point-in time as some constituents would be expected to have changed overtime.
Once you have the chain constituents in time, you can retrieve any information for the list of constituents in that point of time using Eikon Data API. For example:
ek.get_data(df["Instrument"].tolist(), ['TR.CLOSEPRICE','TR.CLOSEPRICE.date'], {'SDate': '20211001','EDate':'20211101'} )
To see what fields are available, Data Item Browser tool (DIB) can be used, see article How to discover available fields for Data Grid service on JET(App Studio HTML5 SDK) API using Eikon Desktop for more info.
I hope that this information helps
I think the best pure Eikon Data API approach is described in this previous discussion thread:
1. Expand the CRUDE OIL RIC chain as it is now
2. Obtain the historical information on the dates that you require based on the instruments expanded in step 1.
With history product, for example Refinitiv Tick History, you can use Historical Chain Resolution request to expand that chain in the exact point in time, see this previous discussion thread for a discussion of this approach, and you can request close prices and other fields based on the expanded RIC list.
Hope this information helps
Hi Zoya, I still don't have a clue after reading the thread you provided. I don't need to know 'joiners' and 'leavers' or other information.
I want to know what futures contracts are available for trade on exchange given a date, like today or a date in history. Is there a way using a root RIC + a date to fetch these contract names?
Hi, My problem is not solved, thanks for asking @raksina.samasiri.
I want to know what futures contracts are available for trade on exchange given a date, like today or a date in history. Is there a way using a root RIC + a date to fetch these contract names? Thanks.
Sorry your problem is not solved and let me provide some additional information.
For today, I would suggest using Refiniv Data library, please try the following code:
import refinitiv.data as rd from refinitiv.data.content import historical_pricing from refinitiv.data.content import pricing rd.open_session() cruds = pricing.chain.Definition(name="0#CL:").get_stream() cruds.open(with_updates=False) cruds.close() print(cruds.constituents)
Resulting in constituents:
['CLK2', 'CLM2', 'CLN2', 'CLQ2', 'CLU2', 'CLV2', 'CLX2', 'CLZ2', 'CLF3', 'CLG3', 'CLH3', 'CLJ3', 'CLK3', 'CLM3', 'CLN3', 'CLQ3', 'CLU3', 'CLV3', 'CLX3', 'CLZ3', 'CLF24', 'CLG24', 'CLH24', 'CLJ24', 'CLK24', 'CLM24', 'CLN24', 'CLQ24', 'CLU24', 'CLV24', 'CLX24', 'CLZ24', 'CLF25', 'CLG25', 'CLH25', 'CLJ25', 'CLK25', 'CLM25', 'CLN25', 'CLQ25', 'CLU25', 'CLV25', 'CLX25', 'CLZ25', 'CLF26', 'CLG26', 'CLH26', 'CLJ26', 'CLK26', 'CLM26', 'CLN26', 'CLQ26', 'CLU26', 'CLV26', 'CLX26', 'CLZ26', 'CLF27', 'CLG27', 'CLH27', 'CLJ27', 'CLK27'...
That we can request the info that you require, for example:
non_streaming = rd.content.pricing.Definition( cruds.constituents, fields=['TRDPRC_1','ASK','BID'] ).get_stream( non_streaming.open(with_updates=False) non_streaming.get_snapshot()
resulting in:
InstrumentBIDTRDPRC_1ASK 0CLK299.4599.4799.47 1CLM298.5198.5198.54 2CLN297.5897.5297.61 3CLQ296.5496.5896.5 ... | https://community.developers.refinitiv.com/questions/91813/retrieve-futures-contract-names-at-a-point-in-time.html | CC-MAIN-2022-27 | refinedweb | 667 | 60.65 |
Xtreme Visual Basic Talk
>
Visual Basic .NET (2002/2003/2005/2008, including Express editions)
>
.NET General
> Opening an HTML file from Visual basic
PDA
Opening an HTML file from Visual basic
Ken_Stenger
12-06-2004, 12:29 PM
Can anyone tell me how to open an HTML file from visual basic.net? I do not want to edit it or anything, I literally just want to open the html file named Main (Main.html) when my "Users Guide" button is clicked. Any suggestions will be much appreciated. Thank you
Ken
excaliber
12-06-2004, 01:51.
Ken_Stenger
12-06-2004, 02:43.
I want to view the file as a webpage. I literally just want to click a button and have Internet Explorer open with the html file as a webpage
excaliber
12-06-2004, 04:14 PM
Next question then. Do you want to open IE itself, or open IE within your program?
If just IE by itself, use System.Diagnostics.Process namespace.
Otherwise use the IE webbrowser.
Ken_Stenger
12-06-2004, 07:30 PM
First off - I would like to open the website separate from my program, I basically want to open the file with a new instance of IE completely separate from my program.
by using this forum I have obtained some code that works
System.Diagnostics.Process.Start("Iexplore", "")
where the "" is my personal website.
but the problem is, I would really like to link the file locally. I tried the following code:
System.Diagnostics.Process.Start("Iexplore", "Main.html")
and put Main.html in my bin file, but this gave me an error because when IE opened it opened with "" which caused a problem. Any Suggestions?
Ken
gravity7
12-06-2004, 07:35 PM
Try using a file path reference::\myprogram\myhtmlfile.html as the URL and not the
EZ Archive Ads Plugin for
vBulletin
Computer Help Forum | http://www.xtremevbtalk.com/archive/index.php/t-200964.html | CC-MAIN-2014-49 | refinedweb | 313 | 67.25 |
Custom objc popover presentation code, why does it crash?
I've run out of ideas as to why this code crashes on the second display of the popover. I was trying to write custom code for displaying a popover, because I want to control more details of what happens when it is shown/dismissed, but I can't get the actual popover view display portion to be stable...this code crashes pythonista on me every time, almost always on the second attempt to show the popover:
import objc_util import ui def make_cgrect(x,y,w,h): import objc_util return objc_util.CGRect(objc_util.CGPoint(x,y),objc_util.CGSize(w,h)) def getUIViewController(view): import objc_util UIViewController = objc_util.ObjCClass('UIViewController') UIView = objc_util.ObjCClass('UIView') if isinstance(view,ui.View): viewobj = view.objc_instance elif isinstance(view,objc_util.ObjCInstance) and \ view.isKindOfClass_(UIView): viewobj = view elif isinstance(view,objc_util.ObjCInstance) and \ view.isKindOfClass_(UIViewController): viewobj = view viewResponder = viewobj.nextResponder() try: while not viewResponder.isKindOfClass_(UIViewController): viewResponder = viewResponder.nextResponder() except AttributeError: return None return viewResponder def showPopup(sender, popup): import objc_util parentvc = getUIViewController(sender) UIViewController = objc_util.ObjCClass("UIViewController") vc = UIViewController.new() vc.view = popup.objc_instance vc.modalPresentationStyle = 7 # this is the popover style value vc.preferredContentSize = objc_util.CGSize(popup.width, popup.height) popovervc = vc.popoverPresentationController() popovervc.sourceView = sender.objc_instance popovervc.sourceRect = make_cgrect(0,0,sender.width,sender.height) parentvc.presentViewController_animated_completion_(vc,True,None) #### create main view v = ui.View() v.frame = (0,0,400,400) #### create view to show as a popover v2 = ui.View() v2.frame = (0,0,200,50) v2.background_color = (1.0,0.0,0.0,1.0) #### put in a button to trigger the popover b = ui.Button() b.title = "Show Popup" b.frame = (0,0,100,30) b.border_width = 2 b.corner_radius = 3 b.center = (v.width*0.5,v.height*0.5) b.action = lambda s: showPopup(s,v2) #### present the main view v.add_subview(b) v.present(style="sheet")
Everything looks legit to me, and it does work the first time around, though apparently puts things in an unstable state. I must be doing something bad with a reference or something...but I've tried a bunch of different ways of storing things and it always crashes. Any more advanced objc_util gurus want to take a look?
Are you just getting segfault, or is there an objc exception that shows in the exception handler?
When you say attempt to show another popover, do you mean when you run the script again? Or when you dismiss the popover and show another?
For the latter, try sprinkling retain_global everywhere you can. Basically on any objc objects you create.
It's usually a segfault. Sometimes, and you're right...this is probably key, the objective-c exception is: "UIViewControllerHierarchyInconsistency" but I can't figure out where that inconsistency is coming from.
This is, as I said, attempting to present the popover a second time: show popover, dismiss it, show it again, crash. Between script runs, things reset and it will show the first time again without crashing.
It was my understanding that, once the popover view is dismissed, it is removed from the view hierarchy. If it is talking about the "v2" popover content view, even if I explicitly destroy and remake that view, it still crashes. Maybe this is two separate issues? I'll see if I can trace that inconsistency.
maybe some on_main_thread? technicallyi think apple says anything that screws around with view/vc heirarchy should be on the main thread.
another thought, does this happen only when showing the same view as a popover? what if you show popoverview1, then popover a different popoverview2?
Good point...I'll try throwing it on_main_thread explicitly. I'm pretty sure it doesn't crash if I show a different popover each time. It's the attempt to show the same popover more than once which causes it.
ok, seems the problem line is
vc.view = popup.objc_instance
the corresponding exception complains about the suiview being associated with another vc, and suggest removing that association.
so, this seems to work, before setting vc.view:
popup_vc=getUIViewController(popup) if popup_vc: popup_vc.view=None
I'll be honest, I must've gotten lost somewhere in my tests, because...that works!
...but where this code was actually being used, the popup view is created anew each time the popover is shown, so the view used in
vc.view = popupwas being given an unowned and un-presented view...as far as I know. Oh well...I'll see if there was some other crash-causing issue, since this now works reliably.
As usual, @JonB, thank you! Fresh eyes are always appreciated.
There. | https://forum.omz-software.com/topic/5471/custom-objc-popover-presentation-code-why-does-it-crash | CC-MAIN-2021-49 | refinedweb | 776 | 52.87 |
Label python data points on plot
I searched for ages (hours which is like ages) to find the answer to a really annoying (seemingly basic) problem, and because I cant find a question that quite fits the answer I am posting a question and answering it in the hope that it will save someone else the huge amount of time I just spent on my noobie plotting skills.
If you want to label your plot points using python matplotlib
from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = anyarray B = anyotherarray plt.plot(A,B) for i,j in zip(A,B): ax.annotate('%s)' %j, xy=(i,j), xytext=(30,0), textcoords='offset points') ax.annotate('(%s,' %i, xy=(i,j)) plt.grid() plt.show()
I know that xytext=(30,0) goes along with the textcoords, you use those 30,0 values to position the data label point, so its on the 0 y axis and 30 over on the x axis on its own little area.
You need both the lines plotting i and j otherwise you only plot x or y data label.
You get something like this out (note the labels only):
Its not ideal, there is still some overlap - but its better than nothing which is what I had..
How about print
(x, y) at once.
from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = -0.75, -0.25, 0, 0.25, 0.5, 0.75, 1.0 B = 0.73, 0.97, 1.0, 0.97, 0.88, 0.73, 0.54 plt.plot(A,B) for xy in zip(A, B): # <-- ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data') # <-- plt.grid() plt.show()
★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations:
From: stackoverflow.com/q/22272081 | https://python-decompiler.com/article/2014-03/label-python-data-points-on-plot | CC-MAIN-2019-26 | refinedweb | 310 | 74.69 |
I am working on a project using command line BLAT right now. I need to be able to take the output of the BLAT run, in any of the supported formats, and convert into a format that can be re-entered into a BLAT run. Eventually, my goal is to be able to iterate my BLAT runs. For reference BLAT can output psl, pslx, maf, sim4, axt, blast- tab, and blast-text format but takes as input only fasta, nib, and 2bit. I found a Biopython module called BlatIO (BlatIO on github.com) that supports parsing for .psl or .pslx files and attempted to parse this .psl output into a fasta format using my own code:
import sys sys.path.insert(1, 'C:\\Python27\Lib\site-packages\Bio\BlatIO.py') from Bio.AlignIO import BlatIO from Bio import SearchIO from Bio.SearchIO._model import QueryResult, Hit, HSP, HSPFragment alignments = SearchIO.parse(input_file, 'blat-psl', pslx=True) line1= QueryResult.id line2= HSPFragment.query print ('>', line1) print (line2)
The output is not an ID and a sequence like I would expect though. Instead I get this:
('>', property object at 0x029BC9F0) property object at 0x029BC3C0
I am open to all suggestions about how to get ANY of the BLAT output formats into ANY of the BLAT input formats....either through fixing the code I have started above or some other method.
THANK YOU!
(PS- I have already done this project in BLAST so please don't tell me to just use BLAST. I know that BLAST has different and in some ways better output formatting options, but I really need to use BLAT not BLAST. PPS - I am aware of tools like those as usaglaxay.com that convert files however I really need a code or package to do this, preferably in Python or Perl, and not a web browser tool!)
Your best friends for sorting out things like this are:
in your case when you print the object it gives you the string representation of that object, which is not all that helpful (ok it is atrocious)
I don't know biopython but think it has to do with that line1 is sort of a 'class reference' not a result object instance, it seems intuitive that you need to loop over all
alignments; at least in bioperl you need to do this. And then extract data via accessor methods. So it doesn't look like your program could work at all (note I know nothing about python,so maybe there is some kind of weird magic).
I'd look for a class that writes fasta files (smth like
SeqIO(
Bio::SeqIOin bioperl)), pass it the alignment object and see what happens.
Btw, if I see correctly
BlatIOinherits from
SearchIOand the object returned by
SearchIO.parseshould have the same interface as any object returned by
SearchIO.parse, so you just have to look for example code for class/interface
SearchIOand it should work. That, given the factory pattern of SearchIO.parse works as I assume. | https://www.biostars.org/p/92066/ | CC-MAIN-2018-47 | refinedweb | 502 | 72.05 |
Rosegarden house style is as follows, in approximately descending order of importance:
Class names are UpperCamelCase, method names are lowerCamelCase, non-public member variables are m_prefixedCamelCase, slot names start with “slot”.
Indentation is by four spaces (0x20) at a time. There should be no tab characters (0x09) anywhere in Rosegarden source code. The indentation should look the same regardless of whether you read it in an IDE, in a terminal window with “cat” or “vi”, in Emacs, or quoted in an email. It must not depend on having the right settings for tab-to-space conversion in your IDE when you read it. (Emacs and vim users will note that we already start every source file with a meta-comment that sets up the right indentation mode in these editors.)
NOTE: As of revision 13,509, Rosegarden has 8,691 tab characters in 287 files. Rosegarden 10.02 had 9,598 tab characters in 262 files. The good news is fewer tabs overall, but the bad news is 25 new files have tabs that never had tabs before. New developers are ignoring this rule, apparently.
No extra indentation inside namespaces; set off inner code by two carriage returns on either side of namespace brackets
namespace Rosegarden { // 1 // 2 class SomeUsefulClass { //code }; // 1 // 2 }
Braces are what you might call Java-style, which means:
int SomeClass::someMethod(int f) { int result = 0; if (f >= 0) { for (int i = 0; i < f; ++i) { result += i; } } else { result = -1; } return result; }
Single statement blocks are preferably either “inline” or bracketed. Like this:
if (something) somethingElse(); else someOtherThing(); if (something) { somethingElse(); } else { someOtherThing(); }
but not:
if (something) somethingElse();
Whitespace is much as in the above examples (outside but not inside
(), after but not before
;, etc.):
connect(detailsButton, SIGNAL(clicked(bool)), this, SLOT(slotDetails()));
but not:
connect( detailsButton, SIGNAL( clicked(bool) ), this, SLOT( slotDetails() ) );
and please avoid:
if( something) if(something)
No whitespace between
if (and other C++ keywords) and the
( makes Michael's eyes hurt.
Pointers are
MyObject *ptr; not
MyObject* ptr;
If you have more arguments than will fit on a reasonable length line (80 characters is a good figure, but this is not a hard rule), align the extra arguments below and just after the opening ( rather than at a new level of indentation:
connect(m_pluginList, SIGNAL(activated(int)), this, SLOT(slotPluginSelected(int)));
but not:
connect(m_pluginList, SIGNAL(activated(int)), this, SLOT(slotPluginSelected(int)));
If your
( is so far to the right that following the above rule isn't practical, then indent the remainder of the
() block by 8 spaces.
CommandHistory::getInstance()->addCommand(new SegmentSyncCommand( comp.getSegments(), selectedTrack, dialog.getTranspose(), dialog.getLowRange(), dialog.getHighRange(), clefIndexToClef(dialog.getClef())));
Use doxygen-style comments for class and function documentation
/** ... */
or
///
Use comment codes in your comments when appropriate:
Note: //!!! causes problems for doxygen. We might want to revise the above. Adding a space avoids problems: // !!!
If in doubt, please err on the side of putting in too many comments. We have contributors of all ability levels working here, and what may seem obvious to you might be complete gibberish to someone else. Comments give everyone a better chance to be useful, and they are always welcome, while nobody will think you are a super code warrior or code gazelle for committing a thousand lines that have only three choice comments
When commenting out a large block of text, it is preferable to use C++ style
//
/* */
//&&& This code was deleted temporarily // for (int m = 0; m <= n; ++m) { // int x = s.x() + (int)((m * ((double)m * ax + bx)) / n); // int y = s.y() + (int)((m * ((double)m * ay + by)) / n); // }
but not:
//&&& This code was deleted temporarily /* for (int m = 0; m <= n; ++m) { int x = s.x() + (int)((m * ((double)m * ax + bx)) / n); int y = s.y() + (int)((m * ((double)m * ay + by)) / n); } */
(The reason C++-style comments are preferred is not arbitrary. C++-style comments are much more obvious when viewing diffs on the rosegarden-bugs list, because they force the entire block of text to be displayed, instead of only the starting and ending lines.)
Includes should try to follow the following pattern (from local to global):
#include "MyHeader.h" #include "MyCousinClassHeader.h" #include "MyOtherCousin.h" #include "RosegardenHeader.h" #include "base/SomeOtherHeader.h" #include <QSomeClass> #include <QSomeOtherClass> #include <some_stl_class> #include <some_other_stl_class>
Switch statements are all over the place, and we need to pick one style and use it, so we'll call this a good switch statement:
switch (hfix) { case NoteStyle::Normal: case NoteStyle::Reversed: if (params.m_stemGoesUp ^ (hfix == NoteStyle::Reversed)) { s0.setX(m_noteBodyWidth - stemThickness); } else { s0.setX(0); } break; case NoteStyle::Central: if (params.m_stemGoesUp ^ (hfix == NoteStyle::Reversed)) { s0.setX(m_noteBodyWidth / 2 + 1); } else { s0.setX(m_noteBodyWidth / 2); } break; }
and this a bad one (note particularly the
switch( here, grrrrr!):
switch(layoutMode) { case 0 : findAction("linear_mode")->setChecked(true); findAction("continuous_page_mode")->setChecked(false); findAction("multi_page_mode")->setChecked(false); slotLinearMode(); break; case 1 : findAction("linear_mode")->setChecked(false); findAction("continuous_page_mode")->setChecked(true); findAction("multi_page_mode")->setChecked(false); slotContinuousPageMode(); break; case 2 : findAction("linear_mode")->setChecked(false); findAction("continuous_page_mode")->setChecked(false); findAction("multi_page_mode")->setChecked(true); slotMultiPageMode(); break; }
You should feel encouraged to use variables to make the code easier to understand without a look at the header or the API docs. Good:
int spacing = 3; bool useHoops = false; doSomething(spacing, useHoops);
avoid:
doSomething(3, false);
Note that you should not feel like you have to take this to ridiculous extremes. Just bear it in mind, especially when it's something obscure enough you've just had to go look at the header or the API yourself to figure out what some static parameter was for.
Sometimes variables are simple and disposable, and names like i or n are appropriate, but unless your variable is something extremely simple and obvious, like an iterator, then it is preferable to use more verbose variable names in order to give the people who follow you a year later some shred of a hint what you were thinking when you wrote the code. This particularly egregious example does some rather complicated transformations that would have been crystal clear to the author at the time, but coming at it as a stranger, it's really just short of impossible to try to work out what the logic is supposed to do here. mi and ph and i oh my. It's just spectacularly impossible to maintain, so avoid doing this:
int i; int mi = -2; int md = getLineSpacing() * 2; int testi = -2; int testMd = 1000; for (i = -1; i <= 1; ++i) { int d = y - getSceneYForHeight(ph + i, x, y); if (d < 0) { d = -d; } if (d < md) { md = d; mi = i; } if (d < testMd) { testMd = d; testi = i; } }
Use Q_ASSERT_X() from <QtGlobal> for assertions.
Q_ASSERT_X(id < m_refreshStatuses.size(), "RefreshStatusArray::getRefreshStatus()", // where "ID out of bounds"); // what
Make sure errors are properly handled in a non-debug build. Do not depend on assertions.
Q_ASSERT_X(p, "foo()", "null pointer"); if (!p) { return; }
Menu items use title case. We don't capitalize pronouns except pronouns “standing alone” (exhaustive pronouns). For instance, “Move to Staff Above”.
We use trailing dots “…” to indicate that a menu item leads to a dialog.
For example:
<Action name="general_move_events_up_staff" text="&Move to Staff Above..." /> | https://www.rosegardenmusic.com/wiki/_export/xhtml/dev:coding_style | CC-MAIN-2018-22 | refinedweb | 1,203 | 51.78 |
May 04 2020 07:48 AM - edited May 04 2020 07:57 AM
We have been using Sentinel in conjunction with Azure Log Analytics for quite some time to ingest selected security logs (AD, DNS, Windows Security, etc.) from VM agents in our server environment. Last week we upgraded the workspace to enable the newly released "Azure Monitor for VMs" and also installed the Service Map agents on our VMs. This resulted in a huge increase in data ingested into Sentinel (5x, see attached image) due to VM performance metrics and network traffic logs from the Service Map now being ingested into Sentinel, which I have no interest in having there.
How can I choose/filter which logs from a Log Analytics Workspace are forwarded to Azure Sentinel?
May 04 2020 10:07 AM - edited May 04 2020 10:10 AM
@Magnus Tjerneld You cannot select which information stored in a Log Analytics workspace is available to Azure Sentinel. It is an all-or-nothing proposition.
You can control what data gets ingested from the VMs by clicking on the "Windows, Linux, and other sources" link on the Overview page and from there go to the "Data" section. This will show you what logs you are ingesting as well as severity for each log. You can go through the list and see if you can trim down some of the information being imported.
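If it helps, a quick way to see which tables are driving the ingestion (and therefore the Sentinel bill) is the standard Usage query in Log Analytics. This is just a sketch against the default workspace schema; Quantity is reported in MB, and you can adjust the time window to taste:

```kusto
// Billable ingestion per table over the last 7 days (Quantity is in MB)
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| order by IngestedMB desc
```

Running that before and after trimming the data sources should show whether the changes actually reduced the volume.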
May 04 2020 11:19 AM
May 04 2020 11:19 AM
Ok; that's too bad. I dug a little deeper into the increased log volumes and realized that the bulk of the records/sources are not visible under Data Connections.
Previously; before we upgraded the Workspace to onboard "Azure Monitor for VM:s" I was manually pulling a number of selected perfmon counters (CPU, RAM and so on). I did this at a 5 min interval to give me an overview without causing too much data in OMS. But after enabling Azure Monitor for VM:s, a huge number of new counters has been enabled and the resolution is much higher. Over the last 24h I've recieved ~300k datapoints for the 4 VMs I'm monitoring:
None of these namespaces are visible under Data Sources in Log Analytics.
It would be great to be able to lower the resolution/interval, but I can't seem to find anywhere to limit the interval on these logs?
I realize that this is now more of an "Azure Monitor for VM:s" question than a Sentinel question.
May 04 2020 01:25 PM
May 04 2020 01:25 PM
@Magnus Tjerneld Did you check under the "Windows Performance Counters" in the "Data" section?
May 04 2020 11:35 PM
May 04 2020 11:35 PM
@Gary Bushey Yes; only the manual counters that I had set up before I enabled "Azure monitor for VMs" are visible there. AMFVM seems to set up it's own data collection that you don't seem to be able to edit.
May 05 2020 12:55 AM
May 05 2020 12:55 AM
Please see the GA release info
May 05 2020 01:22 AM - edited May 05 2020 01:22 AM
May 05 2020 01:22 AM - edited May 05 2020 01:22 AM
Thanks @CliveWatson. I read this before and have now read it again; and I realize that I can delete my old perfmon counters. However, I do not find any information regarding:
- Can I limit the "resolution" of data performance data sent to Log Analytics after upgrading to Azure Monitor for VMs? In the old solution, I could set intervals in seconds.
- Can I choose not to collect data for a specific namespace? For us, Disk-metrics make up 90% of logs ingested and causes a lot of extra costs in Sentinel. If possible, I'd opt out.
And my wish would be to be able to exclude all performance counters from ingestion into Sentinel. This only results in added cost and no added value.
Sep 24 2020 09:16 AM
Sep 24 2020 09:16 AM
@Magnus Tjerneld Were you able to find a solution to this? Filtering out which data that is ingested by sentinel?
Sep 24 2020 01:51 PM
Sep 24 2020 01:51 PM | https://techcommunity.microsoft.com/t5/microsoft-sentinel/limit-what-data-in-log-analytics-to-be-passed-on-to-sentinel/td-p/1357874 | CC-MAIN-2022-33 | refinedweb | 740 | 66.88 |
/* X Communication module for terminals which understand the X protocol.>
, Vsystem_name;
char *x_id_name;
/* Initial values of argv and argc. */
extern char **initial_argv;
extern int initial_argc;
/*;
/*;
/* Reusable Graphics Context for drawing a cursor in a non-default face. */
static GC scratch_cursor_gc;
/* Mouse movement..
The silly O'Reilly & Associates Nutshell guides barely document
pointer motion hints at all (I think you have to infer how they
work from an example), and the description of XQueryPointer doesn't
mention that calling it causes you to get another motion hint from
the server, which is very important. */
/*;
/*_row, mouse_face_beg_col;
static int mouse_face_end_row, mouse_face_end_col;
static int mouse_face_past. X and Y can be negative or out of range for the frame. */ Lisp_Object;
static int x_noop_count;
/*_y);
mouse_face_deferred_gc = 0;
}
}
/*Queue (););
FONT_TYPE *font = FACE_FONT (face);
GC gc = FACE_GC (face);
/* HL = 3 means use a mouse face previously chosen. */
if (hl == 3)
cf =) (xgcv.foreground == xgcv.background)
xgcv.foreground = f->display (scratch_cursor_gc)
XChangeGC (x_current_display, scratch_cursor_gc, mask, &xgcv);
else
scratch_cursor_gc =
XCreateGC (x_current_display, window, mask, &xgcv);
gc = scratch_cursor_gc;
#if 0
/* If this code is restored, it must also reset to the default stipple
if necessary. */
if (face->stipple && face->stipple != FACE_DEFAULT)
XSetStipple (x_current_display, gc, face->stipple);
}
}, window, left,
top + FONT_HEIGHT (font),
FONT_WIDTH (font) * len,
/* This is how many pixels of height
we have to clear. */
f->display (x_current_display,;
FONT_TYPE *font;
{
register int len;
Window window = FRAME_X_WINDOW (f);
GC drawing_gc = (hl == 2 ? f->display.x->cursor_gc
: (hl ? f->display.x->reverse_gc
: f->display.x->normal_gc));
if (sizeof (GLYPH) == sizeof (XChar2b))
XDrawImageString16 (x_current_display, window, drawing_gc,
left, top + FONT_BASE (font), (XChar2b *) gp, n);
else if (sizeof (GLYPH) == sizeof (unsigned char))
XDrawImageString (x_current_display,. */
XTclear_end_of_line (first_unused)
register int first_unused;
{
struct frame *f = updating_frame;, FRAME_X_WINDOW (f),
CHAR_TO_PIXEL_COL (f, x - 1),
CHAR_TO_PIXEL_ROW (f, y),
FONT_WIDTH (f->display.x->font),
f->display;
XClearArea (x_current_display, FRAME_X_WINDOW (f),
CHAR_TO_PIXEL_COL (f, x),
CHAR_TO_PIXEL_ROW (f, y),
FONT_WIDTH (f->display.x->font),
f->display.x->line_height, False););
#endif /* 0 */
/* Invert the middle quarter of the frame for .15 sec. */
/* We use the select system call to do the waiting, so we have to make sure
it's available. If it isn't, we just won't do visual bells. */
#if defined (HAVE_TIMEVAL) && defined (HAVE_SELECT)
/* Subtract the `struct timeval' values X and Y,
storing the result in RESULT.
Return 1 if the difference is negative, otherwise 0. */
static int
timeval_subtract (result, x, y)
struct timeval *result, x, y;
{
/* Perform the carry for the later subtraction by updating y.
This is safer because on some systems
the tv_sec member is unsigned. */ is certainly positive. */
result->tv_sec = x.tv_sec - y.tv_sec;
result->tv_usec = x.tv_usec - y.tv_usec;
/* Return indication of whether the result should be considered negative. */
return x.tv_sec < y.tv_sec;
}
XTflash (f)
struct frame *f;
BLOCK_INPUT;
{
GC gc;
/* Create a GC that will use the GXxor function to flip foreground pixels
into background pixels. */
{
XGCValues values;
values.function = GXxor;
values.foreground = (f->display.x->foreground_pixel
^ f->display.x->background_pixel);
gc = XCreateGC );
{
struct timeval wakeup, now;
EMACS_GET_TIME (wakeup);
/* Compute time to wait until, propagating carry from usecs. */
wakeup.tv_usec += 150000;
wakeup.tv_sec += (wakeup.tv_usec / 1000000);
wakeup.tv_usec %= 1000000;
/* Keep waiting until past the time wakeup. */
while (1)
{
struct timeval timeout;
EMACS_GET_TIME (timeout);
/* In effect, timeout = wakeup - timeout.
Break if result would be negative. */
if (timeval_subtract (&timeout, wakeup, timeout))
break;
/* Try to wait that long--but we might wake up sooner. */
select (0, 0, 0, 0, &timeout);
}
}
XFill ()
{ | https://emba.gnu.org/emacs/emacs/-/blame/03b78673fb71aa01a455624ae2a841c7dd1b8767/src/xterm.c | CC-MAIN-2021-10 | refinedweb | 561 | 57.57 |
Content
All Articles
PHP Admin Basics
PHP Forms
PHP Foundations
ONLamp Subjects
Linux
Apache
MySQL
Perl
PHP
Python
BSD
XML is great, but I've constantly wondered why it's so difficult to parse.
Most languages provide you with three options: SAX, DOM, and XSLT. Each has its
own problems:
<hello>
SimpleXML is a new and unique feature of PHP 5 that solves these problems
by turning an XML document into a data structure you can iterate through like a
collection of arrays and objects. It excels when you're only interested in an
element's attributes and text and you know the document's layout ahead of time.
SimpleXML is easy to use because it handles only the most common XML tasks,
leaving the rest for other extensions..)
Along the way, there's a brief discussion on XML namespaces and XPath, since
they're necessary to process XML documents that expand beyond the basics. In
particular, to handle RSS 1.0, you need to work with these XML
specifications.
To try SimpleXML, you need a copy of PHP 5 Beta 3, as not everything
described here works in earlier versions. SimpleXML also requires
libxml2, an open source XML parsing library that all of PHP 5's
XML extensions now use. SimpleXML support is enabled by default, so it's
automatically installed when you build PHP 5.
libxml2
Like PHP 5, SimpleXML is beta quality. There are still a few bugs, memory
leaks, and unimplemented features, but overall it's coming together nicely.
The first set of examples use the following chunk of RSS, which is stored in
rss-0.91.xml:
rss-0.91.xml
<?xml version="1.0" encoding="utf-8" ?>
<rss version="0.91">
<channel>
<title>PHP: Hypertext Preprocessor</title>
<link></link>
<description>The PHP scripting language web site</description>
</channel>
<item>
<title>PHP 5.0.0 Beta 3 Released</title>
<link></link>
<description>PHP 5.0 Beta 3 has been released. The third beta
of PHP is also scheduled to be the last one (barring unexpected
surprises).</description>
</item>
<item>
<title>PHP Community Site Project Announced</title>
<link></link>
<description>
Members of the PHP community are seeking volunteers to help
develop the first web site that is created both by the community and for
the community.</description>
</item>
</rss>
To begin, create a new SimpleXML object. For XML on disk, use
simplexml_load_file('/path/to/file.xml'). If it's stored in a PHP
variable, use simplexml_load_string($xml). So, to load the RSS,
do:
simplexml_load_file('/path/to/file.xml')
simplexml_load_string($xml)
$s = simplexml_load_file('rss-0.91.xml');
Element text is accessed like object properties:
print $s->channel->title . "\n";
PHP: Hypertext Preprocessor
If there's more than one element in the same level in document, they're
placed inside an array. In this example, there's only one
<channel>, but two <items>s. To access
an <item>, use its location in the array:
<channel>,
<items>
<item>
print $s->item[0]->title . "\n";
PHP 5.0.0 Beta 3 Released
To print all titles, use a foreach loop:
foreach
foreach ($s->item as $item) {
print $item->title . "\n";
}
PHP 5.0.0 Beta 3 Released
PHP Community Site Project Announced
Use array notation to read element attributes:
print $s['version'] . "\n";
0.91
Other XML features, like comments and processing instructions, are
unsupported. You can't (yet) access these entities. However, since most XML
documents don't place vital information in comments or use processing
instructions, this isn't a big drawback.
SimpleXML uses XPath to allow you to gather information from a document.
Find and print all the text inside title elements with:
foreach ($s->xsearch('//title') as $title) {
print "$title\n";
}
PHP: Hypertext Preprocessor
PHP 5.0.0 Beta 3 Released
PHP Community Site Project Announced
The xsearch() method searches a SimpleXML object and returns
an array of matching nodes. Pass your XPath query as the argument. In this
case, //title finds all title elements regardless of location in
the tree. Or, restrict the search to only <title>s inside
of <item>s with //item/title.
xsearch()
//title
<title>
//item/title
If you've used XSLT, you're familiar with XPath. XSLT templates use XPath
expressions to determine when to process a node. For more on XPath, read John
E. Simpson's XPath and XPointer (O'Reilly) or John's XML.com article, Top Ten Tips to
Using XPath and XPointer. Additionally, Chapter 9 of XML
in a Nutshell, by Elliotte Rusty Harold and W. Scott Means (O'Reilly), covers XPath and is available free online.
Related Reading
XPath and XPointer
Locating Content in XML Documents
By John E. Simpson
While these examples are somewhat trivial, XPath is quite useful with
complex documents, as you can create sophisticated queries to return finely
tuned. | http://www.onlamp.com/pub/a/php/2004/01/15/simplexml.html?page=1 | CC-MAIN-2015-40 | refinedweb | 797 | 57.16 |
In Part 1, I will cover what exactly an assembly is, and what an assembly contains.
An assembly is a core part of the runtime. An assembly is the collection of all information required by the runtime to execute your application. This information is referred to as the Metadata. An assembly can be a DLL file or a Portable Executable (PE) file. The Common Language Runtime (CLR) can only execute code in assemblies and the assembly must include a manifest or reference a manifest contained in another file.
An assembly that has been compiled into a portable executable (PE) file has an extension of AppName.exe. A common mistake made by newcomers to the .NET Framework is they think this *.exe file is the same as a standalone executable created by many other languages such as a C++ compiler. There are very big differences between a C++ executable file and a .NET PE file. The PE assembly consists of code in the form of Intermediate Language (IL). This IL code requires the .NET platform to be installed on the host in order to run.
An assembly controls many aspects of an application. The assembly handles versioning, type and class scope, security permissions, as well as other metadata including references to other assemblies and resources. The rules described in an assembly are enforced at runtime.
All assemblies have a manifest. This manifest may be immediately contained within the assembly as in a single file assembly or may be referenced in a separate file as in multi-file assemblies. The manifest contains information about all portions considered as part of the assembly itself. The information in the manifest is known as the Metadata.
The manifest contains information that determines which components are visible beyond the assembly's namespace and also which components are to remain private to the assembly (Scope).
Other optional information can be found in the manifest such as configuration information, information about your company, or copyright.
The manifest contains so much information that usually an assembly can simply just be copied as a whole from one .NET framework to another without having to register with the destination operating system.
This hash contains information such as references to all separate files that are referenced by the assembly, and references to other assemblies that are part of this assembly. The hash is encrypted using public key cryptography. This will be covered more in Part 2 of this article series. The hash is used at runtime to verify that all related files and assemblies have not been modified since the assembly was compiled. This helps prevent unwanted tampering.
When the .NET SDK is installed, the framework creates a folder on the system. This folder is known as the Assembly Cache. The cache contains a Private section as well as a Global section. The global cache is home to assemblies that are shared. I will cover "Shared Assemblies in Part 2. Specific restricted files for an application may reside in the private section. Files that your application downloads at runtime are also stored in the private section of the cache. There are several utilities for working with assemblies and the cache area. These utilities are covered in the next series of articles (Part 2 -- 3).
All assemblies in the global cache must be shared and must contain unique namespaces. The folder names themselves contain part of the assembly's unique identity. The Assembly Cache can be found in your Windows directory in a subfolder called assembly. Check either C:\WINNT\Assembly or C:\Windows\assembly.
View All
View All | https://www.c-sharpcorner.com/article/assemblies-the-ins-and-out-part-i/ | CC-MAIN-2021-43 | refinedweb | 596 | 58.18 |
This document is not current, and is kept only for archival purposes. Please refer to Everything2 Help for all up-do-date help documents.This is a simple primer to noding video games, be they arcade games, console games, handheld games, or computer games. The same basic principles apply to games from Space War to the latest major releases.
While there's no rule dictating that you must use these hints, it's a good checklist for making the perfect (video game) node, as the list below is the basic core information needed to make a writeup useful, as well as place it in a context. A writeup on Super Mario Brothers 3 that goes into detail about the quality of the graphics, for example, would be rather silly if no writeup in the node mentioned that it was an NES title.
When in doubt, add extra information in your writeup. For example, if there's an alternate version of the game you are noding, but you can't find any other information, mention the alternate; someone else will probably node it themselves.
For those who would like ideas for a video game node, Insert Coin's homenode, maintained by the videogames group, has a list of games, people, and subjects direly in need of content rescue. Any game not already in the big video games list would also make a good candidate.
Naming the Node
As always, do a search on all of the possible alternate names for the game, and also check what's already noded via the video games metanode. Try not to duplicate content.
Use the publisher's name for the game, as appears on the packaging and in press releases. The latter are important because often the packaging has advertising slogans. Halo is preferred over Halo: Combat Evolved, for example. (In the case of older arcade games, use the name from the marquee - the label at the top of the cabinet - rather than the name on the title screen.)
Go back and double-check the packaging and the press releases. The most common mistakes caused by this are forgetting "the" at the beginning of a name (The Legend of Zelda), including or excluding spaces (Game Boy, Mega Man), or noding games under nicknames (use The Legend of Zelda: Ocarina of Time, not Zelda 64.)
When it comes to Arabic numerals vs. Roman numerals, again, go with the publisher's usage, unless common usage is pervasively opposed. (Street Fighter 2 is one of these rare exceptions.)
Include a subtitle for the game only if it is needed for identification (Grand Theft Auto: Vice City, for example) or if the game is commonly known by alternately the first and second name (Jedi Knight: Dark Forces 2). However, if the subtitle isn't necessary for identification, exclude it. (Ogre Battle 64 is preferred to Ogre Battle 64: Persons of Lordly Caliber.)
Don't namespace a game if the name is a common word. Id Software's breakout FPS should go in the Doom node, not Doom (game) or somesuch.
Namespace a game with the series name only if common usage includes the series name. This is of particular note with the Legend of Zelda series, which should always be namespaced, or Star Wars games, which should be considered on a case-by-case basis.
If a game has multiple titles in different regions or countries, all of the titles are valid for noding, particularly due to international differences. For example, Resident Evil is known as Biohazard in Japan; both could be noded.
Often, a game won't have an English title. Try to use the name most commonly used in the US; Seiken Densetsu 3 should be used instead of Secret of Mana 2, however Earthbound Zero should be used instead of Mother.
What You Must Include. Really.
Try and make sure you include all of the things in this list, if applicable. All of them are core information for any video game node. If you make a writeup in a video game node that lacks this info, please make sure these are included.
Generally, you only need to worry about covering this basic information if you are writing an all-encompassing writeup for a game. If you are only discussing a limited aspect (modifying an arcade game, for example), there's no need to add information like this. That said, extra information is always good.
Platform - While this may seem to be obvious, this is helpful to completely uninformed readers, as well as a useful note as to what version of the game you are talking about, in the event that there's a version of the game of which you are unaware.
Alternate versions - Mention any alternate versions of the game, especially if they are identical. At the very least, mention other versions in passing, so that other noders can post their own information. Commonly missed are Macintosh/Linux ports, portable versions (particularly sticky can be Game.com or e-Reader remakes), arcade versions (particularly in the case of games that rose to fame on consoles), minigame remakes (as in Animal Crossing), Colecovision/Intellivision versions of of Atari 2600/7800 games.
Release Date - Try and get the exact day. Failing that, the month and year or just the year should be considered a requirement. Noting international release dates is also nice, but not required.
Format and Hardware - Make sure you mention whether the game was released on floppy, CD (including the number of CDs, for console games), cartridge, HuCard, punch card, etc. If the game required a peripheral, like the Sega 32X, you need to mention that.
Arcade games, due to their variety of formats and technologies, are a special case, and this info can be enough for a full writeup on its own. What kind of hardware would someone need to run the game? Did the game come in a dedicated cabinet, conversion kit, or both? Was there a cocktail or cockpit version available, or were they all upright machines? Did the game use a medium res or VGA monitor? Did the game use a proprietary technology in the arcade, like Capcom's CPS system? Did the game use standard controls, or special ones? What were they, and how could you repair or replace them?
Developer, Publisher, and Staff Notables - The publisher and developer are absolute musts. It's also a good idea to include any prominent figures in the video games industry involved in the game's design, and when noting the developer, mentioning any later names of the development team is helpful. (For example, while Konami is credited with developing Contra, that developers would go on to leave Konami and become known as Treasure. Some groups with similar histories are Rareware, DMA Design, and the untitled creators of the Ogre Battle Saga and Final Fantasy Tactics.)
Acquiring the Game - Many games are no longer available from retail. Make mention of rarity and value for collectible or arcade games, emulation, and abandonware availability, all as appropriate.
ESRB Ratings - If the game is new enough to be have been rated by the Entertainment Software Ratings Board (or another authority, like the ELSPA in the UK), then mention the rating. Elaboration on the reasons for the rating (and possibly the descriptors; try looking on esrb.org) is nice, but not necessary.
/msg Insert Coin - Send Insert Coin a /msg with the title of any game nodes, be they about prominent figures in the video games industry, video games characters, video games platforms, or just about video games. Generally, there's a metanode that your node will fit into, and, hey, who doesn't like free nodevertising?
Some excellent sources for this kind of basic, dry information would be gamefaqs.com, klov.com, gamers.com, 3dgamers.com, vgmuseum.com, and classicgaming.com.
What You May Like To Include.
On the other hand, E2 is not any of those sites, and the job of recording dry, inoffensive information is taken. While database information is all well and good, it should be paired with context, descriptions, and, one would hope, your own inimitable writing style.
Gameplay - Describe the way the game plays. Compare it to other games, or just describe it abstractly, but do give the reader an idea of how the game actually plays.
History - Some games have an interesting story as to how they came to exist; try and research and mention this if possible. Try and describe the hype that surrounded the game's release; was it a sleeper hit, a cult classic, the game of that year's E3? If the game was a hit, why was it a hit? Or why did it flop? What features were advertised or promoted? Which weren't? Did the game spawn imitators? Was it an imitator? All of this is the kind of info that makes a game node more than the dry listings on the KLOV. There's a lot to say about the context into which the game was released.
Community - Some games have associated online communities, to be found on the web or in usenet newsgroups. Information about this, locations and significant figures and history, may make your node a more juicy read. Sometimes, the communities may be more interesting than the games!
Both Sides of the Controversy - People argue about games. A lot. Some games seem to spark controversy, though, whether it be between gamers and regulatory groups (Mortal Kombat) or among gamers themselves (Tomb Raider). Try and cover both sides of the controversy, rather than turning the nodes for these games into GTKY debates.
Technology - Some games, like the Legend of Zelda or Quake, were major technical achievements. Make sure you mention these kinds of landmarks. If the game used unusual hardware, like special chips or accessories, or new graphic techniques, or whatever, try and mention it, as well as the impact. In the case of arcade games, try and find out what the wiring scheme (JAMMA, JAMMA+, Konami Standard, etc.) was, if you can.
Emulation - If the game can be emulated, do mention how well it emulates, as well as any notable facts (Is there a variant ROM people should use? Should people be careful of incomplete/corrupt rips, as with Castlevania III?) In general, there's no need to mention the emulator, except in the case of arcade games.
Packaging - Describe the cartridge label and box art, or arcade cabinet, or CD/DVD case, or whatever packaging the game came with, if you can, as it often can remind players of games they had otherwise forgotten.
Points of Failure - Is is a vector graphics arcade game, with the notoriously unreliable monitors? Is it an arcade game with a suicide battery? Is it a console game with a savegame battery? If the game has some common point of failure, it's helpful to mention the problem, as well as any workarounds.
The Progression of the Series - If the game you are noding shares its name with the whole series, do mention (and preferably hard link) the rest of the games, at least in passing. If the game is part of a series, explaining how it fits into the series (story, gameplay changes, etc.) is probably a good idea. Some series deserve a full writeup (or multiple writeups) on the whole series, of course.
Many noders will add a small series progression bar at the bottom of the w/u, for quick reference. For example (taken from Metal Gear Solid):
Metal Gear - Metal Gear 2: Solid Snake - Metal Gear Solid - Metal Gear: Ghost Babel - Metal Gear Solid 2
Things To Avoid
Please don't do this stuff.
Nodesquatting - Please, please, please don't node speculation about a game, unless there's a lot of misinformation or some other pressing reason to node it early. After about three or six months, your w/u just went from useful to work for the editors, and they're all busy, nice folk, so let's not make their job harder. Not only this, but nodesquatting can lead to misplaced w/us under interim or imagined titles, like Metal Gear Solid X.
Duplicating Content - Search for all of the possible variant names for the game, and make very sure that the information in your node isn't elsewhere. This is particularly important for subnodes about a game; for a long time Observations on the Final Fantasy series had much the same information as Final Fantasy. The exception is with international editions of a game, as these generally have substantive differences. For example, Mariokart Advance and Mario Kart Super Circuit overlap somewhat, but this is okay.
Making Subnodes - In general, if you're noding an aspect of a game rather than the game itself, it's preferable to put the info in the game's node. For example, if you want to node Killer Instinct's combo system, put the info in Killer Instinct, not Killer Instinct combo system. This isn't a hard and fast rule; feel free to break it if you have a lot of information (Final Fantasy Tactics Job list).
Cut and paste writeups will die - Don't cut and paste from GameFAQs, and don't cut and paste giant swathes from the manual. The former we can go and read ourselves, and the latter is copyrighted material too. That said, amusing quotes, story summaries, or other short excerpts from the manual are fine.
Avoid Highly Subjective Writeups - Please, please, please, please don't make your w/u a rant about how unspeakably awful a game is without at least covering the basic information. Same goes for gushing about how much you love a game; the internet already has lots of useless fanboy rants. Everything is NOT a BBS.
While you can probably find a template for covering all of the basic information for a game simply by looking at a few high-rated game nodes, here's a template that covers the basic information.
Title:
Developer:
Publisher:
Date Published:
Platforms:
For quick reference, simply cut and paste:<! -- [ ] < > -->
<strong>Title</strong>: <br>
<strong>Developer</strong>: <br>
<strong>Publisher</strong>: <br>
<strong>Date Published</strong>: <br>
<strong>Platforms</strong>: <br>
Here's an alternate block (created by TheBooBooKitty) for Atari games:
Atari 2600 Game
Produced by: Insert manufacturer
Model Number: insert model number
Rarity: insert rarity number
Year of Release: insert year here
Programmer: Insert programmer here
Again, you can simply cut and paste:
<B>[Atari 2600] Game</B><BR>
<B>Produced by:</B> [Insert manufacturer]<BR>
<B>Model Number:</B> insert model number<BR>
<B>[Atari rarity guide|Rarity]:</B> insert rarity number<BR>
<B>Year of Release: [insert year here]<BR>
<B>[Atari 2600 game programmers|Programmer]:</B> [Insert programmer here]
Originally distilled from Insert Coin's homenode, Insert Coin and amib's writeups in video games, and fondue's, Carthag's, and TheBooBooKitty's writeups in Computer And Video Games : Noding convention for entries. Template originally based on one by Carthag. Written based on input from videogames.
Thanks go to amib and the videogames gang.
See also:
Video Games
Insert Coin
Pick titles carefully
The Final Fantasy Numbering System | https://everything2.com/node/document/Archived+E2+FAQ%253A+Video+Games | CC-MAIN-2020-45 | refinedweb | 2,527 | 61.26 |
On Tue, 2007-09-25 at 11:01 +0200, Avi Kivity wrote: > Dan Kenigsberg wrote: > > On Tue, Sep 25, 2007 at 03:28:24AM +0200, andrzej zaborowski wrote: > > > >> Hi, > >> > >> On 24/09/2007, Dan Kenigsberg <address@hidden> wrote: > >> > >>> As with previous "Takes" of this patch, its purpose is to expose host > >>> +{ > >>> + asm("cpuid" > >>> + : "=a" (*ax), > >>> + "=b" (*bx), > >>> + "=c" (*cx), > >>> + "=d" (*dx) > >>> + : "a" (function)); > >>> +} > >>> > >> I haven't really read through the rest of your code but this piece > >> appears to be outside any #ifdef/#endif so it will only build on x86. > >> > > > > I might be missing something here, but isn't not being on the > > TARGET_PATH of Makefile.target enough? I don't see #ifdef TARGET_I386 > > elsewhere under target-i386. I don't mind adding extra protection, I > > just be happy to better understand the whats and whys. > > > > target-i386 means the guest will run i386 instructions, but the host can > be something else (say, powerpc). > > Nothing else uses host instructions in that directory, so no protection > was necessary before.. Regards. -- J. Mayer <address@hidden> Never organized | http://lists.gnu.org/archive/html/qemu-devel/2007-09/msg00441.html | CC-MAIN-2015-35 | refinedweb | 176 | 65.46 |
Practical .NET
I discussed the fundamentals of what an ASP.NET Core tag helper is and how it can make you more productive in a previous post. In this post, I'm going to walk through the basic processing cycle of a tag helper, talk about your options in the first part of the cycle and explain a limitation in that part of the cycle that prevents me from recreating my favorite custom HtmlHelper as a tag helper.
The Tag Helper Processing Cycle

As I discussed in that previous column, a tag helper is a class that attaches itself to HTML-compliant elements in your View or Razor Page. Razor takes care of attaching your tag helper and then calls your helper's Process (or ProcessAsync) method -- which is where you come in. Within the Process method, you first gather information about the element you're attached to and its environment. In the second part of the cycle (my next column) you rewrite the element your helper is attached to in order to create the HTML that will go down to the browser.
When it comes to gathering data that you'll use to rewrite the tag your helper is attached to, you have four sources:
Adding AttributesAdding an attribute to the element your tag helper attaches itself to is easy: Just define a public property as a string. This code adds an attribute called DomainName to whatever element my tag helper is attached to:
public class Contact : TagHelper
{
public string DomainName {get; set;}
Thanks to kabab-casing, any HTML element that this tag attaches to will now acquire an attribute named domain-name. An example that this helper would attach itself to might look like this:
<contact domain-
The value that the attribute is set to ("phvis.com," in this case) will be available to your code through the helper's DomainName property.
If you want, you can specify that your property is to be set to the name of some property on the View's Model object by declaring your property with the ModelExpression type. This will enable any developer using your property to get IntelliSense support for entering a property name from the Model object. More importantly, your code will be passed the value of that property through the ModelExpression's Model property. Here's an example:
public ModelExpression modelData {get; set;}
Getting information about the context your tag helper is executing in is almost as simple. First, declare a property as type ViewContext and then decorate it with the ViewContext attribute (you can call the property anything you like). Unlike my previous example, you don't want this property to add an attribute to your HTML element, so you also decorate it with the HtmlAttributeNotBound attribute. That code looks like this:
[ViewContext]
[HtmlAttributeNotBound]
public ViewContext Vc { get; set; }
Your property will automatically be loaded with a ViewContext object that has a ton of information about the current View and request that triggered your tag helper to run. As an example, in the Process method that's automatically called when processing your tag helper, you can use the ViewContext to access the ViewBag. This code is retrieving a property called IsValid from the ViewBag:
bool x = Vc.ViewBag.IsValid;
The Element Itself
You access information about the element that your tag is attached to through the TagHelperContext object passed to your Process method as a parameter -- a parameter named, cleverly, context. The object's TagName property (as you might expect) lets you find out the name of the element that your tag helper is attached to.
The context object's AllAttributes property lets you access attributes on that element (including attributes that you've created through your tag helper's properties). Using the AllAttributes collection, you can look for specific attributes -- this code, for example, lets you retrieve the value of the class attribute on the element ... provided the element has a class attribute:
string class = context.AllAttributes["class"].Value.ToString();
Unfortunately, that code will blow up if the element doesn't have a class attribute. If you're asking for a specific attribute (or attributes) it's probably better to use TryGetAttribute or TryGetAttributes. Like the TryParse method, these methods accept an out parameter to hold the value of the attribute (if it's found) while they return true or false, depending on whether the attribute is found.
Rewriting the previous code to use TryGetAttribute gives this code that won't ever raise an exception:
TagHelperAttribute tha;
if (context.AllAttributes.TryGetAttribute("class", out tha))
{
string val = tha.Value.ToString();
}
However, if you are interested in attributes, I suspect that it's just as likely that you'll loop through the AllAttributes collection looking for specific attributes and taking some action if the ones you want are present. Code like this might make more sense to you:
foreach (TagHelperAttribute att in context.AllAttributes)
{
if (att.Name == "class")
{
' ... do something with att.Value ...
}
}
Retrieving ContentFinally, the element that your helper is attached to might contain content that you're interested in between its open and close tags. If so, you can retrieve that content through the incongruously named TagHelperOutput object passed to your Process method in the method's output parameter.
The first step in retrieving the content is to call the output parameter's GetChildContentAsync method (since this is an async method, you should probably use the await keyword and add the async modifier to the Process method to simplify your code). Calling the GetChildContentAsync method gives Razor a chance to execute any tag helpers inside your element so that, in the next step, you'll be handed the actual HTML going to the browser. After calling GetChildContentAsync, you need to call the output parameter's GetContent method to get the HTML. GetContent returns a string with the usual C# escape characters for carriage returns (for example, /r).
Typical code to retrieve an element's content looks like this:
TagHelperContent content = await output.GetChildContentAsync();
string contentAsString = content.GetContent();
If you want to determine if there is any content before asking for it you can check the IsEmptyOrWhitespace property on the output parameter's Content property.
Once you've gathered all that information, you're ready to rewrite your tag to deliver something useful to the browser. That's my next column.
My Problem
But before I wrap this up, I should mention the one limitation I've found with tag helpers compared to HtmlHelper.
My custom HtmlHelper wrote out a bunch of HTML elements (a fieldset, a label, a textbox and a div element to hold validation messages). I could create a custom tag helper that would do the same.
The problem is that I should be taking into account all of the attributes that may be decorating the property when I write out my HTML: DataType, DisplayName and so on. That isn't easy to do (I assume I could use reflection to check for those attributes on my and take them into account ... but, gosh, that would be a lot of work I don't want to do). I'd also be obliged to keep updating my tag helper as new attributes are added (don't want to do that, either).
My custom HtmlHelper handled that easily: I just called the other, relevant HtmlHelper methods for creating input tags, labels and validation message div elements. That's not an option with the tag helpers (trust me -- I tried. It's ugly code and doesn't work).
You might think I could simply use nested tag hepers to address that problem. I could have one tag helper that would write out elements that were extended with tag helpers that take those attributes into account. I could nest that tag helper inside a second helper that would use GetChildContentAsync to retrieve those elements-with-helpers and generate the necessary HTML. Unfortunately, that doesn't work either because GetChildContentAsync only converts tag helpers once and won't convert tag helpers generated by the tag helper process (trust me, again -- I not only tried, I found Microsoft documentation saying this doesn't work).
Which means that, right now, I either have to give up the efficiency and consistency benefits of my custom HtmlHelper or go back to writing out all the HTML myself. I'm not willing to type that much HTML so I'll be using tag helpers as complements to -- rather than replacements for -- HtmlHelpers. That's not the end of the world ... but it's still too bad because I know that the HTML/CSS designers I work with prefer working with tag helpers rather than with HtmlHelpers.
Too bad for them. Like I | https://visualstudiomagazine.com/articles/2019/05/01/creating-custom-tag-helpers.aspx | CC-MAIN-2019-30 | refinedweb | 1,446 | 58.11 |
import submissions workingAsked by pjkool on April 06, 2016 at 06:35 AM
Hi I try to use
import.jotform.io
I tried to use my own form but it didnt work.
So I did new form with just one field Name.
And import is still not working.
I did cvs as in excample
"1_Name"
"Smith, John"
ad tried with xls as well
but it just is sayingError: Unrecognised Field "1_Name". Please check the template. The top row must match the template.
What I am doing wrong, is there maybe some kind of xls or csv version that I should use, any encodig problem or something like that.
Is the servise working just now or have it some problems itself.
Attached csv what I am trying to import.
Thank You very very much
Anti Kodas
St John’s School, in Estonia
- JotForm Support
It seems that the import.jotform.io service does not work at the moment. I will escalate the issue to our L2 support.
We apologize for any inconvenience this has caused you.
- JotForm Support
I apologize, it seems that the import of the CSV submissions works after all.
Please make sure make sure that you don't have any extra whitespaces in the file.
You can also try adding more entries and see if it works then.
Finally, try clearing your browser cache before importing.
We'll wait for your response.
Hi, I am very sorry
I tested again and still same problem, I used same data in csv (copy paste) as in example there please see my screenshot Import test
I cleared my browser cache, tested also with firefox as well. There is no space in the file. Tested with more entries as well, same situation.
I tried to have most basic working, one field one entry.
You got this working?
Thank You very much
Anti
Sorry for the inconvenience. I have just test your simple form with the name field and was able to import the CSV properly. I have also tested in both chrome and FF. I was able to import the data correctly. As my colleague mentioned, please verify that you do not have extra characters that might be killing the import.
If you are still having problems, we would be glad to review the CSV file. You may send it to our email: support@jotform.com
Please lake sure you include the thread in the subject area. | https://www.jotform.com/answers/811608-I-cant-get-import-submissions-working | CC-MAIN-2017-30 | refinedweb | 405 | 84.37 |
sage vs. python integers and floats in pandas, matplotlib etc.
I have run into a problem before, where e.g. matplotlib did not like sage floats as input for axes ranges and I had to wrap them all in np.float(). Now I found a similar problem with pandas:
import pandas as pd with these indexers [3] of <type 'sage.rings.integer.Integer'>
Is there a way to set up proper parsing for the worksheet at the onset, so that I don't have to wrap numbers and variables in the specific type needed by the respective function every time?
Thanks for your help! | https://ask.sagemath.org/question/35270/sage-vs-python-integers-and-floats-in-pandas-matplotlib-etc/ | CC-MAIN-2017-17 | refinedweb | 104 | 71.85 |
As part of my continuing exploration of things that are slower than they should be I recently came across a problem with VirtualAlloc. If you use the wrong flag with this function then, depending on how much memory you allocate, you may find it running 1,000 times slower, or perhaps even far worse.
This is the zeroth entry in an ongoing series about how to make Windows slower. The next part is Making Windows Slower Part 1: File Access.
The gory details
VirtualAlloc is a Windows function used for low-level memory allocation. It lets you reserve regions of virtual address space and commit pages within those regions.
I recently received a report that an application running on one of our servers was starting up significantly more slowly on machines with more memory. By ‘more slowly’ I mean ridiculously slowly. When we upgraded to 288 GB (yep, that’s a metric boatload of memory) the startup time went from 3 minutes to 25 minutes.
Ouch.
This application reserves most of memory for a cache and it does it in 1 MB chunks, so on a 288 GB machine it was doing 250,000 calls to VirtualAlloc. It is these memory allocations that were taking 25 minutes. And honestly, even 3 minutes is far too long.
A bit of work with ETW tracing together with an inspired guess quickly led to the solution:
Don’t use the VirtualAlloc MEM_TOP_DOWN flag.
The MEM_TOP_DOWN flag tells VirtualAlloc to allocate from the top of the address space. This is helpful when looking for bugs in an application that has just been ported to 64-bit because it ensures that the top 32-bits of addresses are necessary.
Nowhere is there any warning that VirtualAlloc with this flag does not scale well. It appears to use an O(n^2) algorithm where ‘n’ is related to the number of allocations you have made with this flag. From profiling it looks like it is slowly and methodically scanning a list of the reserved memory trying to ensure that it finds the hole with the highest address.
When this flag is removed the time went from 25 minutes to instantaneous. Okay, it probably wasn’t instantaneous but it was so fast that it wasn’t worth measuring.
Testing
I decided to run some tests. I don’t have a 288 GB server so I had to run tests on my underpowered 8 GB laptop. After some experimentation I ended up with code that would VirtualAlloc a bunch of 64-KB address ranges, then free them, and then do it again. All allocations were done with the flags MEM_COMMIT | MEM_RESERVE | MEM_TOP_DOWN. Each time it allocated more, going from 1,000 to 64,000. I made my test program a 64-bit app so that it could easily handle the ~4 GB of address space this required.
I dropped the data into Excel and it made the most perfect O(n^2) graph I have ever seen:
I then tried it without the MEM_TOP_DOWN flag and the results were strikingly different:
Not only has the graph gone from quadratic to linear, but check out the change in the scale. With MEM_TOP_DOWN VirtualAlloc is, for large numbers of allocations, over a thousand times slower, and getting slower all the time!
It’s amusing to put both results on one chart just to see the blue graph (VirtualAlloc without MEM_TOP_DOWN) being indistinguishable from a horizontal line at zero.
Just to be thorough I grabbed an ETW trace. ETW is great for this kind of stuff because it can grab full call stacks all the way from the implementation of VirtualAlloc down to my test code. Here’s the relevant excerpt of the CPU Usage (Sampling) Table:
Over the 8.23 seconds that I looked at the sampling shows that approximately 8,155 milliseconds (8.155 seconds) was spent trying to find the next address, instead of actually adjusting page tables and allocating memory.
Conclusion
Don’t use MEM_TOP_DOWN, except for testing. That’s pretty obvious.
Here’s the test code. No warranty. Don’t remove the copyright notice. Feel free to otherwise use and modify. Compile as 64-bit.
const size_t MaxAllocs = 64000;
__int64 GetQPC()
{
LARGE_INTEGER time;
QueryPerformanceCounter( &time );
return time.QuadPart;
}
double GetQPF()
{
LARGE_INTEGER frequency;
QueryPerformanceFrequency( &frequency );
return (double)frequency.QuadPart;
}
void* pMem[MaxAllocs];
void TimeAllocTopDown(bool topDown, const size_t pageSizeBytes, size_t count)
{
DWORD flags = MEM_COMMIT | MEM_RESERVE;
if (topDown)
flags |= MEM_TOP_DOWN;
assert(count <= _countof(pMem));
__int64 start = GetQPC();
for (size_t i = 0; i < count; ++i)
{
pMem[i] = VirtualAlloc( NULL, pageSizeBytes, flags, PAGE_READWRITE );
assert(pMem[i]);
}
__int64 elapsed = GetQPC() – start;
if (topDown)
printf(“Topdown”);
else
printf(“Normal “);
printf(” allocs took %7.4f s for %5lld allocs of %lld KB blocks (total is %lld MB).\n”, elapsed / GetQPF(), (long long)count, (long long)pageSizeBytes / 1024, (long long)(count * pageSizeBytes) / (1024 * 1024));
// Free the memory.
for (size_t i = 0; i < _countof(pMem); ++i)
{
if (pMem[i])
VirtualFree( pMem[i], 0, MEM_RELEASE );
pMem[i] = 0;
}
}
int _tmain(int argc, _TCHAR* argv[])
{
bool topDown = true;
for (size_t blockSize = 64 * 1024; blockSize <= 64 * 1024; blockSize += blockSize)
{
for (int i = 0; i <= 64; ++i)
{
TimeAllocTopDown(topDown, blockSize, i * 1000);
}
}
return 0;
}
I think the purpose of the MEM_TOP_DOWN flag is for when you allocate a region of space that is going to be used for a long time, not for situations where it needs to be reallocated lots of times. It allows you to allocate a region of address space that is located far away from where more frequent allocation and deallocation happen.
For example: In a program I’m currently writing I have a facility to edit a bunch of text documents. These documents are backed by memory mapped files (when saved) but the edit history of are backed by the pages in a virtual memory region. Now this virtual memory region is supposed to exist for as long as the application is running now it is useful to put this region in the far end of the process user-mode address space, away from all the main action of the application. So I use the MEM_TOP_DOWN flag to put a region (large enough to support an extremely heavy use of my editing facility) at the far end of the address space. I don’t use this flag when committing pages from within this region. So far I use this flag once in my application life time and I can’t see any bad impact on my application.
So I think in your situation you are probably right in avoiding this flag. But it is a useful flag when creating long lived regions.
Regards
Karl
The performance problems with MEM_TOP_DOWN seem to be related to how many outstanding allocations of this type there are. That’s the ‘n’ in O(n^2). So, with just a few allocations you shouldn’t hit any performance problems.
I’m not sure why a long-lived region should be allocated at the top of memory. I don’t see any association between the two. It’s harmless, but I don’t see what the benefit is.
The only good use I’ve heard of for this flag is in debugging 64-bit compatibility problems. If you use MEM_TOP_DOWN then you are guaranteed to get addresses that will not fit in 32 bits, and thus expose any pointer truncation bugs.
It is tru that these addresses wont fit in 32. The purpose in my case is to avoid external fragmentation of the address space. This is usually more of an issue on 32bit systems. And although the 64bit virtual address space is absolutely huge, the user-mode partition is limited to about ~8192 GB on x64 Windows. If you use a lot of your address space then it may be a problem with external fragmentation to reserve a long lived region of space in the middle of the address space. Things would have to be allocated/reserved around this region and some things might not fit “under” the region which may waste that address space. It is generally not a big problem on 64bit applications as the address space is so big, but if you can live with 64bit pointers using this flag is not a bad thing in my opinion.
Note on 64bit windows you are not guaranteed to get ’32bit sized pointers’ by not using this flag, it entirely depends on how much of the your address space is already in use and the size of the region that you are reserving.
I think your point is still very valid for the situation that you describe and for 64bit windows this flag is not really necessary unless you are using extreme amounts of address space.
The performance degradation is duly noted on MSDN stating this about the use of MEM_TOP_DOWN in VirtualAlloc(Ex) calls… “Allocates memory at the highest possible address. This can be slower than regular allocations, especially when there are many allocations.”
That’s encouraging progress. In the VS 2010 documentation it just says “Allocates memory at the highest possible address” so it’s good to see that they have improved it. The acknowledgement that allocations slow down when there are more allocations is particularly good as that hints at the precise problem.
So, good news.
Thanks very much for bringing this to your readers’ attention!
FYI dlmalloc (which is a commonly used memory allocator replacement) calls VirtualAlloc with MEM_TOP_DOWN for any large allocations (by default anything >= 256k). Having such a large threshold puts a very reasonable upper bounds on the maximum number of these allocations you can have in 32-bit and that thankfully limits the penalty of doing N+1 large allocs. In my testing with 1GB worth of 256k top-down allocations, the penalty for doing another was less than 0.2ms.
However, if you’re in 64-bit you may want to increase dlmalloc’s DEFAULT_MMAP_THRESHOLD to compensate for the likely increase in the number of simultaneous large allocations due to the increased address space.
Two things:
1. I think you mean “does it in 1 MB chunks” instead of “does it in 1 GB chunks” because the latter would only make 288 calls instead of 250,000.
2. For 64-bit pointer truncation testing we use the “AllocationPreference” registry setting (described here) instead of the MEM_TOP_DOWN flag. I haven’t tested it to see if it gets the same bad performance, but now I’m curious.
Oops — you’re right. I fixed that. I also fixed your comment 🙂
It would help if Microsoft would actually document the meaning of the allocation preference flag. Is it a bit flag? Is it an address (with probably 4 KB page units) for where allocations should start? My guess is that it is the latter, but forcing us to guess at both the interpretation and the units seems cruel and unhelpful.
My guess would be that performance will be good and that allocations will start at the value of AllocationPreference times 4 KB. Please let me know what you find. In my brief testing I can’t make this flag do anything. I created a REG_DWORD of the requested name and value and it made no difference to allocation patterns of my 64-bit Windows 7 application.
Worked for me; it’s what we used for 64-bit development to detect pointer truncation.
Note: Changing the AllocationPreference requires a restart to take effect.
Ah — that explains why it didn’t work in my tests.
That requirement also makes the lack of proper documentation for this flag even more frustrating. Experimenting to figure out how that setting works when it requires a reboot each time is a bit much. But I may have to do it anyway because the setting would be handy. It’s too bad it’s global instead of per-process.
Sorry, no good. AllocatePreference is not a starting address for allocations. Instead it appears to be a bit flag that sets MEM_TOP_DOWN for all allocations. Thus, it hits the O(n^2) behavior of MEM_TOP_DOWN.
I confirmed the top-down allocation behavior by calling VirtualAlloc in a loop and noting that the addresses returned started with 000007FFFFDF0000 and headed down from there.
I confirmed the slowdown by running my tests code with the topDown flag set to false and verifying that the allocation times showed a perfect O(n^2) behavior. Here are some typical results:
Normal allocs took 0.0000 s for 0 allocs
Normal allocs took 0.0039 s for 1000 allocs
Normal allocs took 0.0163 s for 2000 allocs
Normal allocs took 0.0412 s for 3000 allocs
Normal allocs took 0.0816 s for 4000 allocs
Normal allocs took 0.1313 s for 5000 allocs
Normal allocs took 0.1920 s for 6000 allocs
Normal allocs took 0.2775 s for 7000 allocs
Normal allocs took 0.3707 s for 8000 allocs
Normal allocs took 0.4648 s for 9000 allocs
Normal allocs took 0.5757 s for 10000 allocs
Normal allocs took 0.6803 s for 11000 allocs
Normal allocs took 0.8080 s for 12000 allocs
Normal allocs took 0.9536 s for 13000 allocs
Normal allocs took 1.0996 s for 14000 allocs
Normal allocs took 1.2588 s for 15000 allocs
Normal allocs took 1.4367 s for 16000 allocs
Normal allocs took 1.6244 s for 17000 allocs
It is frustrating that Windows doesn’t bother to offer a per-process flag for doing allocations that start at the 4 GB line. That would be incredibly valuable to anybody doing 64-bit development. Until then my advice from last year is the best option:
Thanks for testing this. I’m also sad to see that it is also O(N^2)! Your technique in the linked post ensures high-bits-set pointers but as you’ve said, it sure would be nice if there was a non-performance-robbing simple flag that did it. Cheers.
What happens if MEM_TOP_DOWN is used, say, instead of MEM_RESERVE in a 32-bit LAA application? Would the same kinds of slowdowns be expected?
More specifically, what would the effect be of injecting code in a graphically intensive (ie., real-time 1st/3rd-person perspective 3D) game to replace MEM_RESERVE with MEM_TOP_DOWN, with the conditions of flProtect must be set to PAGE_READWRITE, lpAddress is NULL, and dwSize is at least 1MB?
You don’t use MEM_TOP_DOWN *instead* of another flag, you use it *in addition to” another flag.
We hit serious problems because we were allocating 250,000 blocks of address space. A 32-bit application can, at most, allocate 65536 blocks of address space (address space is given out in minimum 64 KB blocks) so the worst of the O(n^2) will be avoided. So, MEM_TOP_DOWN can probably be used in a 32-bit app without severe repercussions.
Then again, even a ms or two of delay could be noticeable in a game, so I wouldn’t ship it that way.
It really depends on how many blocks are allocated that way. A few hundred? Probably not a problem.
Whoops – so MEM_TOP_DOWN is an optional flag. Well that shows my naivete on this topic. I really appreciate the explanation, though.
The game code example I gave actually comes from some source code that an author of a set of shader enhancements / memory optimization has made public. The claim associated with the use of the MEM_TOP_DOWN flag is that memory allocation fragmentation is reduced and some users claim they see fewer exception error crashes with it in use.
It sounds as though the slowdowns would likely not be as much of an issue in this situation since, the MEM_TOP_DOWN flag is added only to certain allocations over 1MB. Even with the potential ms slowdowns I guess I’d still be worried about possible race conditions cropping up – but I’m too new to this to know for sure.
Still – what is actually gained by making some allocations at the top of the memory space of a 32-bit LAA (game) app?
I can believe that using MEM_TOP_DOWN might reduce fragmentation, but a more careful analysis would be wise. There are tools such as sysinternals vmmap () that will help with this. In general if you can avoid intermingling allocations of different sizes in the same range then you will avoid fragmentation. If this technique puts all the 1 MB allocations at the top, away from the rest, then that might be helpful.
But at some point you have to consider either reducing memory consumption or going to 64-bit. The main obstacle to that is stubborn 32-bit Windows XP users.
Race conditions should not be an issue, I don’t think.
Pingback: Hidden Costs of Memory Allocation | Random ASCII
Pingback: ETW Central | Random ASCII
Pingback: Making Windows Slower Part 1: File Access | Random ASCII | https://randomascii.wordpress.com/2011/08/05/making-virtualalloc-arbitrarily-slower/?shared=email&msg=fail | CC-MAIN-2018-39 | refinedweb | 2,819 | 63.09 |
From: williamkempf_at_[hidden]
Date: 2001-09-01 13:29:47
--- In boost_at_y..., Beman Dawes <bdawes_at_a...> wrote:
> I believe William Kempf's Boost.Threads submission should be
accepted by
> Boost.
>
> Bill deserves a great deal of credit for patiently working through
a large
> number of issues, for discussing the issues at length and over an
extended
> period of time, and for making changes to the library when
indicated.
Thanks. I think I've honed several skills I was lacking in when this
all started. Got a ways to go yet, but this process has been quite
invigorating.
> A really large number of people helped him. Be sure to look at the
> Acknowledgements page in the docs.
A large number isn't the word for it! None of this could have been
done with out everyone's input. I well know that not everyone is
going to be satisfied with the result (this was one of the reasons
I've heard given for not trying to include a thread library in the
original standard), but because of the strong discussions we had on
here it's hopefully at least a good starting point. Whether or not
the library is accepted I'm grateful for the opportunity to work with
everyone on this. Thanks aren't enough, and I truly appreciate
everyeone's input.
> --------
>
> * The design goals seem to have been met, including the safety
goals that
> are particularly important to me.
>
> * While a number of alternate designs for portions of the library
have been
> discussed, none seem enough better to warrant delay. The
alternatives have
> been beaten to death and it is time to move forward.
Moving forward should include implementations of alternate designs on
top of the existing design. This will help to insure that there are
no holes in the existing design and allow people to evaluate
alternatives.
> * In general, the design seems of reasonable size and functionality
to
> me. While there may be legitimate questions about the inclusion of
some of
> the classes (see next comment), the design has not become bloated
with
> marginally useful features and functions as happens all too often.
>
> * I share Peter Dimov's concerns about semaphore. A long time ago
I asked
> on comp.programming.threads if Boost should avoid any features as
too error
> prone. The one suggestion I got was to avoid semaphores. While
semaphore
> is documented in Boost.Threads as dangerous, perhaps we should
remove it
> entirely or move it to an implementation namespace.
I've found semaphores to be indispensible for building some higher
level constructs. However, one can argue that the higher level
constructs should be provided by the library to begin with and
including this type as an implementation detail will allow this to be
done. So I guess I don't have overly strong feelings one way or
another about this.
> * There are many functions where a reasonable implementation may
wish to
> allocate memory. These should be documented as potentially
throwing
> std::bad_alloc. Like the standard library, it seems to me this can
best be
> done with a blanket "any function may throw std::bad_alloc"
statement
> rather than try to guess which functions an implementation may wish
to do
> dynamic memory allocation.
I'd agree. Any suggestions about where in the documentation to put
this blanket statement?
> * It seems to me that some implementations may run out of resources
(other
> than memory) in the constructors for almost any of the Boost.Thread
> classes. Thus I think there should be a blanket statement that all
> constructors can potentially throw thread_resource_error. If a
blanket
> statement isn't included, at least semaphore and the mutex
constructors
> seem to me to need to allow throwing thread_resource_error.
Unfortunately I didn't document this one properly. Add this to the
list of things to correct.
> * class thread_resource_error needs documentation.
>
> * class xtime needs documentation.
Both were supposed to be documented, and I failed on this one.
Things came up at work after we'd set the deadline and I guess I just
juggled things poorly. I'll try to fix this stuff this weekend so
others can comment on the changes before the review is over.
> * A tutorial would be helpful, although certainly not necessary
right away.
Agreed.
> * Jens Maurer's bounded queue should be included at least as sample
code if
> not as part of the library itself.
The "monitor" example is already an example of a bounded queue. If
we want to leave the bounded queue solely as an example then I'll
just include Jens's refinements in this example. I'd suggest leaving
it as an example, because a queue isn't really a threading concept.
> * Once Boost.Threads is accepted, and thus the interfaces stable,
let's
> quickly try to find implementors for platforms other than Win32 and
Linux,
> and get them to contribute their implementations to Boost.
Yes!
> * I've used (an earlier version of) the Boost.Threads code to
produce a
> multi-threaded version of an existing industrial program. It ran
correctly
> right away, and was 40+ percent faster on a dual processor machine
than the
> single threaded version. I'd never done any multi-threading
previously.
This is exciting. I'd love to hear anyone's stories about usage of
this library (good and bad experiences). You can share with the list
or send them directly to me. :)
> * I haven't reviewed the Boost.Threads code in detail, but what I
have
> looked at seems reasonable and meets our agreed upon guidelines.
Hopefully
> some other reviewer will go over the code with a fine-toothed comb.
Especially those with MT experience!
Thanks,
Bill Kempf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/09/16864.php | CC-MAIN-2019-18 | refinedweb | 960 | 65.73 |
I've noticed that the sources in the current CVS tree have RCS $Id$
headers in them.
I've had nothing but problems with sources that use RCS headers.
Unless the release is done with "cvs export" (which is forgotten more
often than not, as far as I can tell), the headers make it harder to
import the code into another CVS repository.
Also, I've never really seen the point of such headers, at least not
when using cvs. You can always use "cvs log" to find out any
information you might need.
Can we get rid of these headers?
Tom
--
tromey@cygnus.com Member, League for Programming Freedom | http://mail-archives.apache.org/mod_mbox/httpd-dev/199609.mbox/%3Cm1n2yemktr.fsf@creche.cygnus.com%3E | CC-MAIN-2015-48 | refinedweb | 110 | 73.17 |
The Fl_Gl_Window widget sets things up so OpenGL works. More...
#include <Fl_Gl_Window.H>
The Fl_Gl_Window widget sets things up so OpenGL works.
It also keeps an OpenGL "context" for that window, so that changes to the lighting and projection may be reused between redraws. Fl_Gl_Window also flushes the OpenGL streams and swaps buffers after draw() returns..
Creates a new Fl_Gl_Window widget using the given size, and label string.
The default boxtype is FL_NO_BOX. The default mode is FL_RGB|FL_DOUBLE|FL_DEPTH.
Creates a new Fl_Gl_Window widget using the given position, size, and label string.
The default boxtype is FL_NO_BOX. The default mode is FL_RGB|FL_DOUBLE|FL_DEPTH.
Returns an Fl_Gl_Window pointer if this widget is an Fl_Gl_Window.
Use this method if you have a widget (pointer) and need to know whether this widget is derived from Fl_Gl_Window. If it returns non-NULL, then the widget in question is derived from Fl_Gl_Window.
Reimplemented from Fl_Widget.
Returns non-zero if the hardware supports the given or current OpenGL mode.
Returns non-zero if the hardware supports the given or current OpenGL mode.
Returns non-zero if the hardware supports the given or current OpenGL mode.
Returns true if the hardware overlay is possible.
If this is false, FLTK will try to simulate the overlay, with significant loss of update speed. Calling this will cause FLTK to open the display.
Returns or sets a pointer to the GLContext that this window is using.
This is a system-dependent structure, but it is portable to copy the context from one window to another. You can also set it to NULL, which will force FLTK to recreate the context the next time make_current() is called, this is useful for getting around bugs in OpenGL implementations.
If destroy_flag is true the context will be destroyed by fltk when the window is destroyed, or when the mode() is changed, or the next time context(x) is called.
Will only be set if the OpenGL context is created or recreated.
It differs from Fl_Gl_Window::valid() which is also set whenever the context changes size.
Draws the Fl_Gl_Window.
You must subclass Fl_Gl_Window and provide an implementation for draw().
You must override the draw() method.
You may also provide an implementation of draw_overlay() if you want to draw into the overlay planes. You can avoid reinitializing the viewport and lights and other things by checking valid() at the start of draw() and only doing the initialization if it is.
Reimplemented from Fl_Window.
Reimplemented in Fl_Glut_Window.
Hides the window if it is not this window, does nothing in WIN32.
The make_current() method selects the OpenGL context for the widget.
It is called automatically prior to the draw() method being called and can also be used to implement feedback and/or selection within the handle() method.
The make_overlay_current() method selects the OpenGL context for the widget's overlay.
It is called automatically prior to the draw_overlay() method being called and can also be used to implement feedback and/or selection within the handle() method..
If the desired combination cannot be done, FLTK will try turning off FL_MULTISAMPLE. If this also fails the show() will call Fl::error() and not show the window. xid() will change, possibly breaking other code. It is best to make the GL window a child of another window if you wish to do this!
mode() must not be called within draw() since it changes the current context.
Changes the size and position of the window.
If shown() is true, these changes are communicated to the window server (which may refuse that size and cause a further resize). If shown() is false, the size and position are used when show() is called. See Fl_Group for the effect of resizing on the child widgets..
Reimplemented from Fl_Window..
The swap_buffers() method swaps the back and front buffers.
It is called automatically after the draw() method is called.
Is turned off when FLTK creates a new context for this window or when the window resizes, and is turned on after draw() is called.
You can use this inside your draw() method to avoid unnecessarily initializing the OpenGL context. Just do this:
You can turn valid() on by calling valid(1). You should only do this after fixing the transformation inside a draw() or after make_current(). This is done automatically after draw() returns. | http://www.fltk.org/doc-1.3/classFl__Gl__Window.html | CC-MAIN-2015-22 | refinedweb | 718 | 75.5 |
17 September 2012 08:42 [Source: ICIS news]
By ?xml:namespace>
MEK prices have already surpassed first-half June levels to close at $1,325/tonne (€1,007/tonne) CFR (cost & freight) northeast (NE) Asia in the week ended 14 September, according to ICIS pricing data and players are expecting prices to increase further if the costs of naphtha remain at high levels.
“Naphtha prices have stayed at high levels for more than a month. We will have to raise prices in order to maintain our current production rates,’’ a northeast Asian producer said.
Naphtha is the benchmark feedstock for MEK, which is used to produce paints and coatings.
Based on naphtha prices at $990-995/tonne CFR Japan on 17 September, the MEK-naphtha price spread is less than the $400/tonne level normally required by MEK producers.
This has prompted producers to target prices at $1,300-1,340/tonne FOB (free on board)
Their distributors in
“We can consider only a small price increase of around $10-20/tonne unless we can increase our domestic prices further. Demand is still very slow,’’ said a South Korean distributor.
Importers in South Korea, which is a key Asian market, said the domestic consumption did not see any seasonal pick-up in September so far, and they are hoping that end-users and dealers will restock prior to the 29 September-1 October Chuseok holiday.
In the broader Asian market, distributors said they are worried about booking high-cost cargoes because of the availability of more competitively-priced substitutes such as ethyl acetate (etac).
In the printing ink and paint industries, for example, some end-users will consider switching to etac when the prices for etac are $200-300/tonne lower than those for MEK, a distributor in southeast Asia said.
Etac prices were at $980/tonne CFR SE Asia in the week ended 14 September, or about $380/tonne below the prices of MEK in southeast Asia in the same week, according to ICIS data.
Meanwhile, most of the MEK producers in northeast
In
“If the margins do not improve, producers will prefer to delay restarting their plants and divert their feedstock to more lucrative products,’’ a Chinese producer | http://www.icis.com/Articles/2012/09/17/9595880/high-naphtha-costs-to-support-asias-mek-prices.html | CC-MAIN-2014-15 | refinedweb | 370 | 53.14 |
A registry for known page sizes. More...
#include <qgspagesizeregistry.h>
A registry for known page sizes.
QgsPageSizeRegistry is not usually directly created, but rather accessed through QgsApplication::pageSizeRegistry().
Definition at line 73 of file qgspagesizeregistry.h.
Creates a registry and populates it with known sizes.
Definition at line 25 of file qgspagesizeregistry.cpp.
Adds a page size to the registry.
Definition at line 61 of file qgspagesizeregistry.cpp.
Decodes a string representing a preset page size.
The decoded page size will be stored in the size argument.
trueif string was successfully decoded
Definition at line 111 of file qgspagesizeregistry.cpp.
Returns a list of page sizes in the registry.
Definition at line 66 of file qgspagesizeregistry.cpp.
Finds a matching page size from the registry.
Returns the page size name, or an empty string if no matching size could be found.
Orientation is ignored when matching page sizes, so a landscape A4 page will match to the portrait A4 size in the registry.
Definition at line 91 of file qgspagesizeregistry.cpp.
Finds matching page sizes from the registry, using a case insensitive match on the page size name.
Definition at line 77 of file qgspagesizeregistry.cpp. | https://api.qgis.org/api/classQgsPageSizeRegistry.html | CC-MAIN-2022-21 | refinedweb | 196 | 61.53 |
Scala DOM TypesScala DOM Types
Scala DOM Types provides listings and type definitions for Javascript HTML and SVG tags as well as their attributes, DOM properties, and CSS styles.
"com.raquo" %%% "domtypes" % "0.15.1" // Scala.js 1.7.1+ "com.raquo" %% "domtypes" % "0.15.1" // JVM
Our type definitions are designed for easy integration into any kind of library. You can use this project to build your own DOM libraries like React or Snabbdom, but type-safe. For example, popular Scala.js reactive UI library Outwatch recently switched to Scala DOM Types, offloading thousands of lines of code and improving type safety (diff). I am also using Scala DOM Types in my own projects:
- Laminar, a high level reactive UI library for Scala.js
- Scala DOM Builder, a low level DOM manipulation and tree tracking library
- Scala DOM Test Utils, a library that verifies that your DOM node / tree matches provided description
DOM stands for Document Object Model, in our context it's an object that represents an HTML document along with its HTML elements and their attributes, props and styles.
Table of ContentsTable of Contents
- Community
- Why use Scala DOM Types
- What about ScalaTags
- What about scala-js-dom
- Design Goals
- Documentation
- Related Projects
CommunityCommunity
Please use github issues for bugs, feature requests, as well as all kinds of discussions, including questions on usage and integrations. I think this will work better than spreading thin across gitter / stackoverflow / etc. You can watch this project on github to get issue updates if you're interested in following discussions.
See also: Contribution guide
Why use Scala DOM TypesWhy use Scala DOM Types
Canonical use case: you're writing a Scala library that does HTML / DOM construction / manipulation and want to provide a type-safe API like this:
div( h1(rel := "title", "Hello world"), p( backgroundColor := "red", "Welcome to my fancy page!", span(draggable := true, "Fancyness is important.") ), button(onClick := doFancyThing, "Do Fancy Thing"), a(href := "", title := "foo", "Example") )
Of course, your API doesn't need to look anything like this, that's just an example. Scala DOM Types doesn't actually provide the
Tag.apply and
:= methods that you'd need to make this example work.
If you do in fact want similar syntax, you could extend
Tag,
HtmlAttr,
Prop, etc., or provide your own alternatives to those (Scala DOM Types does not require you to use its own traits).
You can also extend the API of those classes with implicit conversions / implicit classes instead of subclassing. Or you might even use Scala DOM Builder if that's what you need, or some of its individual classes (it's also very extensible and reusable).
You don't need to be writing a whole library to benefit from Scala DOM Types, you can use it instead to make your application code more type-safe. For example, your imaginary method
setProperty(element: dom.Element, propName: String, propValue: Any)
could become
setProperty[Value](element: dom.Element, prop: Prop[Value], propValue: Value)
Now you can't pass just about any random string as
propName, and even
propValue is now type checked.
What about ScalaTagsWhat about ScalaTags
ScalaTags is a popular Scala library that contains DOM type definitions similar to what we have here. However, Scala DOM Types is different in a few ways:
More type safe. For example, in Scala DOM Types an
inputtag is linked to Scala.js
HTMLInputElementclass. This lets you provide exact types for the DOM nodes you create, so that you don't need to perform unsafe casts in your application code if you want to e.g. access the
valueproperty on an
inputyou created. Similarly, all attributes, properties and styles are linked to the types that they accept to prevent you from assigning incorrect values.
More flexible. Scala DOM Types does not tell you how to compose your attributes / props / styles / tags together, and does not enforce any rendering paradigm. You are free to implement your own composition. I see that some projects fork ScalaTags just to get the type definitions without everything else. Scala DOM Types does not get in your way, eliminating the need for such forking.
Better representation of native DOM types. Scala DOM Types handles Reflected Attributes consistently, and uses Codecs to properly encode/decode DOM values.
There are some other differences, for example Scala DOM Types uses camelCase for attr / prop / style names because that is consistent with common Scala style.
What about scala-js-domWhat about scala-js-dom
The scala-js-dom project serves a very different purpose – it provides typed Scala.js interfaces to native Javascript DOM classes such as
HTMLInputElement. You can use those types when you already have instances of DOM elements, but you can not instantiate those types without using untyped methods like
document.createElement because that is the only kind of API that Javascript provides for this.
On the other hand, Scala DOM Types lets the consuming library create a type-safe representation of real JS DOM nodes or trees, and it is up to your library's code to instantiate real JS nodes from the provided description. Scala DOM Builder does that in the most straightforward way, but higher level libraries like React, Snabbdom or Laminar could use Scala DOM Types in their own way, e.g. to create virtual or reactive DOM structures.
Oh, and Scala DOM Types does work on the JVM. Obviously you can't get native JS types there, but you can provide your own replacements for specific Scala.js types, or just not bother with such specificity (see
defs.sameRefTags).
Design GoalsDesign Goals
The purpose of Scala DOM Types is to become a standard DOM types library used in Scala.js projects.
Precise TypesPrecise Types
The most important type information must be encoded as Scala types. For example, DOM properties that only accept integers should be typed as such.
Reasonably Precise TypesReasonably Precise Types
The types we provide will never be perfect. For example, MDN has this to say about the
list attribute (
listId in our API):
The value must be the id of a element in the same document. [...] This attribute is ignored when the type attribute's value is hidden, checkbox, radio, file, or a button type.
A far as I know, encoding such constraints as Scala types would be very hard, if possible at all.
This is not to say that we are content with the level of type safety we currently have in Scala DOM Types. Improvements are welcome as long as they provide significantly more value than burden to users of this library. This kind of thing is often subjective, so I suggest you open an issue for discussion first.
FlexibilityFlexibility
Scala DOM Types is a low level library that is used by other libraries. As such, its API should be unopinionated and focused solely on providing useful data about DOM elements / attributes / etc. to consuming libraries in a way that is easy for them to implement.
Sanity Preservation MeasuresSanity Preservation Measures
We should provide a better API than the DOM if we can do that in a way that keeps usage discoverable and unsurprising.
Developers familiar with the DOM API should generally be able to discover the names of attributes / tags / etc. they need using IDE autocompletion (assuming they expect the names to match the DOM API). For example:
forId is a good name for the
for attribute. It avoids using a Scala reserved word, and it starts with
for like the original attribute, so it's easy to find. It also implies what kind of string is expected for a value (an
id of an element).
Within that constraint, we should also try to clean up the more insane corners of the DOM API.
- For example, the difference between
valueattribute vs
valueproperty trips up even experienced developers all the time. Scala DOM Types on the other hand has a
defaultValuereflected attribute and a
valueproperty, which behave the way everyone would expect from the given names or from their knowledge of the DOM API.
- For another example, enumerated attributes like
contentEditablethat in the DOM accept "true" / "false" or "on" / "off" or "yes" / "no" should be boolean attributes in Scala DOM Types.
All naming differences with the DOM API should be documented in the README file (see below). Type differences are generally assumed to be self-documenting.
DocumentationDocumentation
TODO:
- Write about general project structure, builders, etc.
- Provide links to specific implementation examples in other libraries (use my keys + implicits, or use your own keys)
CodecsCodecs
Scala DOM Types provides some normalization of the native HTML / DOM API, which is crazy in places.
For example, there are a few ways to encode a boolean value into an HTML attribute:
- As presence of the attribute – if attribute is present,
true, else
false.
- As string "true" for true, or "false" for false
- As string "yes" for true, or "no" for false.
Which one of those you need to use depends on the attribute. For example, attribute
disabled needs option #1, but attribute
contenteditable needs option #2. And then there are DOM Properties (as opposed to HTML Attributes) where booleans are encoded as actual booleans.
Similarly, numbers are encoded as strings in attributes, with no such conversion when working with properties.
Scala DOM Types coalesces all these differences using codecs. When implementing a function that builds an attribute, you get provided with the attribute's name (key), datatype, and a codec that knows how to encode / decode that datatype into a value that should be passed to Javascript's native DOM API.
For example, the codecs for the three boolean options above are
BooleanAsPresenceCodec,
BooleanAsTrueFalseStringCodec, and
BooleanAsYesNoStringCodec. They have concrete implementations of encode / decode methods but of course you don't have to use those.
Reflected AttributesReflected Attributes
HTML attributes and DOM properties are different things. As a prerequisite for this section, please read this StackOverflow answer first.
For more on this, read Section 2.6.1 of this DOM spec. Note that it uses the term "IDL attributes" to refer to what we call "DOM properties", and "Content attributes" to refer to what we here call "HTML attributes".
So with that knowledge,
id for example is a reflected attribute. Setting and reading it works exactly the same way regardless of whether you're using the HTML attribute
id, or the DOM property
id. Such reflected attributes live in
ReflectedHtmlAttrs trait, which lets you build either attributes or properties depending on what implementation of
ReflectedHtmlAttrBuilder you provide.
To keep you sane, Scala DOM Types reflected attributes also normalize the DOM API a bit. For example, there is no
value attribute in Scala DOM Types. There is only
defaultValue reflected attribute, which uses either the
value HTML attribute or the
defaultValue DOM property depending on how you implement
ReflectedHtmlAttrBuilder. This is because that attribute and that property behave the same even though they're named differently in the DOM, whereas the
value DOM property has different behaviour (see the StackOverflow answer linked above). A corresponding HTML attribute with such behaviour does not exist, so in Scala DOM Types the
value prop is defined in trait
Props. It is not an attribute, nor is it a a reflected attribute.
Reflected attributes may behave slightly differently depending on whether you implement them as props or attributes. For example, in HTML5 the
cols reflected attribute has a default value of
20. If you read the
col property from an empty
<textarea> element, you will get
20. However, if you try to read the attribute
col, you will get nothing because the attribute was never explicitly set.
Note that Javascript DOM performs better for reading/writing DOM props than reading/writing HTML attributes.
Complex KeysComplex Keys
Properties like
className often require special handling in consuming libraries. For example, instead of a
String based interface, you might want to offer a
Seq[String] based one for
className. To facilitate the development of such opinionated APIs we offer these keys in separate traits (
ComplexHtmlKeys and
ComplexSvgKeys) that allow for completely custom types.
If you don't need such customization at all, just use
CanonicalComplexHtmlKeys and
CanonicalComplexSvgKeys traits which implement these keys similar to all the others.
DOM Events &
dom.Event.target
When listening to
onChange,
onSelect,
onInput events found in
FormEventProps, you often need to access
event.target.value to get the new value of the input element the event was fired on. However,
dom.Event.target is an
EventTarget, whereas the
value property is only defined on
HTMLInputElement, which
EventTarget is not.
Properly typing
target in JS events is hard because almost all events in which we care about it could fire not only on
HTMLInputElement, but also
HTMLTextAreaElement, and even
HTMLElement in some cases (
onInput on element with
contentEditable set to
true).
Scala DOM Types provides a few type params in
FormEventProps to help deal with this mess, as well as the
TypedTargetEvent type refinement trait. Ultimately, you simply can't safely use
.target as something other than an
HTMLElement for most events due to the underlying JS API being very dynamic.
For related discussion see issue #13 and Outwatch issue #93, and some comments on PR #9.
CSSCSS
CSS is rather hard to type properly. A lot of CSS properties accept both numbers and a set of magic strings such as "auto", or specially formed strings in multiple formats such as "2px 5em 0 auto". We attempt to deal with this the same way that ScalaTags does, by defining objects for some CSS properties that have shorthand methods for applicable magic strings defined on them.
The downside of this approach is that this requires Scala DOM Types to venture outside of its design scope, as for such a setup to work Scala DOM Types needs to be aware of the concept of setters –
Modifiers that set a particular property to a particular value, such as
backGroundColor := "auto".
See Issue #2 for discussion.
SVGSVG
SVG attributes have the same typing problems as CSS properties (see above), but a different solution in Scala DOM Types. We basically type most SVG attributes as strings. Eventually we hope to find a better solution that will fit both SVG and CSS use cases. See Issue #2 for discussion.
Naming Differences Compared To Native HTML & DOMNaming Differences Compared To Native HTML & DOM
We try to make the native HTML & DOM API a bit saner to work with in Scala.
GeneralGeneral
- All identifiers are camelCased, (e.g.
datalistis
dataList) for consistency with conventional Scala style.
data-<suffix>attributes are created using
dataAttr(suffix: String)factory.
aria-<suffix>attributes are available without the
aria-prefix in the
AriaAttrstrait. You could thus create
object aria extends AriaAttrs[...]to namespace those attributes.
Attributes & PropsAttributes & Props
valueattribute is renamed
defaultValuebecause native naming is misleading and confusing (example)
- Note that the
valueproperty retains its name
checkedattribute is renamed to
defaultCheckedfor the same reason
- Note that the
checkedproperty retains its name
selectedattribute is renamed to
defaultSelectedfor the same reason
- Note that the
selectedproperty retains its name
classattribute is renamed to
classNamefor consistency with reflected property name, and to avoid Scala reserved word
forattribute and
htmlForproperty are available as reflected attribute
forIdfor consistency and to avoid Scala reserved word
idreflected attribute is renamed to
idAttr,
maxattribute to
maxAttr,
minto
minAttr, and
stepto
stepAttrto free up good names for end user code
offsetand
resultSVG attributes renamed to
offsetAttrand
resultAttrrespectively to free up good names for end user code
styleattribute is renamed to
styleAttrto let you implement a custom
styleattribute if you want.
contentattribute is renamed to
contentAttrto avoid conflict with
contentCSS property
formattribute is renamed to
formIdto avoid conflict with
formtag
heightattribute is renamed to
heightAttrto avoid conflict with
heightCSS property
widthattribute is renamed to
widthAttrto avoid conflict with
widthCSS property
listattribute is renamed to
listIdfor clarity and consistency
contextmenuattribute is renamed to
contextMenuIdfor clarity and consistency
TagsTags
- Tags renamed to free up good names for end user code:
style->
styleTag,
link->
linkTag,
param->
paramTag,
map->
mapTag
- Other tag renamings:
title->
titleTagto avoid conflict with
titlereflected attribute
object->
objectTagto avoid Scala reserved word
AliasesAliases
- Attribute
type==
typ==
tpeto avoid Scala reserved word
- Attribute
className==
clsfor consistency with Scala / ScalaTags
How to Use Scala DOM TypesHow to Use Scala DOM Types
So you're considering building a higher level DOM manipulation library on top of Scala DOM Types such as Laminar, Outwatch or ScalaJS-React (the former two use Scala DOM Types, the latter doesn't).
First off, if you're building such a library, you need to know a quite few things about how JS DOM works. Scala DOM Types is just a collection of types, it's not an abstraction layer for the DOM. You're building the abstraction layer. We can't cover everything about JS DOM here, but we did touch on some of the nastier parts above.
Laminar is my own library that uses Scala DOM Types in a pretty standard way. Open Laminar.scala, which is Laminar's public API, and let's see what it has.
As you see we're building up a big
private[laminar] object Laminar. It's marked private because it's exposed under a different alias,
com.raquo.laminar.L. End users are expected to import either this object, or all of its properties (
L._) depending on their preference, everything will work either way.
Our
object Laminar extends quite a few Scala DOM Types traits. Let's see what some of these traits these are, and what kinds of type params we pass to them.
We see
with ReflectedHtmlAttrs[ReactiveReflectedProp] – this brings in the
ReflectedHtmlAttrs trait. The type param of
ReflectedHtmlAttrs indicates the type of Reflected Attribute instanced that you get when you call
L.id or
L.alt etc. In Laminar, this type is
ReactiveProp which extends Scala DOM Types Prop class. So ReactiveProp adds Laminar functionality to Prop, but it doesn't actually have to extend Prop, Prop is only provided as a canonical Prop class to help potential interop between libraries and to reduce boilerplate.
Speaking of boilerplate,
ReflectedHtmlAttrs trait requires Laminar object to also implement
ReflectedHtmlAttrBuilder. Because Laminar uses its own
ReactiveProp type, we have to implement it ourselves, and we do it in
HtmlBuilders trait, which
object Laminar also extends. But if Laminar just used the canonical
Prop instead of its own
ReactiveProp, it could have simply used
Scala DOM Types CanonicalReflectedPropBuilder instead of re-implementing all the same
reflectedAttr method in
HtmlBuilders. But then I would need to add Laminar functionality to
Prop by means of implicit extension methods, and I personally prefer non-implicit APIs.
If you look at
HtmlBuilders you will see that it extends quite a few other traits – those follow the same pattern but for different types of keys – html attrs, event props, style props, tags, etc. And you'll see
object Laminar extending Scala DOM Types traits that provide these listings of attrs, event props, style props, tags, etc.
Lastly, Scala DOM Types code provides quite a few comments explaining what the traits are used for and what the classes represent. Hopefully together with the Laminar example it will be clear enough how to use Scala DOM Types from your library. If you've built libraries on top of ScalaTags, it's a somewhat similar pattern.
Notice that most Scala DOM Types def traits live in the
shared project. Some of them accept many type params to represent different types of DOM Events or HTML elements. This enables you to use Scala DOM Types on the JVM. For implementing a purely frontend library you will want to use the type aliases defined in the
js project, specifically in the package object there. Those aliases provide standard type params for Scala DOM Types definition traits from the scalajs-dom project. Laminar uses those type aliases, and there is no reason not to if you don't need JVM compatibility in your library.
My Related ProjectsMy Related Projects
- Laminar – Reactive UI library based on Scala DOM Types
- Scala DOM Builder – Low-level Scala & Scala.js library for building and manipulating DOM trees
- Scala DOM TestUtils – Test that your Javascript DOM nodes match your expectations
AuthorAuthor
License and CreditsLicense and Credits
Scala DOM Types is provided under the MIT license.
Files in
defs directory contain listings of DOM attributes, props, styles, etc. – Those were adapted from Li Haoyi's ScalaTags, which is also MIT licensed.
Comments marked with "MDN" or linking to MDN website are taken or derived from content created by Mozilla Contributors and are licensed under Creative Commons Attribution-ShareAlike license (CC-BY-SA), v2.5. | https://index.scala-lang.org/raquo/scala-dom-types/domtypes/0.14.3?target=_sjs1.x_2.13 | CC-MAIN-2022-05 | refinedweb | 3,429 | 51.99 |
A Zend Framework 2 tryout
Diving into the code
Since I am quite familiar with Zend Framework 1, I decided to jump into the 2.x beta version to discover what has changed and how it impacts my existing applications.
Rob Allen's step by step tutorial is probably the best point of entry, also for people who have never used Zend Framework. Installation happens by cloning via git (or by grabbing a zip) a skeleton application ready for adding modules.
If you want to play with an already written application, grab the result of the tutorial instead, containing an Album module that performs CRUD over a database table. It's more functional and realistic than a Hello World.
The skeleton application is a good move, as it already includes a functional application and .htaccess, so you only need a bit of Apache configuration to get it up and running. Previously, code generation was the way to set up a Zend Framework 1 application for a newbie, and I can't say getting Zend_Tool and the zf binary to work was all that simple on the first try.
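For reference, the Apache side amounts to pointing a virtual host at the skeleton's public/ directory and allowing the bundled .htaccess to take effect — something along these lines (server name and paths are illustrative):

```
<VirtualHost *:80>
    ServerName zf2app.localhost
    DocumentRoot /var/www/zf2app/public

    <Directory /var/www/zf2app/public>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

AllowOverride All is the important bit, since the skeleton's .htaccess contains the rewrite rules that route every request to index.php.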
The noticeable change in the application structure is the module layout:
Application/
    config/
    src/
    views/
Each module separates production code (classes) from config and from view templates, while not forcing further divisions between controllers, models, services and other objects.
It also means that theoretically you could drop a module in an application as a single folder, along with others. But it doesn't mean that modules do not interact: for example the Two Step View layout specified by the Application module is also used by the other modules. This is done by merging the configurations of each module into a single one.
Standards
With a new major version, the framework can adopt modern standards that interrupt backward compatibility, like Symfony 2 did:
- being PHP 5.3 only, classes are organized into namespaces and not only virtual packages separated by underscores.
- PSR-0 autoloading is adopted in all modules src/ folder, so there is no mismatch between classes and file names.
- no Interface suffix anymore for names.
- marker interfaces for exceptions, so that PHP 5.3 exceptions are used by inheritance and exceptions are associated to a module by implementing an interface (e.g. Zend\Json\Interface).
- no underscore prefix for private and protected properties and methods.
Nothing revolutionary, but it frees from some burdens while writing new code.
Not really lightweight
Configuration is a bit long-winded but scales well: for example, the 'di' key in a module's config file contains all classes along with their injection parameters. A full stack framework comes with accidental complexity, that we have to accept in order to borrow all its features.
Classes are considered equivalent to objects, but I'm sure there is some way to instance more than one object of the same class to inject, since I already see aliases available for controllers.
I would also try to limit this injection mechanism to the framework's classes, as it is a more flexible form of the same old configuration containing database credentials, drivers, paths and so on. I say more flexible as you can reconfigure the object graph instead of providing just a giant INI. For configuration of other layers, I would stick to my own mechanism (domain-related Factories and Builders), independent from external components.
Another point of boilerplate (necessary for performance, I think) is the Module.php configuration that act as a Facade for the module, defining autoloading. It is pretty standard code so it won't be a cause of pain.
Controllers can be very lightweight with respect to Zend Framework 1: they can specify setters for injection instead of grabbing around hunting for objects. It's not a magic feature: you specify their list in the configuration, and this would make you think about collaborators and their number. But the controller remains slim and even instantiable in isolation (in theory).
Actions return a view model so that they do not have to know the view object at all:
class AlbumController extends ActionController { /** * @var \Album\Model\AlbumTable */ protected $albumTable; public function indexAction() { return array( 'albums' => $this->albumTable->fetchAll(), ); } public function setAlbumTable(AlbumTable $albumTable) { $this->albumTable = $albumTable; return $this; } }
Similarities with 1.x
Some conventions and APIs are verbose as in Zend Framework 1, but that means you won't have to learn them again:
- the controller's API and its helpers: from a controller you have access to the request, and to features mixed in such as redirecting.
- Forms allow the definition of elements (with labels, validation, and filters) in subclasses providing init().
- Views conform to the usual folder structure <controller>/<action>.phtml; helpers like escape() and url() are available along with view variables on $this.
Basic database support is also present with the Table Data Gateway classes and its derivatives. Of course you can integrate Doctrine2 or other ORMs of your choice.
Conclusions
The following question is the elePHPant in the room: Why should I use Zend Framework 2 instead of Symfony 2? This analysis provided a point: ZF2 has an easier upgrade path for the code of Zend Framework 1 applications and for the know-how you have acquired. Furthermore, it's not less modern than Symfony in its design choices despite being a port instead of a full rewrite. | http://css.dzone.com/articles/zend-framework-2-tryout | CC-MAIN-2013-20 | refinedweb | 893 | 51.18 |
Package contains Matlab's MAT-File I/O API for .NET 2.0. Supports Matlab 5 MAT-file format.
27 Downloads
Updated 11 Sep 2007.
Inspired by: JMatIO - Matlab's MAT-file I/O in JAVA
Download apps, toolboxes, and other File Exchange content using Add-On Explorer in MATLAB.
Adhara (view profile)
I the library works well, but I have a couple of questions about the supported formats. Is still v5 the only one supported? Thanks.
Greg Brissey (view profile)
Hi,
can't seem to be able to post on Sourceforge, so I'll post here.
I'm trying to duplicate a struct with csmatio
The struct is 1x21 where each column has a field name then followed by values (approx 20 rows). thus there are 21 fields or variable names with 20 values (rows).
I can create the 1x21 struct, but I can seem to add the columns (using MLDouble) appropriately, I've tried a few combinations but always Matlab throws an error that it can not read the file.
Matlab output of struct I'm trying to create:
1x21 struct array with fields:
cfreq_Hz
fspan_Hz
Res_BW_Hz
Vid_BW_Hz
SweepTime_s
AbsStart_dBm
AbsStop_dBm
PeakExcursion_dB
PeakThreshold_dBm
Atten_dB
Detector1
Detector2
npts
rangeIndex
ActResBandwidth_Hz
spec_data.ranges.cfreq_Hz
list 21 values,
Documentation is not very clear on this.
How would I do this.
Thanks
Hervé Ndoguéfouba (view profile)
Ok, seems like I'm able to post my question in sourceforge.
I'll add the same topic there.
@+
rv.
Hervé Ndoguéfouba (view profile)
Hi all,
As I cannot post on the new discussion page, I'll post my question here.
Let's say I have some *.mat files in a zip file and that I need to unzip this file in memory because of some limitations (no access to the disk system for io operations).
I was thinking that using the class MatFileInputStream will be ok, but then I realize that there is no (easy) way to retrieve the arrays like in the MatFileReader class (Content method).
So is there a way to achieve this with the current implementation of csmatio... without updating the code?
Thanks in advance,
@+
rv.
Tobias Otto-Adamczak (view profile)
This download and discussion page is outdated. Please visit instead.
Ben Human (view profile)
Hi,
How do I create a Matlab cell array from a C# List<String> ? Any help would be much appreciated!
Ben
inigo navarro (view profile)
Hey, how can I write DateTime arrays? Cell arrays work just fine but they vastly increase the output file size...
Roberto Conte Rosito (view profile)
Tobias Otto-Adamczak (view profile)
Hi folks,
I created a public discussion forum for the csmatio code on SourceForge today:
I'd like to discuss further topics on csmatio at SourceForge.
Greetings
Tobias
Tobias Otto-Adamczak (view profile)
Hi Diego, this is a bug. Please create a ticket on sourceforge for that:
Diego (view profile)
Hi Tobias,
I need to create empty double variables in the mat file.
I've tried as follows:
MatFileWriter mfw;
matDataStream = new List<MLArray>();
matDataStream.Add(new MLEmptyArray("a"));
mfw = new MatFileWriter("test.mat",matDataStream,true);
This code generates a System.InvalidCastException when trying to write the file, because it is impossible to cast an object of type 'csmatio.types.MLEmptyArray' to 'csmatio.types.MLNumericArray`1[System.Double]'.
Any suggestions?
Thanks again for your support!
Tobias Otto-Adamczak (view profile)
Hi Diego, this is a bug. I created a ticket on sourceforge for that: Please switch over to sourceforge to discuss that.
Diego (view profile)
PS: I'm using csmatio rev07.
Diego (view profile)
Hi, thank you for the library. It's really useful. I've found a bug when writing a mat file containing variables of type single. The file is generated, but opening the file Matlab returns the following error:
??? Error using ==> load
Can't read file C:\csharp_matlab.mat.
Here the code I wrote to generate the file:
MatFileWriter mfw;
List<MLArray> MLdata = new List<MLArray>();
double [] val = {1,2};
float[][] val_float = new float[2][];
val_float[0] = new float[] { 1 };
val_float[1] = new float[] { 2 };
MLdata.Add(new MLDouble("pippo",val,1));
MLdata.Add(new MLSingle("pippo_single", val_float));
mfw = new MatFileWriter(@"C:\csharp_matlab.mat",MLdata,true);
Could you help me, please?
Thanks in advance
Diego
Tobias Otto-Adamczak (view profile)
Hi chelli, here is the complete example:
// create a reader for the file
MatFileReader mfr = new MatFileReader("chelli1.mat");
// get a reference to the matlab struct array named 'spin1'
MLStructure mlStruct = mfr.Content["spin1"] as MLStructure;
for (int i = 0; i < mlStruct.Size; ++i)
{
// get reference to struct member 'spinIm' (5x5 double array)
MLDouble mlSpin = mlStruct["spinIm", i] as MLDouble;
// get values
double[][] spin = mlSpin.GetArray();
}
HTH
Tobias
chelli takfarinas (view profile)
thank you very much
the solution is to use
MLDouble mlDouble = mlStruct.AllFields[index] as MLDouble;
double x = MLDouble.Get (index);
Tobias Otto-Adamczak (view profile)
Hi chelli,
a) For reading struct arrays you can use
mlStruct["Version", index] instead of mlStruct["Version"].
b) For reading a 5x5 double matrix instead a scalar double value you can use
mlW.GetArray() instead of mlW.Get(0).
HTH
Tobias
chelli takfarinas (view profile)
hi;
thank you for your help;
but in my files I have a variable of type 1xN struct array with fields:
spinIm(5x5);
how I Get data without using ContentToString
thanks.
Tobias Otto-Adamczak (view profile)
Hi chelli,
reading of structs has a bug in the original 2007 version of David. Make sure you use the latest csmatio version from
The csmatio source code contains a demo file named "struct.mat". Here is some demo code to read the content of this mat file:
// create a reader for the file
MatFileReader mfr = new MatFileReader("struct.mat");
// get a reference to the matlab struct named 'X'
MLStructure mlStruct = mfr.Content["X"] as MLStructure;
// get references to some struct member objects
MLChar mlVersion = mlStruct["Version"] as MLChar;
MLDouble mlW = mlStruct["w"] as MLDouble;
// get values from struct members
string version = mlVersion.GetString(0); // "1.0.5.23354"
double w = mlW.Get(0); // 3874.0
HTH
Tobias
chelli takfarinas (view profile)
hi;
I have files that contain structures;
how to read them in C#;
thank you;
Tobias Otto-Adamczak (view profile)
Thanks Anton! I fixed this. Latest version is available on sourceforge:
Anton (view profile)
File MLNumericArray.cs, line 76.
The Flags property should be
get{return (int)((uint)(base._type & MLArray.mtFLAG_TYPE) |(uint(base._attributes & 0xFFFFFF00));}
or it will make all the numberic data written as Double.
Duc (view profile)
Hi, I found a bug in the library.
File MLArray.cs, line 103.
The Flags property should be
get{ return (int)((uint)(_type & mtFLAG_TYPE) | (uint)(_attributes & 0xFFFFFF00)); }
or it will make all the numberic data written as Double.
And thank you very much for creating this awesome library.
Regards.
Jen (view profile)
Jayant,once you created the 3D array, then use: array.Set(value,row_ind,col_index). For example, your 3D array is m*n*3,for 1st dimension, use array.Set(value,row_ind,col_index);for 2nd dimension, use array.Set(value,row_ind,col_index+n);for 3rd dimension, use array.Set(value,row_ind,col_index+2n)
Jayant (view profile)
How can we save a 3 dimensional array of type MLUInt8? I am able to create the array of 3 dimensions but I am not able to figure out how to populate the data inside it.
Jen (view profile)
I took a look at inside of it and found the data types are little confusing. I will see what I can do, thanks for the trust. Let's solve this problem, everyone!
Tobias Otto-Adamczak (view profile)
Hi Jen, thank you for sharing this. AFAIK David is currently not working on the code. Neither do I. But if you could make some improvements I would be glad to test your patches and update the code on sourceforge.
HTH, Tobias
Jen (view profile)
one more thing, CSMatIO has no problem reading uint16/int8 files that is created by CSMatIO itself.
It only has problem when I tried Matlab "save()" function created mat files.
Thank you
Jen (view profile)
It still doesn't work Tobias. I didn't make it very accurate in the last post: For uint16, error can be catched, saying "invalid binary MAT-file!"; For int8, the debugger stopped at ln386 in MatFileReader.cs and error message says "Additional information: Unable to cast object of type 'csmatio.types.MLInt8' to type'csmatio.types.MLNumericArray`1[System.Byte]'."
Has anyone tried on uint16 or int8 before?
Thank you for your reply.
Tobias Otto-Adamczak (view profile)
Hi Jen, this might be due to one of the few shortcomings of Davids original code. Have a look at the latest version at sourceforge and tell me if it behaves the same way:
HTH, Tobias
Jen (view profile)
I have no problem reading in Uint8 and Double.
But I couldn't read in "int8" and "uint16" type MAT file, debug stopped at Ln386 in "MatFileReader.cs", and error message says"Additional information: Unable to cast object of type 'csmatio.types.MLInt8' to type'csmatio.types.MLNumericArray`1[System.Byte]'."
What might be wrong? Thank you
Tobias Otto-Adamczak (view profile)
Hello kushal, CSMatIO is about .NET, not Java. For Java consider JMatIO instead.
HTH, Tobias
kushal (view profile)
suppose a mat file consist of multiple matrices and tensor,
how to read those matrices as it is in local java variable so as to perform the operations on those variables such as transpose etc.
e.g: say sample.mat contains
A(mxn),
B(m x n x k),
C(m x k),
D(n x k)
how to read these matrices in java and store them with their original dimensions as it is.
Tobias Otto-Adamczak (view profile)
Hi Laurent, you can easily create a 3D double array as follows:
// init 3D double array (2x3x4 elements)
int[] dims = new int[] { 2, 3, 4 };
MLDouble array3Dim = new MLDouble("cube", dims);
HTH, Tobias
Laurent (view profile)
Very nice and useful library.
A question for Tobias, which was asked (I quote) "How to read the content of a three-dimensional double array with this library."
My question is : how to create a 3D double array with this library ? Methods of MLDouble don't seem to enable that kind of thing...
Nice work anyway.
M. Hakma (view profile)
Tobias Otto-Adamczak (view profile)
Le me give you a basic example how to create nested matlab vars with this lib.
// suppose we have following C# struct
public struct Score
{
public string Name;
public double Value;
}
// initialize an struct object of that type
Score highscore;
highscore.Name = "David";
highscore.Value = 47.3;
// create a corresponding MATLAB structure
MLStructure structure = new MLStructure("highscore", new int[] { 1, 1 });
// create a MATLAB char and double variable and add it to the structure
MLChar scoreName = new MLChar("", highscore.Name);
MLDouble scoreValue = new MLDouble("", new double[] { highscore.Value }, 1);
structure["Name", 0] = scoreName;
structure["Value", 0] = scoreValue;
// save the structure as mat file using MatFileWriter
List<MLArray> mlList = new List<MLArray>();
mlList.Add(structure);
MatFileWriter mfw = new MatFileWriter("data.mat", mlList, false);
HTH, Tobias
Ettore (view profile)
I need to write some nested vars in .mat file, like:
a.b.var1=10
a.b.var2=20
To my understanding, within the MatlabFileWriter, i can't do:
data.Add("a.b.var1", 10)
but I need to implement something like
- check if 'a' exists, create it if not
- check if 'b' exists under 'a', create it if not
- check if 'var' exists under 'b', create it if not
- set value and save
Is that correct?
Tobias Otto-Adamczak (view profile)
Today I was asked how to read the content of a three-dimensional double array with this library. The same goes for arrays with four or more dimensions.
So lets assume we have a 220x180x33 double array called "cube" and we want to read what is in matlab syntax "cube(7,18,29)":
// get a reference to our matlab 'cube' double matrix
MLDouble mlCube = (mfr.Content["cube"] as MLDouble);
if (mlCube != null)
{
// calculate the index of our element
idx = 7-1 + (18-1)*220 + (29-1)*180*220;
// now get the double value
double value = mlCube.Get(idx);
}
HTH, Tobias
Tobias Otto-Adamczak (view profile)
Hi all, I fixed some issues in this really useful libray. Have a look at
HTH, Tobias
diiiego83 (view profile)");
;)
Markus W (view profile)
Had problems reading mat-files with 2-dim arrays with larger number of elements. Saving the file with -v6 option solved the problem (so far).
Tobias Otto-Adamczak (view profile)
Tobias Otto-Adamczak (view profile)
...
Qirong (view profile)...
farid (view profile)!
Tobias Otto-Adamczak (view profile)
Qirong (view profile)?
Tobias Otto-Adamczak (view profile)
Really useful library. However it has some bugs (struct field access is buggy, writing empty strings doesn't work) und is somehow incomplete (unsupported data types). I try to provide some fixes in the next days.
Markus W (view profile)
Easy to use an very helpful
Writing empty string seems to result in exception. Any workaround?
Jörgen Ejr973jr (view profile)
Support for enumerations?
Any plans to add support for enumerations using declared as:
classdef(Enumeration) AdapRequestEnum < int32 ...
I have not found any description of the internal representation in MAT file format.
Bart Ribbens (view profile)
Does there exist a binary Level 7 Mat fileformat?
In the latest document (MAT-File Format
Version 7) they still use the Level 5 binary.
Lalit Parashar (view profile)
Any plans to upgrade it to manipulate binary Level 7 MAT-Files.
Mike (view profile)
Writes to binary memory stream which runs out of memory at ~270 MB (.NET managed heap limitation, I believe) before being written to disk. Would be nice to have the option to take a performance hit but stream directly to disk, enabling creation of larger files.
Missing cases MLArray.mxUINT16_CLASS, MLArray.mxINT16_CLASS, MLArray.mxUINT32_CLASS & MLArray.mxINT32_CLASS in the method ReadMatrix of MatFileReader class
A bit tricky to get into. But once you got it its easy to use. Works stable. | http://www.mathworks.com/matlabcentral/fileexchange/16319-csmatio--mat-file-i-o-api-for-net-2-0?requestedDomain=www.mathworks.com&nocookie=true | CC-MAIN-2017-17 | refinedweb | 2,335 | 57.57 |
Hackers are 'Terrorists' Under Ashcroft's New Act 1021
Carlos writes "Most computer crimes are considered acts of terrorism under John Ashcroft's proposed 'Anti-Terrorism Act,' according to this story on SecurityFocus. The Act would abolish the statute of limitations for computer crime, retroactively, force convicted hackers to give the government DNA samples for a special federal database, and increase the maximum sentence for computer intrusion to life in prison. Harboring or providing advice to a hacker would be terrorism as well. This is on top of the expanded surveillance powers already reported on. The bill could be passed as early as this week. I feel safer already."
There's too many of us (Score:2, Informative)
Put us all in prison, and prisons will be freer than out here.
The true hacker is absolutely, completely, devoted to freedom.
-wp
Re:There's too many of us (Score:2)
(Though security geeks are likely richer and whiter than drug offenders, on average, which will help.)
oh, crap... (Score:4, Funny)
Quick, smash your DSL modems, clear your logs, and run for the hills before the Feds arrive!
Umm, That's not right... (Score:4, Interesting)
Re:Umm, Thats not right... (Score:2)
teaching someone how to disassemble a program?
teaching assembly language?
using a non-MS product?
Flying Instructors (Score:5, Interesting)
Well, you won't go to jail. But the FAA will take your pilot's license away. If you are a pilot, that's nasty. Check out news://rec.aviation.pilots for more.
Without passing a law, without recourse to a *single* elected person, thousands of US citizens have had their source of income removed.
Well, that makes us all safe doesn't it?
Re:Umm, Thats not right... (Score:4, Insightful)
"If you have programming skills, get the fuck out of the States and take your skills with you. Your country obviously doesn't want you anymore."
(Am I now a felon?)
Re:Umm, Thats not right... (Score:2)
Moderators - please mod my original post [slashdot.org] down. As in, "(1, Didn't Read The Fscking Article Before Posting)"
Ouch! (Score:5, Interesting)
All it takes is one bad customer relationship to cause a false accusation...
jeremiah cornelius
Re:Ouch! (Score:5, Funny)
Boy, was she vulnerable! Glad I was able to help her out, really!
Re:Ouch! (Score:3, Funny)
That's why John Ashcroft will be needing a DNA sample from you.
My DNA? (Score:5, Insightful)
So, who wants to take bets that the RIAA gets copyright violators termed as hackers?
Re:My DNA? (Score:4, Funny)
Re:My DNA? (Score:2)
It's so they can identify you when you crash your jumbo-jet into the whitehouse.
Six degrees of separation. (Score:2)
This is a perfect example. Decrypting DVDs under the DMCA is circumvention. Circumvention is hacking. Hacking is now terrorism.
Crack a copy of your new CD so you can have burned copies in your car instead of the originals (in case they get stolen), and you are now a terrorist.
Re:Six degrees of separation. (Score:5, Funny)
Naturally, it takes a politically-connected DA about a month to remedy the situation, particularly if goose-whackers are a mostly misunderstood minority...
Now hang on just a sec... (Score:4, Redundant)
This thing needs to at least be tempered by a clause which adds or defines criminal intent. That is, if hacking is done with the intent to destroy or disable the United States government and/or make actual acts of terrorism (such as blowing people up) easier, then throw the bastards in jail. But defacing some web site doesn't harm the United States government; it's just annoying as hell. And annoying doesn't deserve life in prison without the possibility of parole--especially since actually killing someone is what I would consider slightly more annoying, yet many types of murder don't get anywhere near life.
Re:Now hang on just a sec... (Score:2)
I agree with this statement, unless you hack a major commerce site (the government's revenue source) or a major news site (the government's propaganda outlet). In either of those cases, you're actually threatening the government. The safest thing to do is probably to hack a government information website, since there's very little of value there and most likely no one will even see it for weeks.
Bryguy
Re:Now hang on just a sec... (Score:5, Funny)
I've said this before, but it's worth repeating. The laws that apply in the real world should apply in the cyber world.
Defacing a web face is the same as spraying some grafitti on a wall. Stealing credit card numbers or private information is the same as theft. Bringing down a government web site is sabotage. These should be dealt with the same as they are in the real world.
Defacing a web site is vandalism, and therefore should be treated as a misdemeanor. Stealing credit card numbers or private information would be a misdemeanor or a felony depending on how much was stolen and how much it's worth. Sabotage, deliberate, willful destruction of government property, including websites, *is* terrorism and should be dealt with as such.
I don't see why this is so frickin' hard.
Umm.. (Score:2)
def con (Score:2)
what next?
Re:def con (Score:2)
The FBI will arrest America's best and brightest, crippling high-tech innovation.
No they won't. They'll only arrest those of the best and brightest who bother them. Others of the best and brightest will be threatened arrest and forced to help the government. And then the best of the best and brightest of the brightest won't break the law (or at least won't get caught) in the first place.
what about bugtraq? (Score:5, Interesting)
Re:what about bugtraq? (Score:2)
Security sites often post code that can be used to exploit a particular hole, so that the hole can be better understood and more easily patched.
What about tools like L0phtcrack [atstake.com]?
perversion (Score:5, Insightful)
Re:perversion (Score:5, Insightful)
On that, we agree.
Upon reading the draft bill, I'm not happy with all of the provisions in the bill, but I really don't see anything that says "guy with programming sk1llz == terrorist."
I do see an expansion of The List Of Bad Things We Can Do To Felons (such as DNA sampling), but that's a far cry from "all [cr]ackers are terrorists", let alone "all Hackers are now terrorists and will have to give up DNA samples".
Indeed, only crackers who attack "protected systems" (meaning .gov and .mil boxen - not the d00d who hax0rz the average web site) appear to be in line to get their asses handed to them on a silver platter under this Act, and those provisions I can support. (Hell, those are about the only provisions I'd support ;-)
Earlier, I made a post that said "If you've got programming skills, get the hell outa here." I retract that post. This bill, while odious for many reasons, is not a declaration that America doesn't want its programmers anymore.
Serves me right for replying to /. before reading the fscking article ;-)
CFAA Applies TO EVERY COMPUTER (Score:4, Informative)
You are so wrong you can't believe it. The CFAA defines a "protected computer" to mean a computer that is used in interstate commerce. This means any computer connected to the internet or a modem.
I have litigated CFAA civil actions, and I am here to tell you that virtually ANY unauthorized access where virtually ANY valuable information is received, or where ANY valuable data is modified or changed is quite arguably sufficient to lay down a prima facie case.
This bill is as bad as you first thought it was.
Re:perversion (Score:5, Interesting)
From the bill:
"(19) `protected computer' has the meaning set forth in section 1030
"(20) `computer trespasser' means a person who accesses a protected computer without authorization and thus has no reasonable expectation of privacy in any communication transmitted to, through, or from the protected computer.";
From Title 18 Chapter 47 Sec. 1030;
Used in interstate or foreign communication? How many of you connect to machines and/or through machines without crossing state lines?
Further from the bill:
""SS 25. Federal terrorism offense defined
"As used in this title, the term `Federal terrorism offense' means a violation of, or an attempt or conspiracy to violate-
-snip-
1030(a)(1), (a)(4), (a)(5)(A), or (a)(7) (relating to protection of computers)
-snip-
Okay, so now *maliciously* breaking into basically any computer system is a terrorist act. Couple this with the rest of the increases in anti-terrorism this bill contains, and you're doing *LIFE* in FEDERAL PRISON (aka "no parole") because your Anti-CodeRed Perl script took down some dipshit's enterprise server. Meanwhile child molesters get time off for good behavior.
I don't think anyone thinks "computer crime" shouldn't be punished. Just not to this ridiculous degree.
Re:perversion (Score:4, Insightful)
Excuse me, but you are quite likely to be wrong. Was your computer, or any computer in your possession, infected with Code Red or Nimda? If so, and if it scanned any computers outside of your state, then it's not really a stretch to say that you were outside of the law.
OK, so as a Slashdot reader, you are less likely to be affected by the above. But how many of your friends were?
Also, this bill will eliminate the statute of limitations on these crimes and allow retroactive prosecution. Therefore, anybody who got Code Red or Nimda can quite plausibly be put in jail for life.
Would they win on defense? Maybe, but they're in jail until the trial is over. And maybe they won't win on defense...
This law hands the power to imprison damn near anyone running Windows IIS over the US government, such that only a lawsuit (inevitably protracted) would get them out.
Who still believes this is about preventing terrorism? What a sick joke! Frankly, I think those proposing this bill are traitors to the United States.
So... (Score:2, Insightful)
Bummer...
Interesting question (Score:2)
hmmm (Score:2, Funny)
Microsoft regularly gives advice to hackers with this thing called the Knowlege Base.
They even have a program (IIS) that aids hackers in break in attempts.
Their new advertisement [theregister.co.uk] advocates the destruction of buildings.
This is clearly one of the worst terror organizations
The US and its allies must take action
Hack chinese websites.. (Score:2, Interesting)
God Damn, I hate John Ashcroft... (Score:2)
Seriously, I'm afraid that this line of reasoning is only going to continue under the Bush administration.
Anyone who violates the conservative faction's very narrow definition of legality and morality is going to face harsher and harsher penalties. It's the 'hackers' right now. I'll be charitable and say that that means anyone who illegally breaks into a computer system or network. It will be expanded in the very near future to include anyone who violates non-circumvention clause of the DMCA. Seriously, how far are those two apart?
It can be reasonably argued that violating copy protections will put illegal technology or information in the hands of terrorists.
The logical progression is pretty evident from that point on. Anyone caught breaking a copyright will be targeted, and then anyone who illegally owns copyrighted material will be targeted.
Hmmm... I wonder if I should encrypt the stash of Anime fansubs on my HDD. Wait, encryption is going to be illegal too! I'm a terrorist either way!
Congress will just keep passing laws to give Bush and Ashcroft what they want in the name of 'National Security'. Don't think for a second that they won't.
A backwards approach to legislation (Score:5, Interesting)
The DMCA and all these supposedly anti-terrorist laws, past and present, take a terribly backward approach to lawmaking. The best laws, like the best software, succeed on minimality and generality. Witness the excellent US constitution, which has been extremely effective considering how long it's been around. The constitution uses very broad terms -- "life", "property", "punishment", "vote" -- and very few specific terms. (Some parts are quite specific, like the quartering of soldiers bit. They seem very quaint now.)
Laws, like software, tend to break if they are designed in specificity but used in generality. The trouble with these new laws is that they create all kinds of special cases and extra circumstances designed for a particular moment in history, which we'll have to support for decades or even centuries. The new terrorist laws, in a way, are like the 640k RAM limit -- they seem good enough for now, but in the future, they'll cripple and break all kinds of things.
The difference is, in this case, it is our fundamental freedoms that are being to get crippled and broken. As always, please please please call your representatives and give them a piece of your mind. They are under a lot of pressure right now, and they need to hear from sensible people.
Giving advice to hackers (Score:2)
Unconstitutional (Score:2, Informative)
abolish the statute of limitations for computer crime, retroactively...
From Article I, section 9, paragraph 3:
"No Bill of Attainder or ex post facto Law shall be passed".
Ex Post Facto refers to laws having a retroactive effect, for those of you wondering.
So, as always, IANAL, but this sure doesn't sound constitutional to me.
So murder is less of an offense than hacking? (Score:4, Insightful)
Re:So murder is less of an offense than hacking? (Score:2, Funny)
Hacking a military site can affect THOUSANDS of lives and national security.
--jeff
Renting appartments might get hard... (Score:2)
We do not harbor terrorists!
Am I dreaming or is this country really THE America?
welcome to the New America (Score:2)
And the recording industry was happy because they convinced people that unauthorized duplication was somehow equivalent to theft of property or stealing from ships on the high seas. Well, I think this tops that!
I think the USA should just take a tip from the Taliban and make all crimes punishable by death or corporeal punishment.
And the message is clear. If you're a high school student thinking of hacking a bank web site and stealing credit card numbers, forget it, KILL THOUSANDS OF PEOPLE INSTEAD! You'll get the same punishment anyway, so do something more stylish!!
It seems a tad broad to me... (Score:2, Insightful)
-Henry
Does that include ... ? (Score:5, Funny)
Bill Gates had better pack his bags now! ("... the most cigarettes.")
is it just me... (Score:2)
Did terrorist actually use anything hightech? (Score:2, Redundant)
The highest tech I have heard of is using email at Kinko's.
Not broad enough! (Score:5, Flamebait)
Seems like this bill needs to be broadened to include itself and John Ashcroft, both of whom seem hell-bent on changing the purpose of government.
List of contacts (Score:5, Informative)-IA,
So let's do something about it (Score:5, Informative)
It takes TEN letters (dead tree letters, email gets deleted immediately) for a Senatorial office to open an issue. TEN. (According to Illinois Senator Dick Durban.) And regardless of the advertising and commercials that politicians raise huge war chests to fund, on election day it is YOUR VOTE that decides who ends up in DC. (East Coast, you have no say over the West Coast one.)
I'd like to issue a call to everyone who posted something modded up to 3 or above: Write a letter to your representatives with the same level of intelligence and Interesting/Insightful content. Write it once and send it three times, once to your Congressperson, and once to each Senator. Fax it if you'd prefer. (Snail mail and fax are what they like the most.) Keep it to one page. Reference the Constitution. Refer to yourself with your most impressive title. (Professor, Ph.d, Senior Engineer, Graduate Student, Independent Developer) and as a registered voter. In the name of the Tux do not tell them that you don't vote, even if that's the case (in which case you should be ashamed of yourself). Then when the next election rolls around, ignore the commercials, take an hour to do your own research, and vote for the candidate that did not support revoking the 4th Amendment and violating Ex Post Facto. It works. (See also: Former Senator Alan Dixon)
For those of you in countries outside of the US, the same applies to you. The Canadian, British, Australian, French, German, etc. governments are all popularly elected as well. (At least the active parts of the British government, anyway.) Politicians are the same everywhere. The same tactics apply. Use them. If you don't, you have no one to blame but yourselves.
I'd Complain But... (Score:2)
Of course Congress is also showing quite a bit of reason [cnn.com] in the face of Ashcroft's demands, too, so maybe calmer heads will prevail. Though I tend to be a glass-is-half-empty kind of guy when it comes to such things.
security through imprisonment. (Score:4, Funny)
The premise of STI is that civilian and military systems dont need to be secured, but instead laws need to be put in place that will require life sentances for so much as a failed telnet login attempt.
In response to our questions Ashcroft had the following statement: "Everyone is aware that securing Microsoft products is as futile as the war-on-drugs(TM), so we decided that rather than attempting to fix the systems - we will just send these E-Terrorists to prison for life for their crimes against Freedom(R). It is important for us to protect-our-children's(TM - H. Clinton) future in the wake of this terrible tragedy. Our new policy is called "If you cant do the right thing, then just do something"
Sure, but what can we do? (Score:4, Insightful)
What bothers/scares me... (Score:2)
That alone is scary enough, but now even stronger punishments, and treatment as what I am going to guess is a capital crime? Ouch. IT is looking even scarier.
(Is scarier a word?)
This is nothing new... (Score:5, Insightful)
As David Quinn put it quite eloquently: Quite depressing, really. (The whole text can be found here [ishmael.com], BTW)
But what can you expect when the whole world has bought into the idea that there is absolutely nothing that any one person can do to change things [ishmael.com]?
-- Shamus
Bleah!
Very disturbing, but not quite as bad as it seems. (Score:3, Informative)
The specific sections of "computer crime" law that appear to be reclassified as "terrorist acts" appear to be only:
1030(a)(1), (a)(4), (a)(5)(A), or (a)(7) (relating to protection of computers)
Which) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;
The only one that concerns me very much here is 5A - it seems like high-paid corporate lawyers could easy "prove" that for example, if 1337D00D@scriptkiddy.com maliciously hacks into and puts a link to his website on the index page, that he's obtained at least $5000 worth of advertisement...
Come to think of it, I'm a little leery of the "or exceeds authorized access" bit in (4) - if one "accesses" a computer to purchase and legally download some proprietary "protected" piece of music or video, and finds a way to convert it to a nonproprietary format for personal use, has one "exceeded authorized access" and is therefore not merely a DMCA Criminal but a full-fledged DMCA Terrorist? It's a bit of a stretch, but I think a wealthy corporation can buy enough lawyer-approved powerpoint slides "proving" this to a non-technical jury...
NOT After Every Hacker (Score:4, Informative)
This list hardly seems to encompass "most computer crimes". For instance merely accessing or stealing non-classified information is not a terrorist act. Nor does it include breaking encryption ala DMCA. Defacing websites is not a terrorist act unless the computer belongs to one of the above categories and changing the website results in nontrivial financial losses. Writing viruses/worms is not a terrorist act unless you intentionally use it in a way that damages "protected" computers. (From the wording, I wouldn't interpret this to include merely releasing it into the wild, but a judicial ruling would have to clarify that issue). The crimes they are signaling out are pretty significant stuff and not just any old act of hacking. Let's not further contribute to the FUD.
What follows are excerpts of the laws in question:
From The Anti-Terrorism Act of 2001 (Draft 2)
Sec. 309: "...the term 'Federal terrorism offense' means a violation of, or an attempt or conspiracy to violate...1030(a)(1), (a)(4), (a)(5)(A), or (a)(7) (relating to protection of computers)..."
From US Code Title 18, Section 1030 [cornell.edu]
;
(a)(5)(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;
(a)
Under the same Section, part (d)(e)(2) and (8): (2) the term "protected computer" means a computer -
Shifting blame (Score:3, Insightful)
Whistleblower protection with real teeth would be more effective in cleaning up inept government agencies. So would giving the federal Inspectors General the power to fire Federal employees. But no, Ashcroft's not asking for that.
Re:Somebody has to say it, but... (Score:3, Insightful)
Re:Somebody has to say it, but... (Score:2, Insightful)
Re:Somebody has to say it, but... (Score:4, Interesting)
That is an excellent example of a victimless "crime" that numerous goodhearted American people are rotting in jail for right now.
Ashcroft's new proposals, though, go far beyond making computer-crime 'crime'. It already is. What he's doing is making it terrorism. People could be jailed for life for the electronic equivilent of graffitti.
"I don't believe that our definition of terrorism is so broad," said Ashcroft. "It is broad enough to include things like assaults on computers, and assaults designed to change the purpose of government."
The irony is that he wants to fight assaults designed to change the purpose of government by changing laws in direct response to a terrorist attack.
The long-term damage from the terror attacks will come from our leaders as they exploit public rage to slip new crap like this into federal law.
Re:Somebody has to say it, but... (Score:4, Interesting)
now make assisting hacking terrorism.
now make hacking crimes retroactively punishable.
i've read bugtraq for years and have not informed the FBI about all the vulnerabilities released on that mailing list - will this make me negligent and punishable? will my punishment come in the form of an official court prosecution, or will special forces be sent in to take me out without ever letting anyone else know? if i move to Norway, will Norway allow the Navy SEALS to seize me?
Beware, that unmarked white van may be coming for you.
Yeah, sure, very paranoid to think that way, but consider history and consider how other police states have started their lives: will we be naive enough to let this one start as well?
Re:Nobody has to say it, but... (Score:4, Interesting)
If we're lucky, the laws will go that way. I sincerely doubt that the careers of the idiots will, though.
What we need in the US is a law that punishes those who pass blatantly unconstitutional laws. Of course, since Congress routinely exempts themselves from legislation, they'd exempt themselves from this, too!
Re:Somebody has to say it, but... (Score:3, Insightful)
> Say someone hax0rs an air traffic
> control system, do they deserve life
> imprisonment?
Yes, they do. For attempted murder, not for
computer crime. They should be tried and executed
or imprisoned for the crime, not for the means.
If we raise the computer crime to the level of a
capital offense, we DIMINISH the meaning of the
capital offenses we already have.
Re:Somebody has to say it, but... (Score:5, Interesting)
stop and think.
if someone commits credit card fraud with said stolen numbers, then we know who the victim is. but we already have a law for that. until some other crime is committed, there was no victim of simply stealing the numbers.
just because a computer was used to commit the crime, it doesn't mean the crime is somehow worse than the same thing done without a computer. theft is theft, and should be treated as such. it's not like we have separate murder laws for guns vs knives...
Re:Somebody has to say it, but... (Score:3, Insightful)
And if I drive home drunk and get away with it, what's the harm?
Re:Somebody has to say it, but... (Score:4, Insightful)
Of course this is all well known. Best way to hack into a network? Get a job there as a Janitor and find a computer that wasn't logged out of.
Anyhow, criminal Laws can be divided into two categories, I've always though:
Laws that prohibit things that are bad.
Laws that might make it easier to enforce the former laws.
So, killing people is bad, so it's illegal.
Owning a gun isn't bad, but making that illegal is believed to make it easier to enforce the killing people law.
Copyright theft is bad. Being able to back-up an acrobat document isn't bad, and in Russia is actually a right, but DCMA is supposed ot mkae it easier to enforce the "no stealing copyright materials" law.
Re:Somebody has to say it, but... (Score:3, Informative)
I'm not against bad things being a crime, but who gets to define what is a crime or not? And what about when new types of hacking/cracking come out? Maybe windows virus authors should be made criminals? How about websites that use cookies to track you (doubleclick anyone?).
The problem with computers and hacking in general is that it's very hard to narrowly define what is and isn't a crime. Mitnick is a sure sign of this, as is Dimitri. On one side ($$) it's a crime of epic proportions, on the other side it's harmless fun, investigation, proving a point, whatever. This has been a problem since phreaking and probably far before....
Re:Somebody has to say it, but... (Score:2, Insightful)
Oddly enough, according to the bill the deciding determiner of whether the unlawful act is a terrorist act is whether or not it was done for financial gain. So hacking a DB of credit card info ISN'T a terrorist act, while snooping around because you want to learn something IS.
I'm sure that violation of the DMCA will be covered under this act soon, as well...
Re:Somebody has to say it, but... (Score:2, Insightful)
You wouldn't think it was fair to sentence someone who scrawled "Kilroy wuz here '01" on the bathroom wall of a pizza parlor to life in prison, would you? Because that's what this law states: Scrawl your name on any website without the author's permission and be punished as if you were Osama bin Laden's personal hackmeister.
Re:Somebody has to say it, but... (Score:2)
which would get you in deep shit if you were caught.
Re:Somebody has to say it, but... (Score:2)
I kmew this Ashcroft guy was trouble.
Evidence of a social breakdown in the US? (Score:3, Interesting)
Yes, the U.S. may be becoming a police state. Not only does the U.S. have at least three agencies that police the entire world, the NSA, the FBI, and the CIA, but the U.S. has the highest percentage of its citizens in prison of any country ever, in the history of the world.
Here are the official December 31, 2000 prison statistics from the U.S. Department of Justice [usdoj.gov]. Sorry about the formatting. The lameness filter is lame. It won't let me post enough leading dots.People in federal and state prisons... 1,312,354
People in local jails... 621,149
People on probation... 3,839,532
People on parole... 725,527
Total number of citizens... 6,498,562
The total population of the United States [census.gov], projected to September 24, 2001 at 6:34:55 PM PDT is 285,218,008. Therefore, 2.3 percent of the entire U.S. population is in prison or involved with the criminal justice system. But remember, many of those are babies or children. About 3.1 percent of all adult U.S. citizens are in prison, jail, or on probation or parole.
An April 20, 2000 ABC News article, U.S. Prison Population Rising [go.com] says that the percentage of growth of the U.S. prison population is rising.
There is other evidence of social breakdown: An August 19, 1998 BBC News article, The United States of murder [bbc.co.uk], says that the city with the highest murder rate, Washington, D.C., has a murder rate 170 times higher than the city with the lowest murder rate, Brussels, Belgium. The nine U.S. cities in this study of murder rates all were in the list of the 12 cities with highest murder rate.
There is evidence that the secret agencies of the U.S. government and the weapons manufactureres have too much control: What should be the Response to Violence?
[hevanet.com].
Re:Somebody has to say it, but... (Score:2, Funny)
Sigh.
Re:Somebody has to say it, but... (Score:3, Funny)
Re:Somebody has to say it, but... (Score:3, Insightful)
But crime punishable by life in prison? With no statute of limitations? Doesn't murder have no statute of limitations and get you life?
There's a difference between 'crime is crime' and having some sense of proportion. geez.
Re:Somebody has to say it, but... (Score:4, Insightful)
Break into their computer, and you're instantly labelled a terrorist. Think there's any chance you'll get much less than the maximum penalty of life? Hell, my high school once informally accused me of piracy (which, incidentally, I was not guilty of) just on the basis that I knew enough and therefore could have done it. If there's anything that makes people paranoid, it's hearing that the Big Bad Hacker is right outside their computer's door.
Fair, no?
Re:Somebody has to say it, but... (Score:2)
Re:Somebody has to say it, but... (Score:2)
But maybe you're right. After we all, we all know the goverment has the best intentions in mind when they pass laws about computer and high-tech crimes. (*cough* DMCA *cough*)
Here's the story. (Score:2, Informative)..
As a "Federal terrorism offense,".
Re:Here's the story. (Score:5, Insightful)
The difference (Score:3, Insightful)
It's still stupid though.
Re:Here's the story. (Score:4, Interesting)
I propose a new Constitutional amendment. The Three-Constitutional Strikes And You're Out amendment. If an elected official votes for three laws that are later found unconstitutional (no statue of limitation, applied retroactively), they are kicked out of office and barred from all government work for life. These people are supposed to know what they are doing and have no fucking excuse for voting for unconstitutional laws.
Re:Somebody has to say it, but... (Score:2, Insightful)
Also, does this mean that we no longer need virus programs and firewalls? I mean, who needs to lock their door when burglary is illegal?
And of course, how does this bode for tech workers? I often have to gain access to a customer's servers. Does this mean a simple "here's some credentials for you to use" is no longer enough? Do I have to have the admin at the customer's site file a contract with his boss and have his boss and himself and myself sign it each and every time I help them out, even if I'm just entering to check their logs because -- hey -- someone might later say it was unauthorized?
Ashcroft can suck my cock -- but we all know these things will be passed. And projects like mozilla.org that have sections on "hacking the code" will become villified for contributing to terrorism. Welcome to the witch-hunts; i'm finding a new line of fucking work.
Re:Somebody has to say it, but... (Score:5, Interesting)
Computer crime should be a crime.
But it already *is* a crime. The question is what is a just response to computer crimes. Some things which are *not* just:
To put it another way, if this law passes then someone could be given life in prison without parole for documenting vulnerabilities which allow systems to be compromised by a cracker or a worm. Indeed, it isn't clear that, with the removal of the statute of limitations, they couldn't charge the people documented the vulnerabilities responsible for eg. Code Red or Nimda under this law.
This provision is like the anti-circumvention provision of the DMCA writ large. Whereas at least the DMCA only applies to access-control restrictions on copyrighted material, this law could potentially make all discussion of any vulnerabilities which allow systems or information to be compromised illegal.
These provisions are so utterly preposterous and out of proportion to the crimes (or so-called crimes) discussed as to boggle the mind.
Re:Somebody has to say it, but... (Score:2, Informative)
Re:Somebody has to say it, but... (Score:2, Insightful)
The problem here isn't so much that they're saying that computer crime is illegal - more that the punishment is ridiculously severe. When deciding on a punishment, you have to decide what the aim of punishment is and how best to achieve that aim. In this case however, the law makers seem to have the aim of getting votes and the best method is to be tough on terrorism of any kind. It pulls at the heart strings of the nation so of course it gets votes.
Besides the political goals though, there are two main aims people have for utilising jail terms as punishment. The first is to remove the villian from society so that we can all forget about them and feel safe again - the death penalty is much more effective at achieving this aim so why not just use it? Some countries take this approach and it works, there is almost zero crime because people know if they commit a crime they are either executed or deported. The problem with this approach is twofold, firstly it expects everyone to lead a near perfect life and never make a mistake (think of how many teenagers commit once off offences to look cool and later learn from their mistakes and go on to be useful to society. The other problem is that eventually you punish the wrong guy and there's no way to set him free again.
The other aim for imprisonment is to teach people a lesson so they can rejoin society and live happily with everyone else again. Countries such as the US and Australia (and many others) with long jail terms don't acheive this goal at all well. The revolving door prison system is well known - most offenders wind up committing more crimes and going back into the system. However, countries which use shorter jail terms tend to have much lower crime rates. Instead of being locked up for 20 years and becoming bitter against society, you spend one or two years in a correctional facility where you are taught skills to help you survive in the world, go through drug rehabilitation if needed and work with councellors to deal with a disturbed past that may be haunting you. After that you have a much better chance of coming back out into society and not only abiding by the law, but also contributing to the community. If you think the cost of this approach is just too great, think about the cost of keeping people in prison for those extra 18 years and you'll find it works out a lot cheaper. It is not a 100% effective measure, some people will recommit and you need to have ways to deal with that - either through different methods of punishment or by longer imprisonments. It does however give criminals a chance to learn from their errors and adopt new skills to remove the temptation to recommit. After all, isn't that what punishment is all about?
I'd love so see some of these bastards go down (Score:2)
I doubt this bill would give me that and I'm not willing to pay the price asked even if it would. Uncle Sam will make his own definition of "protected computer" and it aint me. Enforceability? What a joke. Why should I trade non existent protection for further erosion of the security of my property, papers and personal effects from unreasonable search and seizure?
Anger and vengence are poor advisors and they make bad laws. This set of laws are hyserical.
Re:Harboring the hackers (Score:2)
They harbor data, quite possibly for "crackers", along with other "questionable" sources (along with many legitamite ones too). If I were them, I'd be a little worried.
MadCow.
USA harbors terrorists! (Score:3, Funny)
Is everone infected with Code Red a terrorist?
Silly huh? Well, people thought it was silly to say that the attack would be used as an excuse to abridge our rights further.
Re:calm down (Score:2, Insightful)
Re:calm down (Score:2)
Party politics and blind partisanship is the root cause of an awful lot of bullshit in our government.
Lee
Re:Enough with the whining (Score:2)
Re:Enough with the whining (Score:2, Insightful)
Crime is crime, yes, but the punishment should fit the crime. Adding a few words to a web page as a publicity stunt should not be punished in the same manner as multiple homicide, or armed robbery, or collaborating on a terrorist attack.
I suppose you'd feel comfortable in a society where the judicial system lopped off criminal's bodyparts, as well? Or caned you silly? No thank you. As it is, I think prison should be for VIOLENT OFFENDERS ONLY. There are many ways to pass a sentence on non-violent offenders, without prison, and without impacting society in such a heavy-handed legal and financial way.
--ksw2
Re:Enough with the whining (Score:2)
While there are many things in life that I MUST do, obey the law is not one of them. I think you're a bit confused about the meaning of the word must. Eating is a must, breathing is a must, doing what the state tells me to is not.
Lee
Re:Enough with the whining (Score:2, Insightful)
Re:Well Gee (Score:2)
Re:Hmmmm... (Score:2)
Re:The answer is simple (Score:2)
Re:The answer is simple (Score:3, Insightful)
What you're saying is that smart people like him, who sometimes use a little poor judgment, should be given life sentences in prison? You're saying that was Randall did is on the same level as murder?
Re:oh jesus... (Score:2, Insightful)
It is the same problem people had with the outbreak of school violence. They immediately went to blame violent video games as the 'sole' cause. Also, take cell phones and automobile accidents for example. People blamed those, even though they are one of the smallest causes.
In essence, people look for the easiest thing to blame, which usually ends up being technology, since its 'new,' it must be the source of 'new' problems like terrorism, even though there is no true solution or source to blame for such occurances.
Re:discover a LAN, go to JAIL (Score:3, Informative)
I found this text (from 1030(a)(1), (a)(4), (a)(5)(A)):
(5) (A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;
so, does this also mean that if I happen to ping some windows box and maybe it crashes when I ping it (that doesn't surprise me, does it surprise you?), and that windows box belongs to some whitehouse bigwig, am I now a terrorist? | http://slashdot.org/story/01/09/24/2044242/hackers-are-terrorists-under-ashcrofts-new-act | CC-MAIN-2014-42 | refinedweb | 6,867 | 64.91 |
Michael, this is an interesting read – thanks. How does this approach affect your ability to restore individual My Sites? I have always considered that one of the advantages of hosting My Sites in a separate web application is that they are in a separate set of content databases from your main ‘portal’, which makes restoration easier.
Hi Martin – Nice to meet you. We’ve never had a problem restoring individual mysites on a shared web app, however, we worry about upgrade and patching. While we’ve never had a problem with upgrade and patching, we think keeping them separate ensures operations flexibility for future releases. However, customers love a single url for all things SharePoint and it does simplify procurement and maintenance of DNS namespaces and certificates. We are actually considering splitting mysites into a separate web app in our next release because of the aforementioned concerns.
😉
Great post and interesting ideas. Not disagreeing with you, but I would add the following thoughts:
1. If My Sites are in their own provider (Web App), then new My Sites inherit navigation (among others) from the My Site Host in the root managed path. This is very useful to modify settings without changing the My Site site def.
2. When My Sites are in their own Web App, we can change the web app policies and available permission levels to all my sites in that Web app, en masse.
Food for thought. Loved the presentations in Vegas. Keep up the great work!
-ben | https://blogs.msdn.microsoft.com/mikewat/2007/10/22/hosting-sites-and-personal-sites-in-the-same-web-app/ | CC-MAIN-2016-40 | refinedweb | 257 | 63.8 |
Hello Dear Friends, today I am going to share how to print a document from an Android application using Google Cloud Print. First of all, add the following permissions to your AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
1)MainActivity.java
package com.manish.googleprintdemo;
import java.io.File;
import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.Toast;
/**
*
* @author manish
*
*/
public class MainActivity extends Activity {
Button btnPrint;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
btnPrint=(Button)findViewById(R.id.button1);
btnPrint.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
if (isNetworkAvailable() == false) {
Toast.makeText(MainActivity.this,
"Network connection not available, Please try later",
Toast.LENGTH_SHORT).show();
} else {
// Build the print intent; the PDF path matches the one discussed in the comments below
File pdfFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/personal/xyz.pdf");
Intent printIntent = new Intent(MainActivity.this, PrintDialogActivity.class);
printIntent.setDataAndType(Uri.fromFile(pdfFile), "application/pdf");
printIntent.putExtra("title", "Android print demo");
startActivity(printIntent);
}
}
});
}
private boolean isNetworkAvailable() {
ConnectivityManager connectivity = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo netInfo = connectivity.getActiveNetworkInfo();
if (netInfo != null && netInfo.isConnected()) {
Log.e("Network Testing", "***Available***");
return true;
}
Log.e("Network Testing", "***Not Available***");
return false;
}
}
2)PrintDialogActivity.java
package com.manish.googleprintdemo;
import android.annotation.SuppressLint;
import android.app.Activity;
import android.content.ActivityNotFoundException;
import android.content.ContentResolver;
import android.content.Intent;
import android.os.Bundle;
import android.util.Base64;
import android.webkit.WebSettings;
import android.webkit.WebView;
import android.webkit.WebViewClient;
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
public class PrintDialogActivity extends Activity {
private static final String PRINT_DIALOG_URL = "https://www.google.com/cloudprint/dialog.html";
private static final String JS_INTERFACE = "AndroidPrintDialog";
private static final String CONTENT_TRANSFER_ENCODING = "base64";
private static final String ZXING_URL = "http://zxing.appspot.com";
private static final int ZXING_SCAN_REQUEST = 65743;
/**
* Post message that is sent by Print Dialog web page when the printing dialog
* needs to be closed.
*/
private static final String CLOSE_POST_MESSAGE_NAME = "cp-dialog-on-close";
/**
* Web view element to show the printing dialog in.
*/
private WebView dialogWebView;
/**
* Intent that started the action.
*/
Intent cloudPrintIntent;
@SuppressLint("JavascriptInterface") @Override
public void onCreate(Bundle icicle) {
super.onCreate(icicle);
setContentView(R.layout.print_dialog);
dialogWebView = (WebView) findViewById(R.id.webview);
cloudPrintIntent = this.getIntent();
WebSettings settings = dialogWebView.getSettings();
settings.setJavaScriptEnabled(true);
dialogWebView.setWebViewClient(new PrintDialogWebClient());
dialogWebView.addJavascriptInterface(
new PrintDialogJavaScriptInterface(), JS_INTERFACE);
dialogWebView.loadUrl(PRINT_DIALOG_URL);
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
if (requestCode == ZXING_SCAN_REQUEST && resultCode == RESULT_OK) {
dialogWebView.loadUrl(intent.getStringExtra("SCAN_RESULT"));
}
}
final class PrintDialogJavaScriptInterface {
public String getType() {
return cloudPrintIntent.getType();
}
public String getTitle() {
return cloudPrintIntent.getExtras().getString("title");
}
public String getContent() {
try {
ContentResolver contentResolver = getContentResolver();
InputStream is = contentResolver.openInputStream(cloudPrintIntent.getData());
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buffer = new byte[4096];
int n = is.read(buffer);
while (n >= 0) {
baos.write(buffer, 0, n);
n = is.read(buffer);
}
is.close();
baos.flush();
return Base64.encodeToString(baos.toByteArray(), Base64.DEFAULT);
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return "";
}
public String getEncoding() {
return CONTENT_TRANSFER_ENCODING;
}
public void onPostMessage(String message) {
if (message.startsWith(CLOSE_POST_MESSAGE_NAME)) {
finish();
}
}
}
private final class PrintDialogWebClient extends WebViewClient {
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
if (url.startsWith(ZXING_URL)) {
Intent intentScan = new Intent("com.google.zxing.client.android.SCAN");
intentScan.putExtra("SCAN_MODE", "QR_CODE_MODE");
try {
startActivityForResult(intentScan, ZXING_SCAN_REQUEST);
} catch (ActivityNotFoundException error) {
view.loadUrl(url);
}
} else {
view.loadUrl(url);
}
return false;
}
@Override
public void onPageFinished(WebView view, String url) {
if (PRINT_DIALOG_URL.equals(url)) {
// Submit print document.
view.loadUrl("javascript:printDialog.setPrintDocument(printDialog.createPrintDocument("
+ "window." + JS_INTERFACE + ".getType(),window." + JS_INTERFACE + ".getTitle(),"
+ "window." + JS_INTERFACE + ".getContent(),window." + JS_INTERFACE + ".getEncoding()))");
// Add post messages listener.
view.loadUrl("javascript:window.addEventListener('message',"
+ "function(evt){window." + JS_INTERFACE + ".onPostMessage(evt.data)}, false)");
}
}
}
}
3) activity_main.xml
<RelativeLayout xmlns:android=""
xmlns:tools=""
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:
<Button
android:id="@+id/button1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:
</RelativeLayout>
4) print_dialog.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android=""
android:layout_width="fill_parent"
android:
<WebView
android:id="@+id/webview"
android:layout_width="fill_parent"
android:
</RelativeLayout>
5) AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android=""
package="com.manish.googleprintdemo"
android:versionCode="1"
android:
<uses-sdk
android:minSdkVersion="8"
android:
<uses-permission android:
<uses-permission android:
<uses-permission android:
<uses-permission android:
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:
<activity
android:name="com.manish.googleprintdemo.MainActivity"
android:
<intent-filter>
<action android:
<category android:
</intent-filter>
</activity>
<activity android:
</application>
</manifest>
Thanks!
This is awesome. I learnt a lot from your blogs; I hope all will learn. Thanks for posting this.
You're welcome, Gaurav.
great work!
Thanks..
Hey Manish, I tried using your code, but every time I try to print a document, it says "Document missing". There is no error in my address, I have checked repeatedly. Instead of using Environment.getExternalStorageDirectory().getAbsolutePath() + "/personal/xyz.pdf", I have hard-coded the path. Can you please help me?
Hi Nancy, I think something is wrong in your path, because this is working code, as you can see in the log screenshot. Let me suggest some things:
1) Are you trying to print a PDF file?
2) Is the file on your SD card or in phone memory? If it is on the SD card, please move it to phone memory.
And can you post your file path log and the error, please? Then I can suggest the right way.
Thanks
Hey Manish, I wanted to print a PDF stored at some URL, since this option seemed to be the best for the app I'm making. Any suggestions on how to print a PDF stored online?
Thanks a lot!
Hi Manish Srivastava, thank you for sharing. I have a question: is there another way to print over Wi-Fi in Android?
Hi Luan,
As this needs an internet connection, it is up to the end user whether that comes from a SIM card or from Wi-Fi, so from a code perspective I don't think Wi-Fi matters; what matters is that there is an internet connection. I hope you understand now.
And if you're talking about printing through a Wi-Fi printer, you'll have to follow some steps: scan for the Wi-Fi device, then make a connection, and then make a print request. Below are some links that may help you:
Thank you!
Thanks Manish.....Great work
You're welcome, Yamini!
hi Manish,
I have added a classic Epson printer to Google Drive and your code runs as well and gives the message "print job added".
But the problem is I haven't got a print from the printer.
Could you suggest something?
My email-nishantgupta1205@gmail.com
ph no:+919810680536
Follow the steps below; I hope it will help you:
1) Log in with your Gmail account in desktop Google Chrome, and use the same Gmail account on your mobile device for printing.
2) Go to the settings option in Chrome and from there add the cloud printer that is attached to your desktop.
3) Now print from the mobile device; it will show you the list of attached printers, and you can choose one of them.
Thanks,
Hello Manish,
I have done these steps already. The problem is my file is sometimes added to the print jobs with an error sign.
Do you have any idea?
I think you are getting data == null because of the big size of the attachment or some other reason. Please test on another device: is it working fine there, or are you facing the same issue?
Hi Manish,
Do you know how to print data using USB hosting? If you know, please post that too.
Dear Manish, I wanted to ask you a few questions about your kind posts above for Google Cloud Print. I am using Android "Persistent Storage" rather than doing File/IO using Local Data Storage (which appears to be how you have created that PDF file that you are printing in your demo above). I have not studied your code above too much, but right away I realized that my approach is a bit different (since I am using an ArrayList of Notes that are simply TextView String Items). What I want to kindly ask you is... in my case, when my user presses and holds a ListView item (a note item)... I present a Context Menu with some options they can perform. I would like to use that same context menu to add a feature called "Google Cloud Print" and have that Context Menu trigger something similar to what you are doing above. But as I say, I am not loading any PDF files or anything like that... all my data is already loaded from Persistent Storage, and I simply want to dynamically print that note item that the user has selected from my custom ListView Adapter. How can I utilize your sample code above for my own project? What has to change? And also, do I still need to use ALL of those Manifest Permissions? You have four of them. Two of them I am certainly not sure I still need (since I use Persistent Storage) are: [android.permission.READ_EXTERNAL_STORAGE] and the other one which is related, called: [android.permission.READ_INTERNAL_STORAGE]. I am also not too sure about whether I need the other two permissions you have as well... the (android.permission.INTERNET) and the (android.permission.ACCESS_NETWORK_STATE)
Anyhow... please give me some guidance on how best (most easily) to adapt your sample code above to make it work in my own application, which is utilizing "persistent storage" and a custom ListView adapter to allow the user to select their notes from a list and be able to print them.
P.S. Manish, I tried to also ask about this in a "Hangout" on the Google+ community. I am as new to Google+ as I am to this androidHub forum, so please excuse anything in my posts that seems a bit strange, long, or out of place. I just wanted to be clear as to where I am with my own Android project and how best you can help me. I also did not know that Google+ "Hangout" posts are limited in the number of characters you can post. So that's why I came here and decided to elaborate on my own issue.
Thanks so much for your immediate and kind reply to this.
Sincerely yours,
Alexander
Hello Manish,
I tried to implement your code in my project... and did all the necessary debugging to get most (if not all) of the bugs out. To try and implement your code, I show below all the changes I had to make... and now when I try to use the context menu option to print out a note item to my printer, nothing happens. I am not getting any errors, but nothing happens and nothing is printed on my printer whatsoever. I am testing this on a Samsung Galaxy Note 3 (Android KitKat operating system).
Please keep in mind that I have already registered my HP LaserJet P2035n printer with Google Cloud Print (on my Google+ account using the Chrome browser) successfully. In fact, when I go to a different note-taking application that you may have heard of, called "S Note", I am able to use the "Print" option on a note from that application, and I am able to print the item just fine. So I seem to have set up my printer as well as my Android smartphone device to utilize Google Cloud Print successfully, as I am able to print an S Note item. However, using your code, nothing is printing and I need to figure out why. Here is what I have done in my own application thus far.
1) In my public boolean onContextItemSelected(MenuItem item) , I have created a new Context menu item and check for when it is selected when the user is pressing on a Note Item in my custom ListView. Here is what the code looks like:
if (item.getItemId() == MENU_CLOUD_PRINT_ID) //if user selected to Cloud Print their note, we handle that
{
NoteItem note = notesList.get(currentNoteId); //grab the NoteItem that was selected from list
String message = note.getText(); //get the message of the note that we wish to print
if (isNetworkAvailable() == false) {
Toast.makeText(MainActivity.this,
"Network connection not available, Please try later",
Toast.LENGTH_SHORT).show();}
else
{
Intent printIntent = new Intent (MainActivity.this, PrintDialogActivity.class);
printIntent.putExtra("title", message); // pass the message the user slected , to Print activity
}
}
2) I have created my PrintDialogActivity.java exactly as you have prescribed. I have also bound that activity to the print_dialog.xml layout file as you suggest. Which means my print_dialog.xml file is also exactly setup like yours.
The only thing I did not do (since I don't think I need to in my case) was to create a "Print" button inside my activity_main.xml... the reason is that I am using a context menu to allow the user to initiate the print job, and you are using a button.
So please tell me why Nothing is printing out on my printer ? I am totally lost.
Thanks
Dear Manish,
After some more careful analysis of my code, I realized that in my intent call from my MainActivity.java, I was missing some critical lines that you actually HAD in your else clause. Let me show you what my current else clause looks like: printIntent.putExtra("title", message);
startActivity(printIntent);
}// end of else clause
My problem now is that my Activity Crashes when I run my code. As soon as I add the lines of:
File file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/personal/xyz.pdf");
printIntent.setDataAndType(Uri.fromFile(file), "application/pdf");
and started my activity, with:
startActivity(printIntent);
Now my MainActivity.java crashes. I have a feeling it's because this PDF file (xyz.pdf)
at the path == > /personal/xyz.pdf DOES NOT EXIST
Can you please tell me how I can adapt your code to work with a case like MINE where I AM NOT reading any PDF file that I wish to print. In my case, as I stated earlier, I am allowing the user to Select a ListView Note Item from a LIST and I display a Conext Menu with an option allowing them to HOPEFULLY be able to Print their Note to their Cloud Printer (which has already been setup presumably ahead of time by that user). I dont want my Application to be responsible for helping them setup (configure) their cloud printer (as far as registering it with their Google + account in Chrome -- meaning finding it on their network and setting it in their Google + account for Cloud Printing). I also dont want to be responsible for them downloading and setting up the Google Cloud Print (free app from market place). I ONLY WANT that they should be able to do a print.
So given that I don't have a file xyz.pdf (which is not created by my app and does not exist)... how do I modify your code to make it work in my situation?
Please reply ASAP.
Thanks
Alexander
Hey friend, thanks for your answer. I am sorry, this time your problem seems big and I have a tight deadline, so I am not able to look into it.
Hello Manish. Thanks for the kind and honest reply. Actually, my challenge is not that complicated if I explain my question in another way. Suppose you have a String stored in a variable called "statement", which may be a statement about, let's say, the cruelty of killing elephants for their tusks. Now, let's say you want to print this statement. How can you achieve this (hopefully without the requirement to first create an external PDF file out of the statement, then storing it to disk, and then reloading it and passing it into your PrintDialogActivity.class)?
I mean, Manish, should there not be some simple way to print a string to a printer directly (without resorting to fancy file I/O requirements)?
This is my real question, for anyone who can help me, please.
Thank you all very much.
I can understand what you are trying to print. But as far as I know, you need a file, at least in some format (HTML, PDF, DOC, etc.). In Android there are only two types of printing options:
1) Wireless Bluetooth printing
2) Using a cloud printer, which is what I am doing.
But in both cases you need a file. You have a ListView and want to print it, so you need to write this data into a file; only then can you print.
Well, why are you thinking like that? Handle it at your end; the user will not know about your internal code. After printing the file, just remove it from disk.
Just forget Android for a minute and tell me: can you print a variable in Java, or in any other language, on a desktop with a printer attached? You still need to print an object, a file, etc.
Please don't mind, but do search around on Google; there are some paid libraries with their own printing support, and maybe they can help you. If you find anything, please let me know; I am interested to know.
Thank you!
Since you're writing to the file system you also need to add .
When using Android 4.4 you do not need to use the above code to print; you can also use android.print.
Also add following permission to the manifest: android.permission.WRITE_EXTERNAL_STORAGE
Thank Guy! | http://www.androidhub4you.com/2013/10/android-printer-integration-google.html | CC-MAIN-2014-42 | refinedweb | 2,875 | 50.84 |
Terms defined: Visitor pattern, bare object, dynamic scoping, environment, lexical scoping, stack frame, static site generator
Every program needs documentation in order to be usable, and the best place to put that documentation is on the web. Writing and updating pages by hand is time-consuming and error-prone, particularly when many parts are the same, so most documentation sites use some kind of static site generator to create web pages from templates.
At the heart of every static site generator is a page templating system. Thousands of these have been written in the last thirty years in every popular programming language (and one language, PHP, was created for this purpose). Most of these systems use one of three designs:
Mix commands in a language such as JavaScript with the HTML or Markdown using some kind of marker to indicate which parts are commands and which parts are to be taken as-is. This approach is taken by EJS, which we used to write these lessons.
Create a mini-language with its own commands like Jekyll (which is used by GitHub Pages). Mini-languages are appealing because they are smaller and safer than general-purpose languages, but experience shows that they eventually grow most of the features of a general-purpose language. Again, some kind of marker must be used to show which parts of the page are code and which are ordinary text.
Put directives in specially-named attributes in the HTML. This approach has been the least popular, but since pages are valid HTML, it eliminates the need for a special parser.
In this chapter we will build a simple page templating system using the third strategy. We will process each page independently by parsing the HTML and walking the DOM to find nodes with special attributes. Our program will execute the instructions in those nodes to do the equivalent of loops and if/else statements; other nodes will be copied as-is to create text.
What will our system look like?
Let's start by deciding what "done" looks like. Suppose we want to turn an array of strings into an HTML list. Our page will look like this:
<html> <body> <p>Expect three items</p> <ul z- <li><span z-</li> </ul> </body> </html>
The attribute
z-loop tells the tool to repeat the contents of that node;
the loop variable and the collection being looped over are separated by a colon.
The attribute
z-var tells the tool to fill in the node with the value of the variable.
When our tool processes this page, the output will be standard HTML without any traces of how it was created:
<html> <body> <p>Expect three items</p> <ul> <li><span>Johnson</span></li> <li><span>Vaughan</span></li> <li><span>Jackson</span></li> </ul> </body> </html>
Human-readable vs. machine-readable
The introduction said that mini-languages for page templating quickly start to accumulate extra features. We have already started down that road by putting the loop variable and loop target in a single attribute and splitting that attribute to get them out. Doing this makes loops easy for people to type, but hides important information from standard HTML processing tools. They can't know that this particular attribute of these particular elements contains multiple values or that those values should be extracted by splitting a string on a colon. We could instead require people to use two attributes, as in:
<ul z-
but we have decided to err on the side of minimal typing.
And note that strictly speaking,
we should call our attributes
data-something instead of
z-something
to conform with the HTML5 specification,
but by the time we're finished processing our templates,
there shouldn't be any
z-* attributes left to confuse a browser.
The next step is to define the API for filling in templates. Our tool needs the template itself, somewhere to write its output, and some variables to use in the expansion. These variables might come from a configuration file, from a YAML header in the file itself, or from some mix of the two; for the moment, we will just pass them into the expansion function as an object:
const variables = { names: ['Johnson', 'Vaughan', 'Jackson'] } const dom = readHtml('template.html') // eslint-disable-line const expander = new Expander(dom, variables) // eslint-disable-line expander.walk() console.log(expander.result)
How can we keep track of values?
Speaking of variables, we need a way to keep track of their current values; we say "current" because the value of a loop variable changes each time we go around the loop. We also need to maintain multiple sets of variables so that variables used inside a loop don't conflict with ones used outside it. (We don't actually "need" to do this—we could just have one global set of variables—but experience teaches us that if all our variables are global, all of our programs will be buggy.)
The standard way to manage variables is to create a stack of lookup tables. Each stack frame is an object with names and values; when we need to find a variable, we look through the stack frames in order to find the uppermost definition of that variable.
Scoping rules
Searching the stack frame by frame while the program is running is called dynamic scoping, since we find variables dynamically, while the program is running. In contrast, most programming languages use lexical scoping, which figures out what a variable name refers to based on the structure of the program text.
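To make the distinction concrete, here is a minimal standalone sketch of dynamic scoping (hypothetical frames, not part of the templating tool): the value found for x depends entirely on which frames happen to be on the stack at the moment of the lookup.

```javascript
// A toy dynamically-scoped lookup: search the frames on the stack
// from the most recent one back to the oldest.
const stack = [{ x: 'global' }]

const find = (name) => {
  for (let i = stack.length - 1; i >= 0; i--) {
    if (name in stack[i]) {
      return stack[i][name]
    }
  }
  return undefined
}

const outer = find('x')      // only the global frame exists, so 'global'
stack.push({ x: 'inner' })   // entering a nested scope shadows 'x'
const inner = find('x')      // now 'inner'
stack.pop()                  // leaving the scope...
const after = find('x')      // ...restores 'global'
```

Under lexical scoping the same name would be resolved once, from the structure of the source code, rather than re-searched every time at run time.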
The values in a running program are sometimes called
an environment,
so we have named our stack-handling class
Env.
Its methods let us push and pop new stack frames
and find a variable given its name;
if the variable can't be found,
Env.find returns
undefined instead of throwing an exception.
class Env {
  constructor (initial) {
    this.stack = []
    this.push(Object.assign({}, initial))
  }

  push (frame) {
    this.stack.push(frame)
  }

  pop () {
    this.stack.pop()
  }

  find (name) {
    for (let i = this.stack.length - 1; i >= 0; i--) {
      if (name in this.stack[i]) {
        return this.stack[i][name]
      }
    }
    return undefined
  }

  toString () {
    return JSON.stringify(this.stack)
  }
}

export default Env
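As a quick check of how the environment behaves, this snippet re-declares the class (minus the import/export plumbing, so it runs on its own) and exercises shadowing:

```javascript
// Inner frames shadow outer ones, and popping a frame restores
// the previous binding.
class Env {
  constructor (initial) {
    this.stack = []
    this.push(Object.assign({}, initial))
  }
  push (frame) { this.stack.push(frame) }
  pop () { this.stack.pop() }
  find (name) {
    for (let i = this.stack.length - 1; i >= 0; i--) {
      if (name in this.stack[i]) {
        return this.stack[i][name]
      }
    }
    return undefined
  }
}

const env = new Env({ name: 'outer', count: 3 })
const before = env.find('name')      // 'outer'
env.push({ name: 'inner' })          // as a loop body would do
const during = env.find('name')      // 'inner' shadows 'outer'
const untouched = env.find('count')  // still found in the base frame
env.pop()
const after = env.find('name')       // back to 'outer'
const missing = env.find('absent')   // undefined rather than an exception
```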
How do we handle nodes?
HTML pages have a nested structure,
so we will process them using
the Visitor design pattern.
Visitor's constructor takes the root node of the DOM tree as an argument and saves it.
When we call
Visitor.walk without a value,
it starts recursing from that saved root;
if
.walk is given a value (as it is during recursive calls),
it uses that instead.
import assert from 'assert'

class Visitor {
  constructor (root) {
    this.root = root
  }

  walk (node = null) {
    if (node === null) {
      node = this.root
    }
    if (this.open(node)) {
      node.children.forEach(child => {
        this.walk(child)
      })
    }
    this.close(node)
  }

  open (node) {
    assert(false, 'Must implement "open"')
  }

  close (node) {
    assert(false, 'Must implement "close"')
  }
}

export default Visitor
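To see the traversal order this produces, here is a small subclass that records when nodes are opened and closed. It walks a hand-built tree rather than a real DOM, and re-declares Visitor without the assert-based stubs so that the snippet is self-contained:

```javascript
// A Visitor subclass that traces the order of open/close calls.
class Visitor {
  constructor (root) { this.root = root }
  walk (node = null) {
    if (node === null) { node = this.root }
    if (this.open(node)) {
      node.children.forEach(child => { this.walk(child) })
    }
    this.close(node)
  }
}

class Tracer extends Visitor {
  constructor (root) {
    super(root)
    this.trace = []
  }
  open (node) {
    this.trace.push(`+${node.name}`)
    return true               // always recurse into children
  }
  close (node) {
    this.trace.push(`-${node.name}`)
  }
}

const tree = {
  name: 'html',
  children: [
    { name: 'head', children: [] },
    { name: 'body', children: [{ name: 'p', children: [] }] }
  ]
}

const tracer = new Tracer(tree)
tracer.walk()
// trace is ['+html', '+head', '-head', '+body', '+p', '-p', '-body', '-html']
```

Every node is closed after all of its children, which is exactly the order needed to emit well-nested HTML tags.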
Visitor defines two methods called
open and
close that are called
when we first arrive at a node and when we are finished with it.
The default implementations of these methods throw exceptions
to remind the creators of derived classes to implement their own versions.
The
Expander class is a specialization of
Visitor
that uses an
Env to keep track of variables.
It imports a handler
for each type of special node we support—we will write those in a moment—and
uses them to process each type of node:
If the node is plain text, copy it to the output.
If there is a handler for the node, call the handler's
open or close method.
Otherwise, open or close a regular tag.
import assert from 'assert'

import Visitor from './visitor.js'
import Env from './env.js'

import z_if from './z-if.js'
import z_loop from './z-loop.js'
import z_num from './z-num.js'
import z_var from './z-var.js'

const HANDLERS = {
  'z-if': z_if,
  'z-loop': z_loop,
  'z-num': z_num,
  'z-var': z_var
}

class Expander extends Visitor {
  constructor (root, vars) {
    super(root)
    this.env = new Env(vars)
    this.handlers = HANDLERS
    this.result = []
  }

  open (node) {
    if (node.type === 'text') {
      this.output(node.data)
      return false
    } else if (this.hasHandler(node)) {
      return this.getHandler(node).open(this, node)
    } else {
      this.showTag(node, false)
      return true
    }
  }

  close (node) {
    if (node.type === 'text') {
      return
    }
    if (this.hasHandler(node)) {
      this.getHandler(node).close(this, node)
    } else {
      this.showTag(node, true)
    }
  }
}

export default Expander
Checking to see if there is a handler for a particular node and getting that handler are straightforward—we just look at the node's attributes:
hasHandler (node) {
  for (const name in node.attribs) {
    if (name in this.handlers) {
      return true
    }
  }
  return false
}

getHandler (node) {
  const possible = Object.keys(node.attribs)
    .filter(name => name in this.handlers)
  assert(possible.length === 1,
    'Should be exactly one handler')
  return this.handlers[possible[0]]
}
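Stripped of the class around it, the handler lookup is just a filter over the node's attribute names. In this sketch the handler values are placeholder strings rather than real handler objects:

```javascript
// Which directive applies to this node? Filter its attribute names
// against the table of known handlers.
const HANDLERS = { 'z-if': 'ifHandler', 'z-var': 'varHandler' }
const node = { attribs: { 'z-var': 'name', class: 'highlight' } }

const possible = Object.keys(node.attribs)
  .filter(name => name in HANDLERS)

// Exactly one directive should match; ordinary attributes like
// 'class' are ignored.
const handler = (possible.length === 1) ? HANDLERS[possible[0]] : null
```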
Finally, we need a few helper methods to show tags and generate output:
showTag (node, closing) {
  if (closing) {
    this.output(`</${node.name}>`)
    return
  }
  this.output(`<${node.name}`)
  for (const name in node.attribs) {
    if (!name.startsWith('z-')) {
      this.output(` ${name}="${node.attribs[name]}"`)
    }
  }
  this.output('>')
}

output (text) {
  this.result.push((text === undefined) ? 'UNDEF' : text)
}

getResult () {
  return this.result.join('')
}
Notice that this class adds strings to an array and joins them all right at the end rather than concatenating strings repeatedly. Doing this is more efficient and also helps with debugging, since each string in the array corresponds to a single method call.
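The accumulate-then-join pattern is easy to see in isolation:

```javascript
// Each call records one piece; the final string is built in a single
// pass instead of by repeated concatenation.
const pieces = []
const output = (text) => {
  pieces.push((text === undefined) ? 'UNDEF' : text)
}

output('<p>')
output('hello')
output(undefined)   // a missing value is made visible for debugging
output('</p>')

const result = pieces.join('')
```

Here result is '<p>helloUNDEF</p>', and each element of pieces corresponds to exactly one output call, which is what makes debugging easier.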
How do we implement node handlers?
At this point we have built a lot of infrastructure but haven't actually processed any special nodes. To do that, let's write a handler that copies a constant number into the output:
export default {
  open: (expander, node) => {
    expander.showTag(node, false)
    expander.output(node.attribs['z-num'])
  },

  close: (expander, node) => {
    expander.showTag(node, true)
  }
}
When we enter a node like
<span z-
this handler asks the expander to show an opening tag
followed by the value of the
z-num attribute.
When we exit the node,
the handler asks the expander to close the tag.
The handler doesn't know whether things are printed immediately,
added to an output list,
or something else;
it just knows that whoever called it implements the low-level operations it needs.
Note that this expander is not a class,
but instead an object with two functions stored under the keys
open and close.
We could use a class for each handler
so that handlers can store any extra state they need,
but bare objects are common and useful in JavaScript
(though we will see below that we should have used classes).
So much for constants; what about variables?
export default {
  open: (expander, node) => {
    expander.showTag(node, false)
    expander.output(expander.env.find(node.attribs['z-var']))
  },

  close: (expander, node) => {
    expander.showTag(node, true)
  }
}
This code is almost the same as the previous example. The only difference is that instead of copying the attribute's value directly to the output, we use it as a key to look up a value in the environment.
These two pairs of handlers look plausible, but do they work? To find out, we can build a program that loads variable definitions from a JSON file, reads an HTML template, and does the expansion:
import fs from 'fs'
import htmlparser2 from 'htmlparser2'

import Expander from './expander.js'

const main = () => {
  const vars = readJSON(process.argv[2])
  const doc = readHtml(process.argv[3])
  const expander = new Expander(doc, vars)
  expander.walk()
  console.log(expander.getResult())
}

const readJSON = (filename) => {
  const text = fs.readFileSync(filename, 'utf-8')
  return JSON.parse(text)
}

const readHtml = (filename) => {
  const text = fs.readFileSync(filename, 'utf-8')
  return htmlparser2.parseDOM(text)[0]
}

main()
We added new variables for our test cases one by one as we were writing this chapter. To avoid repetition, we show the entire set once:
{ "firstVariable": "firstValue", "secondVariable": "secondValue", "variableName": "variableValue", "showThis": true, "doNotShowThis": false, "names": ["Johnson", "Vaughan", "Jackson"] }
Our first test: is static text copied over as-is?
<html>
  <body>
    <h1>Only Static Text</h1>
    <p>This document only contains:</p>
    <ul>
      <li>static</li>
      <li>text</li>
    </ul>
  </body>
</html>
node template.js vars.json input-static-text.html
<html>
  <body>
    <h1>Only Static Text</h1>
    <p>This document only contains:</p>
    <ul>
      <li>static</li>
      <li>text</li>
    </ul>
  </body>
</html>
Good. Now, does the expander handle constants?
<html>
  <body>
    <p><span z-</p>
  </body>
</html>

<html>
  <body>
    <p><span>123</span></p>
  </body>
</html>
What about a single variable?
<html>
  <body>
    <p><span z-</p>
  </body>
</html>

<html>
  <body>
    <p><span>variableValue</span></p>
  </body>
</html>
What about a page containing multiple variables? There's no reason it should fail if the single-variable case works, but we should still check—again, software isn't done until it has been tested.
<html>
  <body>
    <p><span z-</p>
    <p><span z-</p>
  </body>
</html>

<html>
  <body>
    <p><span>firstValue</span></p>
    <p><span>secondValue</span></p>
  </body>
</html>
How can we implement control flow?
Our tool supports two types of control flow:
conditional expressions and loops.
Since we don't support Boolean expressions like
and and
or,
implementing a conditional is as simple as looking up a variable
(which we know how to do)
and then expanding the node if the value is true:
export default {
  open: (expander, node) => {
    const doRest = expander.env.find(node.attribs['z-if'])
    if (doRest) {
      expander.showTag(node, false)
    }
    return doRest
  },

  close: (expander, node) => {
    if (expander.env.find(node.attribs['z-if'])) {
      expander.showTag(node, true)
    }
  }
}
Let's test it:
<html>
  <body>
    <p z-This should be shown.</p>
    <p z-This should <em>not</em> be shown.</p>
  </body>
</html>

<html>
  <body>
    <p>This should be shown.</p>
  </body>
</html>
Spot the bug
This implementation of
if contains a subtle bug.
The
open and
close functions both check the value of the control variable.
If something inside the body of the
if changes that value,
the result could be an opening tag without a matching closing tag or vice versa.
We haven't implemented an assignment operator,
so right now there's no way for that to happen,
but it's a plausible thing for us to add later,
and tracking down a bug in old code that is revealed by new code
is always a headache.
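To see why the aside above worries about this, here is a standalone sketch of the hazard. The templating tool has no assignment directive yet, so the change to the control variable is simulated by hand:

```javascript
// If the control variable changes between open and close, the two
// checks disagree and the emitted tags become unbalanced.
const env = { showIt: true }
const out = []

const openIf = () => {
  if (env.showIt) { out.push('<p>') }
  return env.showIt
}
const closeIf = () => {
  if (env.showIt) { out.push('</p>') }
}

openIf()            // condition is true, so '<p>' is emitted
env.showIt = false  // something in the body flips the variable
closeIf()           // condition is now false, so '</p>' is skipped
```

out ends up as ['<p>']: an opening tag with no matching close. Recording the decision made in open and reusing it in close would remove the hazard.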
Finally we come to loops. For these, we need to get the array we're looping over from the environment and do something for each of its elements. That "something" is:
Create a new stack frame holding the current value of the loop variable.
Expand all of the node's children with that stack frame in place.
Pop the stack frame to get rid of the temporary variable.
export default {
  open: (expander, node) => {
    const [indexName, targetName] =
      node.attribs['z-loop'].split(':')
    delete node.attribs['z-loop']
    expander.showTag(node, false)
    const target = expander.env.find(targetName)
    for (const index of target) {
      expander.env.push({ [indexName]: index })
      node.children.forEach(child => expander.walk(child))
      expander.env.pop()
    }
    return false
  },

  close: (expander, node) => {
    expander.showTag(node, true)
  }
}
Once again, it's not done until we test it:
<html>
  <body>
    <p>Expect three items</p>
    <ul z-
      <li><span z-</li>
    </ul>
  </body>
</html>

<html>
  <body>
    <p>Expect three items</p>
    <ul>
      <li><span>Johnson</span></li>
      <li><span>Vaughan</span></li>
      <li><span>Jackson</span></li>
    </ul>
  </body>
</html>
Notice how we create the new stack frame using:
{ [indexName]: index }
This is an ugly but useful trick. We can't write:
{ indexName: index }
because that would create an object with the string
indexName as a key,
rather than one with the value of the variable
indexName as its key.
We can't do this either:
{ `${indexName}`: index }
though it seems like we should be able to. Instead, we create an array containing the string we want. Since JavaScript automatically converts arrays to strings by concatenating their elements when it needs to, our expression is a quick way to get the same effect as:
const temp = {} temp[indexName] = index expander.env.push(temp)
Those three lines are much easier to understand, though, so we should probably have been less clever.
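A standalone comparison of the three spellings makes the difference visible:

```javascript
// Computed property names: the bracketed form uses the *value* of
// indexName as the key, while the unbracketed form uses the literal
// string 'indexName'.
const indexName = 'item'
const index = 'Johnson'

const computed = { [indexName]: index }  // { item: 'Johnson' }
const literal = { indexName: index }     // { indexName: 'Johnson' }

// The explicit three-line version behaves like the computed form.
const temp = {}
temp[indexName] = index
```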
How did we know how to do all of this?
We have just implemented a simple programming language. It can't do arithmetic, but if we wanted to add tags like:
<span z-<span z-<span z-num="1"//>
we could.
It's unlikely anyone would use the result—typing all of that
is so much clumsier than typing
width+1 that people wouldn't use it
unless they had no other choice—but the basic design is there.
We didn't invent any of this from scratch, any more than we invented the parsing algorithm of an earlier chapter. Instead, we did what you are doing now: we read what other programmers had written and tried to make sense of the key ideas.
The problem is that "making sense" depends on who we are. When we use a low-level language, we incur the cognitive load of assembling micro-steps into something more meaningful. When we use a high-level language, on the other hand, comprehension curve looks like the one on the left of , then an expert's looks like the one on the right. Experts don't just understand more at all levels of abstraction; their preferred level has also shifted so that \(\sqrt{x^2 + y^2}\). In an ideal world our tools would automatically re-represent programs at different levels, so that with a click of a button we could view our code as either:
const. But today's tools don't do that, and I suspect that any IDE smart enough to translate between comprehension levels automatically would also be smart enough to write the code without our help.
Exercises
Tracing execution
Add a directive
<span z-
that prints the current value of a variable using
console.error for debugging.
Unit tests
Write unit tests for template expansion using Mocha.
Trimming text
Modify all of the directives to take an extra optional attribute
z-trim="true"
If this attribute is set,
leading and trailing whitespace is trimmed from the directive's expansion.
Literal text
Add a directive
<div z-…</div> that copies the enclosed text as-is
without interpreting or expanding any contained directives.
(A directive like this would be needed when writing documentation for the template expander.)
Including other files
Add a directive
<div z- that includes another file in the file being processed.
Should included files be processed and the result copied into the including file, or should the text be copied in and then processed? What difference does it make to the way variables are evaluated?
HTML snippets
Add a directive
<div z-snippet="variableName">…</div> that saves some text in a variable
so that it can be displayed later.
For example:
<html>
  <body>
    <div z-snippet="prefix"><strong>Important:</strong></div>
    <p>Expect three items</p>
    <ul>
      <li z-loop="item:names">
        <span z-var="prefix"/><span z-var="item"/>
      </li>
    </ul>
  </body>
</html>
would print the word "Important:" in bold before each item in the list.
YAML headers
Modify the template expander to handle variables defined in a YAML header in the page being processed. For example, if the page is:
---
name: "Dorothy Johnson Vaughan"
---
<html>
  <body>
    <p><span z-var="name"/></p>
  </body>
</html>
then expanding it will create a paragraph containing the given name.
Expanding all files
Write a program
expand-all.js that takes two directory names as command-line arguments
and builds a website in the second directory by expanding all of the HTML files found in the first
or in sub-directories of the first.
Counting loops
Add a directive
<div z-index="indexName" z-limit="limitName">…</div>
that loops from zero to the value in the variable
limitName,
putting the current iteration index in
indexName.
Auxiliary functions
Modify
Expander so that it takes an extra argument
auxiliaries containing zero or more named functions:
const expander = new Expander(root, vars, { max: Math.max, trim: (x) => x.trim() })
Add a directive
<span z-call="functionName" z-args="varName,varName"/> that looks up a function in
auxiliaries and calls it with the given variables as arguments.
Tests for EC2 tags
Project description
EC2 Tag conditionals
This is a Python library and shell command that answers the question:
"Is this instance tagged with the given tag and have a given value"
It is designed to be run on AWS's EC2 instances.
It will always fail if it's not on AWS, so tags should only be tested for truthiness, not falseness.
As a Library
from ec2_tag_conditional import InstanceTags

tags = InstanceTags()

if tags['Env'] == 'prod':
    do_prod_thing()
else:
    do_other_thing()
As a command line script
> instance-tags "Env=prod"
> echo $?
0
> instance-tags "Madeup=NotThere"
> echo $?
1
> instance-tags "Env=prod" && do_prod_thing
Example use cases
This code was written with the following use case in mind:
You have
n servers in an auto scaling group, launched from a custom
AMI (golden image). The nature of the application running on the
servers is that, for some functions to work (backup, reporting),
a given set of tasks should only be run by one server.
This server is called a 'controller'. The script that created the ASG
also tags (in the AWS metadata) one (and only one) of the servers
with
controller=True.
When the AMI is baked, the images don't need to know if they are a controller or not, as cron tasks can be written like:
instance-tags "controller=True" && do_controller_only
Or for controllers in production (rather than dev or staging environments):
instance-tags "controller=True" && instance-tags "Env=prod" && do_controller_only
Because the exit code of the
instance-tags script is 1 if the tag
with the given value isn't found on the instance, the script won't
run on any server that isn't an EC2 instance with the given values.
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#26215 closed Bug (fixed)
FloatRangeField/IntegerRangeField with None as a range boundary doesn't round trip in serialization
Description
- FloatRangeField db data == None
- ./manage dumpdata
- ./manage loaddata
Conversion problem: the serialized string u'None' cannot be converted back to a float.
Traceback:
File "/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 1803, in to_python params={'value': value},
Suggested fix:
def to_python(self, value):
    if value == u'None':
        return None
    elif value is None:
        return value
    try:
        return float(value)
    except (TypeError, ValueError):
        raise exceptions.ValidationError(
            self.error_messages['invalid'],
            code='invalid',
            params={'value': value},
        )
And IntegerField too
Source data dump example:
<field name="rate" type="FloatRangeField">{"upper": "None", "lower": "1.45", "bounds": "[)"}</field>
Attachments (2)
Change History (13)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
Changed 4 years ago by
comment:3 follow-up: 4 Changed 4 years ago by
comment:4 Changed 4 years ago by
I tried to reproduce with the attached test but it works fine. Could you provide a failing test?
Sorry.
I haven't written a test so far.
I did everything manually via the console.
Your test uses the DateRange and DateTimeTZRange fields,
but I used FloatRangeField and IntegerRangeField with the XML format.
Please try these.
Changed 4 years ago by
comment:5 Changed 4 years ago by
I've understood the problem now and attached a test. Not sure if the proposed fix is correct.
90. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)462055 Oct 29, 2005 11:04 AM (in response to Ronald Rood)Hi ,
Thanks for your help, I was able to successfully install the database on my PowerBook.
In case I want to start and stop the database, any idea how I can do it? If I shut down the machine, the database services are going to stop. How do I restart it?
Let me know.
Thanks
91. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)Ronald Rood Oct 29, 2005 11:11 AM (in response to 462055)I have an example of an autostart script on my site. Pick it and see what you can do with it.
Ronald
92. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)428011 Oct 29, 2005 11:32 PM (in response to 462055)I'm not an oracle expert but this works for me.
Make sure ORACLE_HOME is set, then run:
$ORACLE_HOME/bin/lsnrctl start
$ORACLE_HOME/bin/sqlplus "connect / as sysdba"
SQL> startup
SQL> quit
$ORACLE_HOME/bin/sqlplus "connect / as sysdba"
SQL> shutdown
SQL> quit
$ORACLE_HOME/bin/lsnrctl stop
93. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)124751 Nov 9, 2005 12:29 PM (in response to 446412)Did you find a solution to this problem? I think I am experiencing the exact same issue (see thread: ORA-03113 error when running the Java stored proc demos
94. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)438268 Nov 29, 2005 5:09 PM (in response to 429074)Gee. This working fine for so long. So Installed the latest developer tools, then 10.4.3 upgrade, but now Oracle is crashing on startup with this in the crash log:
**********
Host Name: jeffsmachine-2
Date/Time: 2005-11-29 08:13:43.965 -0800
OS Version: 10.4.3 (Build 8F46)
Report Version: 3
Command: oracle
Path: /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/bin/oracle
Parent: sqlplus [245]
Version: ??? (???)
PID: 338
Thread: 0
Exception: EXC_BAD_ACCESS (0x0001)
Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x00000000
Thread 0 Crashed:
0 oracle 0x02815614 slrac + 236 (crt.c:355)
1 oracle 0x000dc92c ksedst + 1016 (crt.c:355)
2 oracle 0x000db1ec ksedmp + 764 (crt.c:355)
3 oracle 0x0046e124 ksfdmp + 28 (crt.c:355)
4 oracle 0x026f8a20 kgeriv + 224 (crt.c:355)
5 oracle 0x000d9b18 ksesic2 + 120 (crt.c:355)
6 oracle 0x0142f5a0 kturdb + 1144 (crt.c:355)
7 oracle 0x006470a4 kcoapl + 3000 (crt.c:355)
8 oracle 0x001cb3e4 kcbapl + 160 (crt.c:355)
9 oracle 0x0025edd4 kcrfw_redo_gen + 9612 (crt.c:355)
10 oracle 0x001c9968 kcbchg1 + 8216 (crt.c:355)
11 oracle 0x006bed80 ktuchg + 5372 (crt.c:355)
12 oracle 0x00687994 ktbchg2 + 236 (crt.c:355)
13 oracle 0x01d488a8 kdu_array_flush + 3264 (crt.c:355)
14 oracle 0x01d3a960 kdusru + 1208 (crt.c:355)
15 oracle 0x01698d1c kauupd + 540 (crt.c:355)
16 oracle 0x00ef8ca8 updrow + 3652 (crt.c:355)
17 oracle 0x0220a180 qerupRowProcedure + 104 (crt.c:355)
18 oracle 0x022096b0 qerupFetch + 1236 (crt.c:355)
19 oracle 0x00efc950 updaul + 4076 (crt.c:355)
20 oracle 0x00efe2e8 updThreePhaseExe + 268 (crt.c:355)
21 oracle 0x00efd93c updexe + 1100 (crt.c:355)
22 oracle 0x010350a0 opiexe + 9220 (crt.c:355)
23 oracle 0x0175cad8 opiall0 + 3760 (crt.c:355)
24 oracle 0x0176b9dc opikpr + 652 (crt.c:355)
25 oracle 0x001465c4 opiodr + 2332 (crt.c:355)
26 oracle 0x0014e6b0 rpidrus + 228 (crt.c:355)
27 oracle 0x027a2444 skgmstack + 320 (crt.c:355)
28 oracle 0x0014b888 rpidru + 148 (crt.c:355)
29 oracle 0x0014b2a8 rpiswu2 + 880 (crt.c:355)
30 oracle 0x00c66018 kprball + 1692 (crt.c:355)
31 oracle 0x01b2a990 kkblobu + 1200 (crt.c:355)
32 oracle 0x00e90f2c kdltnfy + 204 (crt.c:355)
33 oracle 0x004dab10 kscnfy + 292 (crt.c:355)
34 oracle 0x00afa2f8 adbdrv + 20332 (crt.c:355)
35 oracle 0x010357a4 opiexe + 11016 (crt.c:355)
36 oracle 0x0177134c opiosq0 + 7016 (crt.c:355)
37 oracle 0x01241ccc kpooprx + 272 (crt.c:355)
38 oracle 0x0123fd58 kpoal8 + 776 (crt.c:355)
39 oracle 0x001465c4 opiodr + 2332 (crt.c:355)
40 oracle 0x028bb4e8 ttcpip + 4796 (crt.c:355)
41 oracle 0x00149320 opitsk + 2776 (crt.c:355)
42 oracle 0x007a6bd4 opiino + 1660 (crt.c:355)
43 oracle 0x001465c4 opiodr + 2332 (crt.c:355)
44 oracle 0x00006588 opidrv + 1028 (crt.c:355)
45 oracle 0x00007f40 sou2o + 144 (crt.c:355)
46 oracle 0x000024b8 main + 228 (crt.c:355)
47 oracle 0x00001c30 _start + 340 (crt.c:272)
48 oracle 0x00001ad8 start + 60
Thread 0 crashed with PPC Thread State 64:
srr0: 0x0000000002815614 srr1: 0x000000000000f030 vrsave: 0x0000000000000000
cr: 0x22442288 xer: 0x0000000000000004 lr: 0x00000000028155fc ctr: 0x0000000090001b80
r0: 0x0000000000000000 r1: 0x00000000bffe6930 r2: 0x0000000000000000 r3: 0x0000000000000000
r4: 0x0000000000000000 r5: 0x00000000028155fc r6: 0x0000000004a89aa8 r7: 0x00000000000000ff
r8: 0x0000000004a89a94 r9: 0x0000000000002000 r10: 0x00000000bffe9ef0 r11: 0x0000000004acfa44
r12: 0x0000000090001b80 r13: 0x0000000000000000 r14: 0x0000000000000000 r15: 0x0000000000000100
r16: 0x00000000bffe7730 r17: 0x0000000000000000 r18: 0x0000000000000000 r19: 0x00000000bffe7770
r20: 0x00000000bffe6ee0 r21: 0x00000000bffe7750 r22: 0x0000000000000000 r23: 0x0000000000000000
r24: 0x00000000bffe7770 r25: 0x00000000bffe9e60 r26: 0x0000000000000001 r27: 0x0000000000000033
r28: 0x0000000000000006 r29: 0x0000000000000000 r30: 0x0000000000000000 r31: 0x0000000002815534
Binary Images Description:
0x1000 - 0x4a76fff oracle /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/bin/oracle
0x4eb3000 - 0x4eb3fff libodmd10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libodmd10.dylib
0x4eb6000 - 0x4eb7fff libskgxp10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libskgxp10.dylib
0x4eba000 - 0x4ebafff libskgxn2.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libskgxn2.dylib
0x4ebd000 - 0x4eeafff libocr10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libocr10.dylib
0x4ef1000 - 0x4f12fff libocrb10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libocrb10.dylib
0x4f18000 - 0x4f1cfff libocrutl10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libocrutl10.dylib
0x4f2c000 - 0x4f35fff libdbcfg10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libdbcfg10.dylib
0x5a05000 - 0x5ac2fff libhasgen10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libhasgen10.dylib
0x5b81000 - 0x61c0fff libjox10.dylib /Volumes/JAL_Drive/app/oracle/product/10.1.0/db_1/lib/libjox10.dylib
0x8fe00000 - 0x8fe54fff dyld 44.2 /usr/lib/dyld
0x90000000 - 0x901b3fff libSystem.B.dylib /usr/lib/libSystem.B.dylib
0x9020b000 - 0x9020ffff libmathCommon.A.dylib /usr/lib/system/libmathCommon.A.dylib
Now, this is the same message I received way back before someone figured out the relinking problem. So, I switched to gcc 3.3 and relinked everything but it hasn't helped. Does anyone have any suggestions?
I am running into the OC4J problem, but that is less of a concern at this point than getting the database actually running.
95. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)438268 Nov 30, 2005 4:19 PM (in response to 429074)Hadn't checked my alert log. I hosed another database. It was time for a cleanout anyways. I needed to get rid of one tablespace as I didn't need it anymore.
Need to setup RMAN stuff soon :)
96. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)462055 Dec 21, 2005 8:31 PM (in response to 428011)amit:/Volumes/u01/app/oracle/product/10.1.0/db_7/bin oracle$ $ORACLE_HOME/bin/lsnrctl start
LSNRCTL for MacOS X Server: Version 10.1.0.3.0 - Production on 21-DEC-2005 12:31:07
Starting /Volumes/u01/app/oracle/product/10.1.0/db_7/bin/tnslsnr: please wait...
TNS-12537: TNS:connection closed
TNS-12560: TNS:protocol adapter error
TNS-00507: Connection closed
MacOS X Server Error: 29: Illegal seek
I keep getting the following error message.
Any idea how I can start the DB? It is successfully installed, but I'm not sure why I can't start it.
97. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)485433 Jan 25, 2006 1:55 AM (in response to 371635)Has any found a solution for this problem.... missing kumainglobals on Tiger. I'm an online student in database apps at CPCC and need to install Oracle on my Mac. I have installed 9i (recommended by non-Mac professor) and get the same error msg. when I try to log into the database. Any help suggestions would be appreciated.
this is my sequence....
john-minters-computer:~ oracle$ sqlplus "/ as sysdba"
SQL*Plus: Release 9.2.0.1.0 - Developer's Release on Fri Jan 20 14:45:44 2006
dyld: Symbol not found: kumainglobals
Referenced from: /Users/oracle/9iR2/orahome/lib/libcommon9.dylib
Expected in: flat namespace
ERROR:
ORA-12547: TNS:lost contact
98. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)Ronald Rood Jan 25, 2006 5:08 AM (in response to 485433)Forget about 9i on tiger.
10gR1 works, but after installation of the software you have some extra work. Just follow the instructions.
Ronald.
99. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)485457 Jan 25, 2006 5:37 AM (in response to Ronald Rood)Thanks that fix work (moving the lib).
Just about to try a RAC install; has anyone had any luck with RAC on Tiger?
100. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)Ronald Rood Jan 25, 2006 6:13 AM (in response to 485457)All RAC's on mac I know about are on 10.3.9. You can always try to play with it but supported is only macos 10.3.6.
regards,
Ronald
101. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)371635 Jan 25, 2006 6:22 AM (in response to 429074)... and no hint for oracle on Mac OS X "Tiger" (PPC) nor Mac OS X "Tiger" (Intel) ... Oracle 10iR2 ist projected and planned, but ...
102. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)Ronald Rood Jan 25, 2006 6:52 AM (in response to 371635)Yes, I really wonder what oracle is going to do. Since 10gR1 a few details changed
1) 10.3 -> 10.4 : lots of API changes
2) PPC -> Intel
3) Big Endian -> Little Endian
4) renewed Sun/Oracle aliance
Also some questions like will the servers also be included in the intel transition ?
If so, when ?
Given the quality of the 10gR1 on such a new platform I certainly hope it gets a follow-up. Oracle did a very good job in the porting effort. It would be a pity if they throw that away. As always it is the money that rules. If there are enough licences sold the mac will stay. Question is: are there enough oracle users on mac. Oracle has stopped development on more platforms in the past ...
Ronald.
103. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)485457 Jan 26, 2006 8:14 AM (in response to Ronald Rood)OK I know it is supported but installed RAC(10.1.0.3) on 10.4(tiger)
all seems OK until I use ASM.
When shutting down I get
Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> shutdown
and on start-up
SQL> startup
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
SQL>
Not a biggie as it's not supported; just trying to get ahead of the game.
104. Re: Oracle 10g DB on Mac OS X 10.4 (Tiger)491839 Feb 18, 2006 1:31 PM (in response to 443830)There is one vital peace of information missing.
Before you cd to $ORACLE_HOME/lib, do:
. /usr/local/bin/oraenv
ORACLE_SID = [] ? <the name you entered in the database field during installation>
Cannot locate ORACLE_HOME. # You can ignore this but, you'll get the next question
ORACLE_HOME = [] ? <enter the location were you installed oracle, the bin and lib directories are here>
You may get some more messages, but the point is that this will set your ORACLE_HOME and PATH environment correctly. | https://community.oracle.com/thread/301951?start=90&tstart=0 | CC-MAIN-2017-17 | refinedweb | 1,973 | 69.68 |
The Essence of Google Dart: Building Applications, Snapshots, Isolates
Google Dart's snapshot feature brings fast startup in the spirit of Smalltalk images, without some of the problems images bring. Isolates keep code single threaded with shared-nothing, message passing concurrency like Javascript's WebWorkers or Erlang's processes. Some language features and decisions allow for scalable, modular development. Dart can be compiled to plain Javascript with the DartC compiler or executed on the Dart VM.
InfoQ takes a look at the most interesting aspects of Dart for application development, with a focus on the Dart VM and some of the notable language features.
Dart is an Application Language: Snapshots And Initialization
Is an application's startup time really relevant? How often a day do users restart their IDE or word processor? With the rise of memory constrained mobile devices, application startup happens a lot; the Out Of Memory (OOM) killer process is very trigger happy on mobile OSes and will kill suspended applications without hesitation. iOS's multitasking model and the prominent physical home button have also shortened the average life span of mobile apps. Before iOS 4, pressing the Home button always killed the running application; with iOS 4 the situation has become a bit more complicated, but applications still have to be prepared to die at any time, be it at the hand of the user or the OOM process.
This behavior won't stay confined to mobile OSes. "Sudden Termination" and "Automatic Termination" are application properties introduced in recent OS X versions that declare an application can handle being killed at any point (eg. when the available memory is low) and then restarted, all transparent to the user.
Slow startup has plagued Java GUI applications since Java 1.0. Booting up a large Java application is a huge amount of work: thousands of classes need to be read, parsed, loaded and linked; before Java 1.6, that process included generating the stack map of methods for bytecode verification. And once classes are loaded, they still need to be initialized, which includes running static initializers.
That's a whole lot of work for a modern Java GUI application - just to show an initial GUI. The introduction of a SplashScreen API in Java 6 shows that it's a problem that hasn't been solved, and that's affecting developers and their users.
Snapshots
Dart addresses this with the heap snapshot feature, which writes the heap of a running application (its objects and code) to disk so that it can later be read back into memory far more cheaply than re-parsing and re-initializing everything.
The snapshot facility is also used to serialize object graphs sent between Isolates in the Dart VM.
In the initial tech preview of Dart, there doesn't seem to be a Dart language API for initiating a snapshot, although there shouldn't be a fundamental reason for that.
Technical Details of Snapshots
The Dart team put a lot of effort into the snapshot format. First off, it can be moved between machines, whether they're 32 bit, 64 bit or else. The format's also made to be quick to read into memory with a focus on minimizing extra work like pointer fixups.
For details see runtime/vm/snapshot.cc, and runtime/vm/snapshot_test.cc for some uses of the Snapshot system, ie. writing out full snapshots, reading them back in, starting Isolates from snapshot, etc.
Snapshots vs Smalltalk Images
Smalltalk's images are not universally popular; Gilad Bracha wrote about the problems of Smalltalk images in practice. Smalltalk development usually takes place in an image which is then stripped of unused code and frozen for deployment. Dart's snapshots are different because they're optional and need to be generated by loading up an application and then taking a snapshot. Dart's lack of dynamic code evaluation and other code loading features can allow the stripping process to be more thorough.
Dart's snapshots aren't currently supported in code compiled to Javascript with DartC.
Currently Snapshots are used in message passing between Isolates; objects sent across in messages are serialized using
SnapshotWriter and read in on the other side.
In any case, the snapshot facility is in the Dart VM and tools, and as with many other features of Dart, it's up to the community to come up with uses for it.
Finally, a snapshot feature is already present in Google's V8, where it's used to improve startup speed by loading the Javascript standard library from a snapshot.
Initialization
Even without snapshots, Dart has been designed to avoid initialization at startup if possible. Classes are declarative, ie. no code is executed to create them. Libraries can define
final top level elements, ie. functions and variables outside of a class, but they must be compile time constants (see section 10.1 in the language spec).
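As a small sketch of what this allows (written in the 2011 tech-preview syntax; the constant names are made up for illustration), a library can declare top-level finals, but only ones the compiler can evaluate without running any code:

```dart
// Top-level final elements must be compile-time constants,
// so nothing has to execute when the library is loaded.
final num kPi = 3.14159;
final String kGreeting = 'Hello, Dart';

// Something like this would NOT be allowed as a top-level final,
// because computing the value would require running code at startup:
// final Date kStarted = new Date.now();

main() {
  print('${kGreeting}: ${kPi}');
}
```

This restriction is what lets the VM load a library without executing initializer code first.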
Compare this to static initializers in Java or languages that rely on various metaprogramming methods at startup to generate data structures, object systems or other machinery. Dart is optimized for applications that start up quickly.
Dart doesn't come with a Reflection mechanism at the moment, although one based on Mirrors (PDF) is supposed to come to the language in the near future, possibly with the ability to construct code using an API and load it in a new Isolate, bringing metaprogramming to Dart.
The Units of Concurrency, Security and the Application: Isolates
Concurrency
The basic unit of concurrency in Dart is the Isolate. Every Isolate is single threaded. In order to do work in the background or use multiple cores or CPUs, it's necessary to launch a new Isolate.
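In the 2011 tech preview, spawning an Isolate looked roughly like the sketch below: a class extends Isolate, its main() registers a handler on the Isolate's port, and the spawning code sends messages to the port it gets back. The names and details follow the preview API as best as can be reconstructed, so treat this as illustrative rather than definitive:

```dart
class Worker extends Isolate {
  void main() {
    // Each Isolate is single threaded; it reacts to messages
    // arriving on its port.
    port.receive((message, SendPort replyTo) {
      if (replyTo != null) replyTo.send('processed: ${message}');
    });
  }
}

main() {
  // spawn() completes asynchronously with the new Isolate's SendPort.
  new Worker().spawn().then((SendPort port) {
    // Fire and forget; the second argument is an optional reply port.
    port.send('some work', null);
  });
}
```

Note that the message is copied between the Isolates' heaps; nothing is shared.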
Google V8 has also recently gained Isolates, although it's a feature mostly interesting for embedders of V8 and for implementing cheaper Web Workers by launching them in the same OS process; the feature is not exposed to Javascript code.
The model of having multiple independent Isolates for concurrency is similar to Javascript or Erlang. Node.js also needs to use processes to make use of more than one CPU or core; a host of solutions for managing Node.js processes has popped up.
Other single or green threaded languages have similar process herding solutions. Ruby's Phusion Passenger is an example which also tried to fix the overhead problem when loading the same code in multiple processes: Phusion Passenger loads up a Rails application and then uses the OS'
fork call to quickly create multiple processes with the same program contents, thus avoiding parsing and initializing the same applications many times over. Dart's snapshot feature would be another way to solve the problem.
Reliability
The first tech preview of Dart uses one thread per Isolate, although other modes are being considered, ie. multiplexing multiple Isolates onto one thread or having Isolates run in different OS processes, which would also allow them to run on different machines.
Splitting up an application into independent processes (or Isolates) helps with reliability: if one Isolate crashes, other Isolates are unaffected and a clean restart of the Isolate is possible. Erlang's model of supervision trees is helpful with this model in that it allows to monitor the life and death of groups of processes and write custom policies to handle their death.
This interview with the creators of Akka and Erjang gives a good overview of the advantages of Erlang's model.
Security
Untrusted code can be run in its own Isolate. All communication with it must take place over message passing, which will be enhanced with a capability-style mechanism that permits to restrict who can talk to which ports in Isolates. An Isolate must be given a port to send messages to; without one, it can't do anything.
Compartmentalization of Memory
Another benefit of splitting an application into Isolates: each Isolate's heap is standalone; all objects in it clearly belong only to it, unlike the objects in a shared memory environment. The key benefit: if an Isolate was launched for a task and it's done - the whole Isolate can be deallocated in one go; no GC run necessary.
What's more: if an application is split into Isolates, that means the application's used memory is split into smaller heaps, ie. smaller than the total amount of memory the application uses. Each heap is governed by its own GC with the effect that a full GC run in one Isolate only stops the world in that Isolate, the other Isolates won't notice. Good news for GUI apps as well as server applications that are sensitive to GC pauses: time sensitive components are unaffected by one or a few messy, garbage spewing Isolates that will keep the GC busy. Hence, having one heap per isolate improves modularity: each Isolate controls its own GC pause behavior and is not affected by some other component.
While GCs in Java and .NET have been improving a lot, GC pauses are still an important issue for GUI applications and time sensitive server applications. Solutions like Azul's GC have managed to make pauses managable or even nearly disappear, but they need either special hardware or access to low level OS infrastructure, as in their x86-based Zing. Realtime GCs do exist, but they also slow down execution in exchange for predictable pauses.
Splitting up the memory into separate heaps means that GC implementations can be simple yet still be fast enough. Of course, it all depends on the developer - to benefit from these characteristics, an application must be split into multiple Isolates.
No more Dependency Injection Ceremony: Interfaces and Factories in Dart
"Program To Interface" is common advice, in practice it gets a bit harder as someone has to call
new with an actual class name. In the Java world this issue has led to the creation of Dependency Injection (DI) frameworks. Adopting a DI framework first means to inject a dependency on the specific DI framework into a project.
What problem does DI solve? Calling
new on a specific class hardcodes the class, creating problems for testing and limiting the flexibility of the code. After all, if all code is written to an interface, the specific implementation shouldn't matter and someone should choose the right implementation for a use case.
Dart now ships with one DI solution, making it unnecessary to choose from a host of different options. It does so in the language by linking an interface to code that can instantiate an object for it. All flexibility that's required can be hidden in that Factory, whether it's deciding which class to instantiate or whether to allocate a new object at all and just return a cached object.
The interface refers to a factory by name, which can be provided by a library; different implementations of a factory can live in their own libraries and it's up to the developer to include the best implementation.
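A sketch of how this looked in the tech-preview syntax (the interface/factory clause below follows the preview spec as the author recalls it, and Shape/ShapeImpl are invented names, so treat the details as an assumption):

```dart
// The interface names a factory class that knows how to
// construct concrete instances on its behalf.
interface Shape factory ShapeImpl {
  Shape(num size);
  num area();
}

class ShapeImpl implements Shape {
  num size;
  ShapeImpl(this.size);
  num area() => size * size;
}

main() {
  // Callers program against the interface; "new Shape(...)" is legal
  // because the factory decides which class actually gets instantiated.
  Shape s = new Shape(4);
  print(s.area());
}
```

Swapping in a different implementation then only requires importing a library that provides a different factory, with no changes at the call sites.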
Language
Google Dart is a new language, but it's designed to look familiar to many developers. The language resembles curly braced languages, and comes with OOP that focuses on interfaces. Dart's OOP system does come with classes, unlike some other recent languages like Clojure (which does OOP with a combo of protocols and types) or Google Go (Go has interfaces, but no classes). One advantage of having one OOP system built into the language is not getting a new OOP system and paradigm with every other used library, like in Javascript.
For details see the official Dart language specification, or for a quick overview see 'Idiomatic Dart' on the Dart website.
Modularity
Namespacing in Dart is done with the library mechanism, which is different to Java where the class names are the only way to namespace things like methods or variables. One consequence: libraries in Dart can contain top level elements other than classes, ie. variables and functions outside of classes.
The
#import("foo.dart", "foo") will import the library and make all its elements available with the prefix "foo.".
Optional Typing
The key word in "Optional Typing" is "Optional". The developer can add type annotations to the code, but these annotations have no impact at all on the behavior of the code. As a matter of fact, it's possible to specify nonsensical types - the code will still execute fine.
Having the types in the code allows for the various type checkers to do their work. The editor shipped with Dart has a type checker and can highlight type errors as warnings. Dart also comes with a checked mode in which the type annotations are used to check the code and violations will be reported as warnings or errors.
The optional type annotations allow to actually have type information in the code where it's useful for documentation purposes; no more hunting for documentation that explains that an argument must implement a certain list of methods in order to be considered an acceptable duck. The presence of interfaces, ie. a named set of methods with method signatures, and optional type annotations allows to document APIs.
Crucially, the language is always dynamic and arguments can be specified as dynamic, ie. of type
Dynamic.
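A small sketch of what "optional" means in practice (illustrative only, tech-preview syntax):

```dart
// Fully annotated and completely untyped versions behave identically.
int add(int a, int b) => a + b;
addUntyped(a, b) => a + b;

main() {
  print(add(1, 2));         // 3
  print(addUntyped(1, 2));  // 3

  // A nonsensical annotation: in production mode this still runs.
  // The static checker flags it, and checked mode reports it,
  // but the annotation never changes what the program does.
  String s = 42;
  print(s);
}
```

The annotations feed the tools and the reader; the running program ignores them.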
Runtime Extensibility and Mutability - or lack thereof
Let's get it out of the way: No Monkeypatching. No
eval. No Reflection at the moment, although a Mirror-based system is in the works (for details see this paper introducing Mirrors). The plan seems to be to limit construction of new code to a new Isolate, not the currently running process.
noSuchMethod
Dart allows some dynamic magic with the
noSuchMethod feature, similar to Ruby's
method_missing, Smalltalk's DNU or other similar language features. Future versions of Javascript are also supposed to have a similar feature in the form of Dynamic Proxies, which are slowly making their way into current Javascript VMs, such as V8.
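In the tech preview, noSuchMethod received the missing method's name and its arguments, roughly as in this sketch (the exact signature changed in later versions of Dart, so take the parameter types as an assumption):

```dart
class RecordingProxy {
  List log;
  RecordingProxy() : log = [];

  // Called whenever a non-existent method is invoked on this object.
  noSuchMethod(String name, List args) {
    log.add(name);
    print('intercepted ${name} with ${args}');
  }
}

main() {
  var proxy = new RecordingProxy();
  // There is no doSomething() member, so the call is
  // routed to noSuchMethod instead of failing outright.
  proxy.doSomething(1, 2);
}
```

This is enough to build proxies, stubs and simple mock objects without open classes or eval.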
Closed Classes, no Eval
Languages like Ruby allow to change classes, even at runtime, which is referred to as open classes. Not having this feature helps with performance: all members are known at compile time, which allows to analyze code and remove functionality that's never referenced. Refer to the 'Criticisms' section below to see the current status and what current solutions exist in other languages.
Future Language Features
An async/await-style extension is being considered to facilitate writing I/O code. Many of the I/O APIs in Dart are async, and hence some support with making that easier is welcome. The reason to stay away from adding features like Coroutines, Fibers and their variants, is to avoid adding synchronization features. Once coroutines are in the system, it's possible to schedule and interleave their execution and in order to write correct code, it's necessary to synchronize shared resources. Hence the focus on single threading; concurrency is done with Isolates: explicit communication, no sharing, Isolates can be locked away etc.
Criticisms
Nothing riles up developers more than a new programming language. A quick look at some common criticisms.
DartC compiles Dart to huge Javascript files
A link's been going around showing a Dart "Hello World" application that's compiled into thousands of lines of Javascript code. The short answer is: adding optimizations such as tree shaking, ie. removing unused functions, is on the Google Dart and DartC team's ToDo list.
Certain characteristics of Dart make these optimizations possible, in particular the closed classes which means that all functions are known at compile time. The lack of
eval means that at compile time the compiler knows which functions are used and, more importantly, which are not. The latter can be safely removed from the output.
Users of Google's Closure tools will know this as the Advanced Compilation feature. Closure brings a class system to Javascript and allows developers to annotate classes with information. In the advanced mode, the Closure compiler assumes the developer conforms to certain rules, and with that knowledge can assume that if a function isn't explicitly referenced in code, it can be dumped. Obviously, if the programmer breaks these rules, uses eval or other features, the code will break.
Google Dart doesn't need to rely on the programmer to stick to the rules, the language's restrictions provide the necessary guarantees to the compiler.
Another example of a language that makes use of Google Closure's Advanced Compilation is ClojureScript (that's Clojure with a 'j'). ClojureScript is also meant to be an application language and lacks eval or other dynamic code loading features. It's compiled to Javascript that complies with Google Closure's Advanced Compilation tools in order to allow the compiler to remove unused library functions.
Why not use static typing for runtime optimization?
Why is the typing optional, and when it's present, why isn't it used to improve the generated code? Surely, knowing that something's an int must help optimize the generated code.
As it turns out, the team behind Dart knows about these ideas; they've done a VM or two in the past: Google V8 and Oracle's HotSpot are just two examples.
Using static type information in Dart doesn't help with the runtime code for several reasons. One is: the types the developer specifies have no impact on the semantics at all and, as a matter of fact, they can be totally incorrect. If that's the case, the program will run fine, although you'll get warnings from the type checker. What's more, since the given types can be nonsensical, the VM cannot use them for optimizations as is, because they're unreliable. Generating, say, int-specific code just because the developer specified it is wrong if the actual objects at runtime are really Strings.
The static type system is an aid for tooling and documentation, but it has nothing to do with the executed code.
There is another reason why the static types aren't very helpful in generating optimized code: Dart is interface-based. Operators for, say, ints are actual method calls - method calls on an interface. Dart isn't kidding; e.g. int is actually an interface, not a class.
Calling interface methods means resolving them at runtime, based on the actual object and its class. Concepts like (polymorphic) inline caches at callsites can help remove the overhead of method lookup. Strongtalk and its direct descendant HotSpot use feedback-based optimization to figure out what code is actually executed and generate optimized code. V8 has also gained these optimizations recently in the form of Crankshaft.
Where's my favorite language feature?
Google has released Dart at a rather early stage. It's easy to get fooled by the language spec, IDE, VM, DartC etc.: the clear message from the Dart team is that now is the time to try Dart and provide feedback. A lot of features are already planned but haven't been finished or implemented yet; Reflection and Mixins are but a few of the ideas that have been mentioned as potential future features.
If a feature isn't in the Dart repository or language spec, now is the time to provide feedback and suggest fixes or changes to the language and runtime environment.
Wrapping Up
A lot of work has been done on Dart: the language spec, the Dart VM, the DartC compiler to compile Dart to plain Javascript, the editor that's based on SWT and some Eclipse bundles, etc.
However, the initial release of Dart is a technology preview and the language, APIs and tools are very much a work in progress. Now is the time to give the Dart team feedback and actually have a chance to have an impact on the language. The language will change, some of the proposed and planned changes were mentioned in this article.
Some have already started experimenting with Dart, for instance a Java port has been started with the JDart project, which makes heavy use of Java 7's invokedynamic features.
In this article, we focused on the language and features of the Dart VM, but with DartC it's possible to compile Dart code to plain Javascript; some of the samples shown at the introduction, including the one running on the iPad, were actually Dart applications compiled to Javascript, running in standard browsers.
While the initial development of Google Dart was done in secret, the whole project, source, tools, ticket system, etc. is now out in the open. It remains to be seen if and where Dart will be adopted. As mentioned, the Dart VM comes with features that will make Dart appealing to both client developers as well as server developers.
Links:
- Dart Website
- Google Code Project for Google Dart
- Instructions to get the Google Dart source. Note: some people have complained about having to enter a Google password in order to fetch the necessary tools and sources. As the linked page says, this will only happen if you access the sources as a committer via https; the code can be downloaded anonymously.
- Try Dart allows to write and run Dart code in the browser.
- Google Dart Mailing List
About the Author
Werner Schuster (murphee) sometimes writes software, sometimes writes about software.
Nice Article by anjan bacchu
I'll wait for an easy windows downloadable Dart Install.
With growing data velocity the data size easily outgrows the storage limit of a machine. A solution would be to store the data across a network of machines. Such filesystems are called distributed filesystems. Since data is stored across a network all the complications of a network come in.
This is where Hadoop comes in. It provides one of the most reliable filesystems. HDFS (Hadoop Distributed File System) is a uniquely designed filesystem that provides storage for extremely large files with a streaming data access pattern, and it runs on commodity hardware. Let's elaborate on these terms:
- Extremely large files: Here we are talking about data in the range of petabytes (1000 TB).
- Streaming Data Access Pattern: HDFS is designed on the principle of write-once, read-many-times. Once data is written, large portions of the dataset can be processed any number of times.
- Commodity hardware: Hardware that is inexpensive and easily available in the market. This is one of the features that especially distinguishes HDFS from other file systems.
Nodes: Master-slave nodes typically form the HDFS cluster.
- MasterNode:
- Manages all the slave nodes and assigns work to them.
- It executes filesystem namespace operations like opening, closing, renaming files and directories.
- It should be deployed on reliable hardware with a high-end configuration, not on commodity hardware.
- SlaveNode:
- Actual worker nodes, who do the actual work like reading, writing, processing etc.
- They also perform creation, deletion, and replication upon instruction from the master.
- They can be deployed on commodity hardware.
HDFS daemons: Daemons are the processes running in the background.
- NameNode:
- Runs on the master node.
- Stores metadata (data about data) like file paths, the number of blocks, block IDs, etc.
- Requires a high amount of RAM.
- Stores metadata in RAM for fast retrieval, i.e. to reduce seek time; a persistent copy of it is also kept on disk.
- DataNode:
- Runs on slave nodes.
- Requires a large amount of disk space, as the data is actually stored here.
Data storage in HDFS: Now let’s see how the data is stored in a distributed manner.
Let's assume that a 100 TB file is inserted. The master node (NameNode) will first divide the file into blocks of 10 TB (the default block size is 128 MB in Hadoop 2.x and above). Then these blocks are stored across different DataNodes (slave nodes). DataNodes replicate the blocks among themselves, and the information about which blocks they contain is sent to the master. The default replication factor is 3, meaning that for each block 3 replicas are created (including itself). In hdfs-site.xml we can increase or decrease the replication factor, i.e. we can edit its configuration there.
Note: The master node has a record of everything; it knows the location and details of each and every data node and the blocks they contain, i.e. nothing is done without the permission of the master node.
Why divide the file into blocks?
Answer: Let’s assume that we don’t divide, now it’s very difficult to store a 100 TB file on a single machine. Even if we store, then each read and write operation on that whole file is going to take very high seek time. But if we have multiple blocks of size 128MB then its become easy to perform various read and write operations on it compared to doing it on a whole file at once. So we divide the file to have faster data access i.e. reduce seek time.
Why replicate the blocks in data nodes while storing?
Answer: Let’s assume we don’t replicate and only one yellow block is present on datanode D1. Now if the data node D1 crashes we will lose the block and which will make the overall data inconsistent and faulty. So we replicate the blocks to achieve fault-tolerence.
Terms related to HDFS:
- Heartbeat: It is the signal that a DataNode continuously sends to the NameNode. If the NameNode doesn't receive a heartbeat from a DataNode, it will consider it dead.
- Balancing: If a DataNode crashes, the blocks present on it will be gone too, and those blocks will be under-replicated compared to the remaining blocks. Here the master node (NameNode) will signal the DataNodes containing replicas of those lost blocks to replicate them, so that the overall distribution of blocks is balanced.
- Replication: It is done by the DataNodes.
Note: No two replicas of the same block are present on the same datanode.
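The heartbeat and balancing behavior described above can be sketched as a toy model (the names and the timeout value here are made up for illustration; real HDFS logic is far more involved):

```python
HEARTBEAT_TIMEOUT = 10  # seconds; illustrative, the real timeout is configurable

def dead_datanodes(last_heartbeat, now):
    """NameNode's view: a DataNode silent past the timeout is considered dead."""
    return {dn for dn, t in last_heartbeat.items() if now - t > HEARTBEAT_TIMEOUT}

def under_replicated(block_locations, dead, target=3):
    """Blocks whose live replica count fell below the replication factor."""
    return {blk for blk, nodes in block_locations.items()
            if len(set(nodes) - dead) < target}

# D3 last reported at t=88, so at t=100 it has missed the timeout.
last_heartbeat = {"D1": 100, "D2": 95, "D3": 88, "D4": 99}
block_locations = {
    "blk_1": ["D1", "D2", "D3"],  # loses a replica when D3 dies
    "blk_2": ["D1", "D2", "D4"],  # still has 3 live replicas
}

dead = dead_datanodes(last_heartbeat, now=100)
needs_copies = under_replicated(block_locations, dead)
print(dead)          # {'D3'}
print(needs_copies)  # {'blk_1'}: the NameNode asks surviving replicas to re-replicate it
```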
Features:
- Distributed data storage.
- Blocks reduce seek time.
- The data is highly available as the same block is present at multiple datanodes.
- Even if multiple datanodes are down we can still do our work, thus making it highly reliable.
- High fault tolerance.
Limitations: Though HDFS provides many features, there are some areas where it doesn't work well.
- Low-latency data access: Applications that require low-latency access to data, i.e. in the range of milliseconds, will not work well with HDFS, because HDFS is designed keeping in mind that we need high throughput of data, even at the cost of latency.
- Small file problem: Having lots of small files will result in lots of seeks and lots of movement from one DataNode to another to retrieve each small file; this whole process is a very inefficient data access pattern.
Implement a const iterator over an unbounded set. More...
#include <Unbounded_Set_Ex.h>
Implement a const iterator over an unbounded set.
Move forward by one element in the set. Returns 0 when all the items in the set have been seen, else 1.
Returns 1 when all items have been seen, else 0.
Dump the state of an object.
Move to the first element in the set. Returns 0 if the set is empty, else 1.
Pass back the next_item that hasn't been seen in the Set.
Returns a reference to the internal element
this is pointing to.
Prefix advance.
Postfix advance.
Check if two iterators point to the same position.
Declare the dynamic allocation hooks.
Pointer to the current node in the iteration.
Pointer to the set we're iterating over. | http://www.dre.vanderbilt.edu/Doxygen/5.7.7/html/ace/a00774.html | CC-MAIN-2013-20 | refinedweb | 132 | 70.6 |
My previous post talked about the ChaCha random number generator and how Google is using it in a stream cipher for encryption on low-end devices. This post talks about how to implement ChaCha in pure Python.
First of all, the only reason to implement ChaCha in pure Python is to play with it. It would be more natural and more efficient to implement ChaCha in C.
RFC 8439 gives detailed, language-neutral directions for how to implement ChaCha, including test cases for intermediate results. At its core is the function that does a “quarter round” operation on four unsigned integers. This function depends on three operations:
- addition mod 2^32, denoted +
- bitwise XOR, denoted ^, and
- bit rotation, denoted <<<= n.
In C, the += operator on unsigned integers would do what the RFC denotes by +=, but in Python, working with (signed) integers, we need to explicitly take remainders mod 2^32. The Python bitwise-XOR operator ^ can be used directly. We'll write a function roll that corresponds to <<<=.
So the following line of pseudocode from the RFC
a += b; d ^= a; d <<<= 16;
becomes
a = (a+b) % 2**32; d = roll(d^a, 16)
in Python. One way to implement roll would be to use the bitstring library:
from bitstring import Bits

def roll(x, n):
    bits = Bits(uint=x, length=32)
    return (bits[n:] + bits[:n]).uint
Another approach, a little harder to understand but not needing an external library, would be
def roll2(x, n):
    return (x << n) % (2 << 31) + (x >> (32-n))
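As a quick sanity check (mine, not from the original post), we can confirm that roll2 really does a 32-bit left rotation on a couple of easy-to-verify values:

```python
def roll2(x, n):
    return (x << n) % (2 << 31) + (x >> (32 - n))

# The top bit rotated left by one wraps around to the bottom.
assert roll2(0x80000000, 1) == 0x00000001
# Rotating by 16 swaps the two 16-bit halves.
assert roll2(0x12345678, 16) == 0x56781234
print("roll2 ok")
```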
So here’s an implementation of the ChaCha quarter round:
def quarter_round(a, b, c, d):
    a = (a+b) % 2**32; d = roll(d^a, 16)
    c = (c+d) % 2**32; b = roll(b^c, 12)
    a = (a+b) % 2**32; d = roll(d^a, 8)
    c = (c+d) % 2**32; b = roll(b^c, 7)
    return a, b, c, d
ChaCha has a state consisting of 16 unsigned integers. A “round” of ChaCha consists of four quarter rounds, operating on four of these integers at a time. All the details are in the RFC.
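To check the quarter round end to end, we can run it against the test vector in Section 2.1.1 of RFC 8439 (using the pure-Python rotation so the snippet needs no third-party library):

```python
def roll(x, n):
    # 32-bit left rotation, same as roll2 above
    return (x << n) % (2 << 31) + (x >> (32 - n))

def quarter_round(a, b, c, d):
    a = (a + b) % 2**32; d = roll(d ^ a, 16)
    c = (c + d) % 2**32; b = roll(b ^ c, 12)
    a = (a + b) % 2**32; d = roll(d ^ a, 8)
    c = (c + d) % 2**32; b = roll(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, Section 2.1.1
result = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
assert result == (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
print("quarter round matches RFC 8439")
```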
Incidentally, the inner workings of the BLAKE2 secure hash function are similar to those of ChaCha. | https://www.johndcook.com/blog/tag/python/page/2/ | CC-MAIN-2019-51 | refinedweb | 364 | 59.03 |
On Wed, Sep 7, 2011 at 8:33 AM, Barry Warsaw <barry at python.org> wrote:
> Still not disagreeing with you, but in some sense, you *can* use
> keywords as attributes today:

You can actually use arbitrary strings, such as "1234" and "not a legal
identifier" that way, so I see that behaviour as still being consistent.
Some implementations, including CPython, even let you use arbitrary
non-string objects via the attribute dict interface, but whether or not
that works has been explicitly deemed an implementation detail by Guido.

If you treat a namespace like a string-keyed dictionary, you can use any
string, even those that aren't valid identifiers. Treat it like an
actual namespace, though, and the identifier restrictions come into play
(including those disallowing the use of keywords). While it is indeed
another rule, it's still a fairly simple and consistent one, since the
identifier rules are there to allow consistent parsing without dedicated
delimiters.

That does give me an idea, though. Rather than allowing
keywords-as-attributes, it may make more sense to allow string literals
to stand in for identifiers without obeying the normal rules, permitting
things like:

    class Foo:
        normal = 1
        'class' = 'This is probably a terrible idea'
        '1234' = 'as is this'
        'or does it' = 'have some merit?'

    >>> Foo.normal
    1
    >>> Foo.'normal'
    1
    >>> Foo.'class'
    'This is probably a terrible idea'
    >>> Foo.'1234'
    'as is this'
    >>> Foo.'or does it'
    'have some merit?'

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
I was having Sunday brunch with a colleague a couple of weeks ago, and we started talking about tracing the execution of a program. I wanted to figure out which subroutines the program was calling and which subroutines those subroutines were calling, and so on. Most of you have probably been given the Big Ball of Mud that the previous developer left behind right after he burned his hand-written documentation and right before he walked out the door with all of his personal possessions (and maybe a Springline stapler) never to be seen again. Where do you start?
My first requirement is that I don't want to modify the code to see what's going on. Truthfully, I don't want to ever modify the code and I wish I had never seen it, but back here on planet Earth a lot of people have had this same problem and you can find a lot of tools to help you tear apart a program while leaving its source intact. Look in the Devel::* namespace. Some people have even used things like GraphViz to make pictures out of the results.
After using some of those tools, I figure out which subroutines I need to pay attention to. Instead of working over the program flow with a sledge hammer, I want a scalpel. Consider this little program which makes a subroutine call (and pretend its in the middle of a Big Ball of Mud). I want to watch what happens when I pass those arguments to that subroutine, and I want to see what it returns. I don't want to modify the subroutine though.
#!/usr/bin/perl

cant_touch_this( qw( Fred Barney Betty Wilma ) );

sub cant_touch_this { ...do stuff... }
The way that we already know involves a lot of code, and it's a lot of code that I have to repeat everywhere that I want to see what's going on (and yes, debuggers do exist, and I could set all sorts of breakpoints, but I'm going to pretend that doesn't exist since there is some other cool stuff coming up). I have one line to store the arguments ahead of the subroutine call so I can print them and then pass them to the subroutine, and then I have to save the result of the subroutine so I can print that afterwards. I could make this a bit shorter with some Perl golf, but things are already getting to look ugly.
my @args = qw( Fred Barney Betty Wilma );
print "Args are [@args]\n";

my $result = cant_touch_this( @args );
print "Result was $result\n";
One possibility is Damian Conway's Hook::LexWrap. I can define handlers that execute before and after the actual subroutine call, and those will work for every call, not just one (more on that coming up). I load the Hook::LexWrap module and use its wrap() function to tell it which subroutine I want to examine. Now my program is only a bit longer:
#!/usr/bin/perl
use Hook::LexWrap;

wrap 'cant_touch_this',
	pre  => sub { print "The arguments are [@_]\n" },
	post => sub { print "Result was [$_[-1]]\n" };

cant_touch_this( qw( Fred Barney Betty Wilma ) );

sub cant_touch_this { return 42; }
The Hook::LexWrap wrappers get the original argument list in their @_, but with an additional value at the end. That extra value is the return value. In the pre-wrapper the return value will be undefined (we wouldn't have to call the subroutine if we already knew the value!), and that element could have some value after the original subroutine runs. In my post-wrapper, I just want to see the result, so I only look at $_[-1].
However, when I run this, I don't see a result. Why not? I called cant_touch_this() in void context, so there is no return value.
The arguments are [Fred Barney Betty Wilma ]
Result was []
I only get the return value in the post wrapper when I would actually save the result. I have to call the routine in either scalar or list context.
#!/usr/bin/perl
use Hook::LexWrap;

wrap 'cant_touch_this',
	pre  => sub { print "The arguments are [@_]\n" },
	post => sub { print "Result was [$_[-1]]\n" };

my $result = cant_touch_this( qw( Fred Barney Betty Wilma ) );

sub cant_touch_this { return 42; }
Now I see the return value.
The arguments are [Fred Barney Betty Wilma ]
Result was [42]
There are other modules that do this, such as Hook::PreAndPost and Hook::WrapSub, but if Damian is writing a module, there must be some cool trick to it. In Hook::LexWrap, caller() keeps working.
#!/usr/bin/perl
use Hook::LexWrap;

wrap 'who_called_me',
	pre  => sub { print "pre caller is @{[caller]}\n" },
	post => sub { print "post caller is @{[caller]}\n" };

who_called_me();

sub who_called_me { print "caller is @{[caller]}\n" }
When I run this, I get the same output for all three print statements because all of the subroutines think they are being called from the same place: not only the same file and package, but they also think they come from the same line.
pre caller is main /Users/brian/Desktop/wrap-caller.pl 9
caller is main /Users/brian/Desktop/wrap-caller.pl 9
post caller is main /Users/brian/Desktop/wrap-caller.pl 9
Think about what has to actually happen behind the scenes to make this happen: we have to construct a brand new subroutine that gets the original argument list, calls the pre-subroutine, calls the original subroutine, and then calls the post-subroutine. Not only does this brand new subroutine have to do all that, it has to insert itself into the named slot where the original subroutine lives. Damian uses a lot of typeglob magic to move the original subroutine out of the way (more on this coming up), and more typeglob magic to move the replacement subroutine into its place.
Let me give you just a little taste of what is going on in there. Here's the bit from Hook::LexWrap that stores the original subroutine right before the module replaces it. Simple, right? Grab the subroutine definition out of the symbol table and store it in $original. The $typeglob is the first argument to wrap(), which is the subroutine name. You noticed the lack of strict at the top of the module, right? For extra credit, which of the three strict checks does this violate? It's okay to do this stuff if you know why you need to do it, but don't try this at home.
my $original = ref $typeglob eq 'CODE' && $typeglob
	|| *$typeglob{CODE}
	|| croak "Can't wrap non-existent subroutine ", $typeglob;
Now that the original subroutine is out of the way, we can make the new one. I'll spare you the details on the inside. If you have an hour (or a weekend) free, call up the source of Hook::LexWrap and take a look between these two braces.
$imposter = sub { ... };
Once we have the $imposter subroutine, we shove it into the place where the original subroutine used to live.
*{$typeglob} = $imposter;
It's actually not that much code if you care to peek under the hood. Still, with all that magic, the replacement subroutine is virtually invisible to caller(). It's like it's not there even though it is.
That's not enough though. As I've done it so far in this article, every instance of the target subroutine gets the new behavior. Check out the name of the module: it's got that "Lex" in it, and just like other things in Perl that start with those letters, I can limit the scope of this effect. In this example, I have to wrap the subroutine differently. Previously, I called wrap() in void context (meaning I did not use the result for anything, so wantarray returns undef), so Hook::LexWrap made the effect global. If I store the result of wrap() in a lexical variable, the effect disappears when that lexical variable goes out of scope. That is, I've effectively unwrapped the subroutine at the end of the naked block which I used to define the scope.
#!/usr/bin/perl
use Hook::LexWrap;

{
	my $lexical = wrap 'who_called_me',
		pre  => sub { print "pre caller is @{[caller]}\n" },
		post => sub { print "post caller is @{[caller]}\n" };

	who_called_me();
}

who_called_me();

sub who_called_me { print "caller is @{[caller]}\n" }
If Hook::LexWrap is the scalpel that I mentioned earlier, this lexical feature is arthroscopic surgery with miniature cameras. This is pretty close to what my colleague and I were talking about at brunch. This module can have limited scope by wrapping particular subroutines, and it can even have a shorter effect by working only within a lexical scope. I still have to add some code, and that's what I really wanted to avoid. Why can't I just turn this on from the command line and let it do its magic? Well, maybe I can, but you'll have to read about that in a future article.
TPJ | http://www.drdobbs.com/web-development/wrapping-subroutines/184416218 | CC-MAIN-2015-14 | refinedweb | 1,497 | 66.98 |
#include <standard.disclaimer>
bigreddog: Aeroparks - current special has it $5 a day for over 6 days - that's cheap anywhere!
Used it 4-5 times over the last few years, always good - even arriving back in Sunday evening at the same time as several Island/Aust flights.
nate: Best option is to not park at the airport, but investigate one of the many alternatives.
I often use Air NZ's parking. Not the cheapest but good friendly service and I've had my car cleaned a couple times - they do a great job and for a pretty reasonable price.
Regards,
Old3eyes | https://www.geekzone.co.nz/forums.asp?forumid=162&topicid=116640 | CC-MAIN-2020-45 | refinedweb | 101 | 69.11 |
# GraphQL
GraphQL is a fundamental part of Redwood. Having said that, you can get going without knowing anything about it, and can actually get quite far without ever having to read the docs. But to master Redwood, you'll need to have more than just a vague notion of what GraphQL is; you'll have to really grok it.
The good thing is that, besides taking care of the annoying stuff for you (namely, mapping your resolvers, which gets annoying fast if you do it yourself!), there's not many gotchas with GraphQL in Redwood. GraphQL is GraphQL. The only Redwood-specific thing you should really be aware of is resolver args.
Since there's two parts to GraphQL in Redwood, the client and the server, we've divided this doc up that way. By default, Redwood uses Apollo for both: Apollo Client for the client and Apollo Server for the server, though you can swap Apollo Client out for something else if you want. Apollo Server, not so much, but you really shouldn't have to do that unless you want to be on the bleeding edge of the GraphQL spec, in which case, why are you reading this doc anyway? Contribute a PR instead!
# Client-side
# RedwoodApolloProvider
By default, Redwood Apps come ready-to-query with the `RedwoodApolloProvider`. As you can tell from the name, this Provider wraps ApolloProvider. Omitting a few things, this is what you'll normally see in Redwood Apps:

```js
// web/src/App.js

import { RedwoodApolloProvider } from '@redwoodjs/web/apollo'

// ...

const App = () => (
  <RedwoodApolloProvider>
    <Routes />
  </RedwoodApolloProvider>
)

// ...
```
You can use Apollo's `useQuery` and `useMutation` hooks by importing them from `@redwoodjs/web`, though if you're using `useQuery`, we recommend that you use a Cell:
```js
// web/src/components/MutateButton.js

import { useMutation } from '@redwoodjs/web'

const MUTATION = `
  # your mutation...
`

const MutateButton = () => {
  const [mutate] = useMutation(MUTATION)

  return (
    <button onClick={() => mutate({ ... })}>
      Click to mutate
    </button>
  )
}
```
Note that you're free to use any of Apollo's other hooks; you'll just have to import them from `@apollo/client` instead. In particular, these two hooks might come in handy:
# Swapping out the RedwoodApolloProvider
As long as you're willing to do a bit of configuring yourself, you can swap out `RedwoodApolloProvider` with your GraphQL Client of choice. You'll just have to get to know a bit of the makeup of the `RedwoodApolloProvider`; it's actually composed of a few more Providers and hooks:

- `FetchConfigProvider`
- `useFetchConfig`
- `GraphQLHooksProvider`
For an example of configuring your own GraphQL Client, see the redwoodjs-react-query-provider. If you were thinking about using react-query, you can also just go ahead and install it!
Note that if you don't import `RedwoodApolloProvider`, it won't be included in your bundle, dropping your bundle size quite a lot!
# Server-side
# Understanding Default Resolvers
According to the spec, for every field in your sdl, there has to be a resolver in your Services. But you'll usually see fewer resolvers in your Services than you technically should. And that's because if you don't define a resolver, Apollo Server will.
The key question Apollo Server asks is: "Does the parent argument (in Redwood apps, the parent argument is named `root`—see Redwood's Resolver Args) have a property with this resolver's exact name?" Most of the time, especially with Prisma Client's ergonomic returns, the answer is yes.
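In other words, the default field resolver behaves roughly like this sketch (my paraphrase, not Apollo's actual source):

```javascript
// Roughly what Apollo falls back to when a Service doesn't export a resolver
// for a field: look the field up on the parent ("root") object, and call it
// if it happens to be a function.
function defaultFieldResolver(fieldName) {
  return (root) => {
    const value = root[fieldName]
    return typeof value === 'function' ? value.call(root) : value
  }
}

// A plain object returned from Prisma already has the right properties:
const user = { id: 1, email: 'fred@example.com', name: 'Fred' }
console.log(defaultFieldResolver('email')(user)) // 'fred@example.com'
```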
Let's walk through an example. Say our sdl looks like this:
```js
// api/src/graphql/user.sdl.js

export const schema = gql`
  type User {
    id: Int!
    email: String!
    name: String
  }

  type Query {
    users: [User!]!
  }
`
```
So we have a User model in our `schema.prisma` that looks like this:
```prisma
model User {
  id    Int     @id @default(autoincrement())
  email String  @unique
  name  String?
}
```
If you create your Services for this model using Redwood's generator (`yarn rw g services user`), your Services will look like this:
```js
// api/src/services/user/user.js

import { db } from 'src/lib/db'

export const users = () => {
  return db.user.findMany()
}
```
Which begs the question: where are the resolvers for the User fields—`id`, `email`, `name`? All we have is the resolver for the Query field, `users`.
As we just mentioned, Apollo defines them for you. And since the `root` argument for `id`, `email`, and `name` has a property with each resolver's exact name (i.e. `root.id`, `root.email`, `root.name`), it'll return the property's value (instead of returning `undefined`, which is what Apollo would do if that weren't the case).
But, if you wanted to be explicit about it, this is what it would look like:
```js
// api/src/services/user/user.js

import { db } from 'src/lib/db'

export const users = () => {
  return db.user.findMany()
}

export const User = {
  id: (_args, { root }) => root.id,
  email: (_args, { root }) => root.email,
  name: (_args, { root }) => root.name,
}
```
The terminological way of saying this is, to create a resolver for a field on a type, in the Service, export an object with the same name as the type that has a property with the same name as the field.
Sometimes you want to do this since you can do things like add completely custom fields this way:
```js
export const User = {
  id: (_args, { root }) => root.id,
  email: (_args, { root }) => root.email,
  name: (_args, { root }) => root.name,
  age: (_args, { root }) => new Date().getFullYear() - root.birthDate.getFullYear(),
}
```
# Redwood's Resolver Args
According to the spec, resolvers take four arguments: `args`, `obj`, `context`, and `info`. In Redwood, resolvers do take these four arguments, but what they're named and how they're passed to resolvers is slightly different:
- `args` is passed as the first argument
- `obj` is named `root` (all the rest keep their names)
- `root`, `context`, and `info` are wrapped into an object; this object is passed as the second argument
Here's an example to make things clear:
```js
export const Post = {
  user: (args, { root, context, info }) =>
    db.post.findUnique({ where: { id: root.id } }).user(),
}
```
Of the four, you'll see `args` and `root` being used a lot.
There's so many terms!
Half the battle here is really just coming to terms. To keep your head from spinning, keep in mind that everybody tends to rename `obj` to something else: Redwood calls it `root`, Apollo calls it `parent`. `obj` isn't exactly the most descriptive name in the world.
# Context
In Redwood, the `context` object that's passed to resolvers is actually available to all your Services, whether or not they're serving as resolvers. Just import it from `@redwoodjs/api`:
```js
import { context } from '@redwoodjs/api'
```
# The Root Schema
Did you know that you can query `redwood`? Try it in the GraphQL Playground while your dev server is running (`yarn rw dev api`):

```graphql
query {
  redwood {
    version
    currentUser
  }
}
```
How is this possible? Via Redwood's root schema. The root schema is where things like `currentUser` are defined.
Now that you've seen the SDL, be sure to check out the resolvers.
# Why Doesn't Redwood Use Something Like Nexus?
This might be one of our most frequently asked questions of all time. Here's Tom's response in the forum:
We started with Nexus, but ended up pulling it out because we felt like it was too much of an abstraction over the SDL. It’s so nice being able to just read the raw SDL to see what the GraphQL API is. | https://redwoodjs.com/docs/graphql | CC-MAIN-2021-17 | refinedweb | 1,224 | 72.26 |
Top 10 Zend Framework interview questions
1. What is Zend Framework? What is the use of it?
2. Which version of PHP does Zend Framework require?
3. How to check whether a form posted or not in Zend framework?
4. What is the difference between Zend_Auth and Zend_Acl?
5. Explain what Lucene is in the Zend framework.
6. What are Decorators in the Zend framework?
7. List the default methods provided by decorators in the Zend framework.
8. What is the Zend engine?
9. What are Plugins in the Zend framework?
10. What is Zend Framework 2?

Zend Framework 2 is an open source framework for developing web applications and services using PHP 5.3+.

Zend Framework 2 uses 100% object-oriented code and utilizes most of the new features of PHP 5.3, namely namespaces, late static binding, lambda functions and closures.

11. What is the minimum PHP version required to run Zend Framework 2?

PHP 5.3 or above is required to run Zend Framework 2.
Hottest Forum Q&A on CodeGuru - February 22
Amit Sebiz is using the rand() function in a for loop, but unfortunately rand() always returns the same number. Do you know why?
I am using the following code to generate a random string. But each time I get the same string output. Can anyone help. Here is the code:
```cpp
char myarray[22];

char* CGeneratorApp::GenerateString()
{
    myarray[21] = '\0';
    for (int k = 0; k < 21; k++) {
        myarray[k] = (char)(((int)rand() % 25) + 65);
    }
    char first[2];
    intUsed++;
    sprintf(first, "%d", intUsed);
    if (intUsed < 10) {
        first[1] = first[0];
        first[0] = '0';
    }
    first[0] = (char)((int)first[0] + 27);
    first[1] = (char)((int)first[1] + 27);
    myarray[8] = first[0];
    myarray[10] = first[1];
    AfxMessageBox(myarray);
    return myarray;
}
```
You need to initialize the random seed using srand() before calling rand(). For example,
```cpp
srand(time(NULL));
```
Besides that, take a look at the following thread, which contains some more information. Also, take a look at the following VC++ FAQs:
- Good random number generators
- Why does my random number generator always return the same set of numbers?
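A minimal sketch of the point (the function and names here are illustrative, not from the thread): rand() is deterministic for a given seed, so seed once per program run, not on every call.

```cpp
#include <cstdlib>
#include <string>

// rand() replays the same sequence for the same seed. Passing the seed in
// makes that easy to see; a real program would call
// std::srand(std::time(nullptr)) once at startup instead.
std::string random_letters(std::size_t n, unsigned seed) {
    std::srand(seed);
    std::string out;
    for (std::size_t i = 0; i < n; ++i)
        out += static_cast<char>('A' + std::rand() % 26);
    return out;
}
```

Seeding inside the generating function itself (or never seeding at all) is exactly what makes every call produce the same string.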
Kibble is working on an application that loads a DLL. The DLL has some classes called CObject and CArchive. Now, he gets a name conflict between the DLL header and the MFC header.
I have been coding a game engine for a while now, the core part of it is in a DLL file. I have started coding the editor for it, and I'm trying to use MFC. The problem is, I have classes named CObject and CArchive. The first thing I tried to do to fix this is include MFC into a namespace, but I seriously doubted it would work:
```cpp
namespace MFC {
    // MFC includes here, just the classwizard-generated ones
};
```
Then use fully qualified MFC names for everything. But, surprise surprise, this didn't work: many hundreds of include file errors, lots of 'name' undefined (particularly time_t), and 'name' is not a member of 'Global namespace' (resulting from things like ::GetWindowTextA in the MFC headers). I could put my engine in a namespace, but that would be a LOT of work (300+ files), so this is my last resort. What are my options here?
Either change your names or place your code in a namespace. As a matter of fact, it would be beneficial if you did place your names in a namespace and change the names. Changing the names to CGameObject, or CGameArchive is a much better choice than CObject or CArchive. If you have many source files, usage of a good editor (one that has multiple file "search and replace") makes changing the names of classes very easy.
Placing MFC names in a namespace will not work at link-time. The linker will be looking for "CObject" for the MFC names, but will only find "MFC::CObject". Then, you will get "unresolved external" errors. In other words, you're wasting your time by placing MFC in a namespace unless you want to rebuild the entire MFC library with the MFC namespace.
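A tiny sketch of why namespacing your own code (rather than MFC) avoids the clash — class bodies here are placeholders, not engine or MFC code:

```cpp
// Your engine's CObject, distinct from the global CObject once namespaced.
namespace engine {
class CObject {
public:
    int kind() const { return 1; }
};
}

// Stand-in for the MFC class of the same name (illustration only).
class CObject {
public:
    int kind() const { return 2; }
};
```

With this, `engine::CObject` and `::CObject` coexist, and the linker still finds MFC's symbols under their original global names.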
As an alternative to using another editor, also have a look at the articles section; there are add-ins and macros such as this one that adds search & replace across multiple files to VisualStudio.
```haskell
-- | Contains a process for easily using stdin, stdout and stderr as channels.
module Control.Concurrent.CHP.Console where

import Control.Concurrent
import Control.Concurrent.STM
import qualified Control.Exception.Extensible as C
import Control.Monad
import Control.Monad.Trans
--import Data.Maybe
import System.IO

import Control.Concurrent.CHP

-- | A set of channels to be given to the process to run, containing channels
-- for stdin, stdout and stderr.
data ConsoleChans = ConsoleChans { cStdin :: Chanin Char
                                 , cStdout :: Chanout Char
                                 , cStderr :: Chanout Char }

-- | Runs the given process, supplying it with a set of console channels.
consoleProcess :: (ConsoleChans -> CHP ()) -> CHP ()
consoleProcess mainProc = do
    [cin, cout, cerr] <- replicateM 3 oneToOneChannel
    tvs@[tvinId, tvoutId, tverrId]
        <- liftIO $ atomically $ replicateM 3 $ newTVar Nothing
    runParallel_
      [ inHandler tvinId (writer cin)
      , outHandler tvoutId stdout (reader cout)
      , outHandler tverrId stderr (reader cerr)
      , do ids <- mapM getId tvs
           (mainProc $ ConsoleChans (reader cin) (writer cout) (writer cerr))
             `onPoisonTrap` (return ())
           poison (reader cin)
           poison (writer cout)
           poison (writer cerr)
           -- Poison won't do it if the handlers are blocked on input or
           -- output.  Therefore we throw them an exception to "knock them
           -- off" their current action and make them exit.
           liftIO yield
           liftIO $ mapM_ killThread ids
      ]
  where
    getId :: TVar (Maybe a) -> CHP a
    getId tv = liftIO $ atomically $ readTVar tv >>= maybe retry return

-- Like liftIO, but turns any caught exceptions into throwing poison
liftIO' :: IO a -> CHP a
liftIO' m = liftIO (liftM Just m `C.catches` handlers)
              >>= maybe throwPoison return
  where
    response :: C.Exception e => e -> IO (Maybe a)
    response = const $ return Nothing
    handlers = [ C.Handler (response :: C.IOException -> IO (Maybe a))
               , C.Handler (response :: C.AsyncException -> IO (Maybe a))
#if __GLASGOW_HASKELL__ >= 611
               , C.Handler (response :: C.BlockedIndefinitelyOnSTM -> IO (Maybe a))
#else
               , C.Handler (response :: C.BlockedIndefinitely -> IO (Maybe a))
#endif
               , C.Handler (response :: C.Deadlock -> IO (Maybe a))
               ]

inHandler :: TVar (Maybe ThreadId) -> Chanout Char -> CHP ()
inHandler tv c = do
    liftIO $ myThreadId >>= atomically . writeTVar tv . Just
    if rtsSupportsBoundThreads
      then (forever $ do
              ready <- liftIO $ hWaitForInput stdin 100
              checkForPoison c
              when ready $ liftIO' getChar >>= writeChannel c)
           `onPoisonTrap` poison c
      else (forever $ liftIO' getChar >>= writeChannel c)
           `onPoisonTrap` poison c

outHandler :: TVar (Maybe ThreadId) -> Handle -> Chanin Char -> CHP ()
outHandler tv h c = do
    liftIO $ myThreadId >>= atomically . writeTVar tv . Just
    (forever $ readChannel c >>= liftIO' . hPutChar h)
      `onPoisonTrap` poison c
```
The new async/await asynchronous programming facilities in .NET solve one of its long standing problems - how to write an elegant application that uses the UI correctly.
One of the biggest problems facing any Windows forms or WPF programmer is that you can't use the UI thread to do much work. If you do the result is an unresponsive application. The proper solution is to use a new thread to do all of the heavy computation and leave the UI thread free to get on with what it is supposed to do - deal with user events. However the computation thread usually has to provide evidence that it has done something and this means that it has to interact with the UI components.
As is well known, UI components aren't threadsafe - hence the rule that only the thread that created a component can access it. You can optionally turn this rule off and allow the worker thread to access the UI, but this isn't a good idea. The correct way to allow the worker thread to access the UI is to use the Invoke method to ask the UI thread to run a delegate that does the update using data provided by the worker thread.
This may be the correct way, but it results in very messy code, and given that it is such a common requirement we really could do with a simpler way of implementing two or more threads working with the UI. The new C# and VB facilities provided by the Visual Studio Async CTP seem to solve the problem neatly, and in a way that is simple enough for even a beginner to use.
If you want to follow the practical details of the example you will first need to download the CTP from the Microsoft Download site. The installation is automatic but at this early stage things are not completely integrated with Visual Studio. Notice that the CTP works with Visual Studio 2010 or the Express editions of either C# or Visual Basic. It also works in the same way with Silverlight.
After installing the CTP you can start a new WPF project and you next need to reference the new assembly. To do this right click on References and select Add Reference. Then select the Browse tab and navigate to the installation directory - usually
My Documents\Microsoft Visual Studio Async CTP\Samples
and select AsyncCtpLibrary.dll
(AsyncCtpLibrary_Silverlight.dll for a Silverlight project).
Next put a button and two TextBlocks on the form - the button will start the process off and the TextBlocks will record its progress.
First let's look at the problem that we are trying to solve. The Button's click event handler calls a method that does a lot of work:
```csharp
private void button1_Click(object sender, RoutedEventArgs e)
{
    textBlock1.Text = "Click Started";
    DoWork();
    textBlock2.Text = "Click Finished";
}
```
You can see that the first two textBlocks are changed to show what is happening. For the purpose of this example DoWork can be simulated by a routine that just loops and so keeps its thread occupied. You get the same overall result if DoWork simply waits for an I/O operation to complete - the important point is that as written it keeps the attention of the UI thread until it is complete. That is:
```csharp
void DoWork()
{
    for (int i = 0; i < 10; i++)
    {
        Thread.Sleep(500);
    }
}
```
keeps the UI thread busy for 5 seconds and to use Thread.Sleep you also have to add:
using System.Threading;
What do you think you see if you run this program?
If you aren't familiar with the way WPF works you might think that you see "Click Started" appear and then after 5 seconds "Click Finished".
What actually happens is that you see both messages appear after the 5 seconds are up, during which time the UI is frozen.
The reason for this behaviour is simply that the UI is frozen from the moment the DoWork method is called and this is usually before the "Click Started" text has been rendered to the display. This is exactly the reason you don't want any intensive computation on the UI thread - in fact you really don't want any computation on the UI thread at all!
Now we can look at how to implement this property using the new async and await.
The first thing is to know is that any method that you put async in front of is an asynchronous method, which means it can be started and stopped rather than just run from first to last instruction. We could create a new method and mark is as asynchronous but to keep the example as much like the synchronous case described above we can simply change the Click event handler into an asynchronous method:
```csharp
private async void button1_Click(object sender, RoutedEventArgs e)
{
    textBlock1.Text = "Click Started";
    DoWork();
    textBlock2.Text = "Click Finished";
}
```
If you do this the compiler will complain that you have an asynchronous method without any awaits and so it will run it as a synchronous method anyway.
To take advantage of the asynchronous nature of the new event handler we have to await the completion of the DoWork method. However if you write:
```csharp
private async void button1_Click(object sender, RoutedEventArgs e)
{
    textBlock1.Text = "Click Started";
    await DoWork();
    textBlock2.Text = "Click Finished";
}
```
Then the compiler will complain that you can't await a method that returns a void. Any method that you await has to return a Task or a Task<T> object where T is the type it would normally return.
In this case, as nothing is returned, we can return a plain Task object:
Task DoWork()
The next question is what Task object do we actually return?
instruction for quiz
Next, write a class Quiz that represents a quiz consisting of true/false questions. This class should have exactly two instance variables: a one-dimensional array of TFQuestions that stores all the true/false questions of the quiz, and an int variable that records the number of true/false questions in the array (because the array may contain unused cells). This class should provide the following public methods:
* public Quiz(int capacity): this constructor initializes the two instance variables by creating an array of TFQuestions of size capacity and setting the number of true/false questions to zero.
* public boolean add(String questionStmt, boolean correctAnswer): this method creates a TFQuestion object representing a true/false question with statement questionStmt and correct answer correctAnswer, inserts this question into the array, and returns true. However, if the array is already full, then no TFQuestion object will be created and added to the array, and this method should return false instead.
* public double play(): this method allows the user to take the quiz once. It should display a true/false question on the screen, prompt the user to enter the answer for that question in the form "t" (for true) or "f" (for false), then display the next question, and so on. The questions should be displayed in the same order that they were added to the array. Finally, this method should return the score as the percentage of questions that have been answered correctly. Note that if the array contains no true/false questions, then this method should display the sentence "This quiz has no questions!" and simply return 100%.
please help me
So far I've got the code for TFQuestion, because someone helped me on this site:
```java
public class TFQuestion {
    String tfQuestion;
    boolean cAnswer;

    public TFQuestion(String tfQtion, boolean tfvalue) {
        tfQuestion = tfQtion;
        cAnswer = tfvalue;
    }

    public String toString() {
        return tfQuestion;
    }

    public boolean checkAnswer(boolean userAnswer) {
        return userAnswer == cAnswer;
    }
}
```
And I am trying to make another class, Quiz, which is connected to TFQuestion. I also have to make QuizDriver, so there are 3 classes in all: TFQuestion, Quiz, QuizDriver. But what I really want to ask about is Quiz. I couldn't ask about QuizDriver yet, because I haven't written it yet...
here is my code for quiz
```java
import java.util.Scanner;

public class Quiz {
    private TFQuestion[] quiz;   // field name must match its uses below
    private int count;

    public Quiz(int capacity) {
        quiz = new TFQuestion[capacity];  // don't overwrite the capacity parameter
        count = 0;
    }

    public boolean add(String questionStmt, boolean correctAnswer) {
        if (count >= quiz.length)
            return false;
        quiz[count] = new TFQuestion(questionStmt, correctAnswer);
        count++;
        return true;
    }

    public double play() {
        if (count == 0) {
            System.out.println("This quiz has no questions!");
            return 100.0;
        }
        Scanner keyboard = new Scanner(System.in);
        int numCorrect = 0;                  // declare the counter outside the loop
        for (int i = 0; i < count; i++) {    // loop over count, not quiz.length
            System.out.println(quiz[i]);
            System.out.println("Enter the answer for this question with 't' for true and 'f' for false.");
            char answer = keyboard.next().charAt(0);  // Scanner has no nextChar()
            if (quiz[i].checkAnswer(answer == 't'))
                numCorrect++;
        }
        return (numCorrect / (double) count) * 100.0;
    }
}
```
The quiz code itself uses an array, too...

I'm really not sure about the code in boolean add and double play.

please help me

and thanks to the repliers for the TFQuestion code
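One detail worth double-checking in play() is the percentage division: if both operands are int, Java truncates before the multiplication. A tiny self-contained demo (numbers are made up):

```java
public class DivisionDemo {
    public static void main(String[] args) {
        int numCorrect = 3, numQuestions = 4;

        double truncated = numCorrect / numQuestions * 100.0;            // 3 / 4 is int division -> 0
        double correct   = (numCorrect / (double) numQuestions) * 100.0; // 0.75 * 100.0

        System.out.println(truncated); // 0.0
        System.out.println(correct);   // 75.0
    }
}
```

Casting one operand to double (or declaring the counter as double from the start) keeps the fraction.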
Minimum Swaps to Make Strings Equal

Suppose we have two strings s1 and s2 of equal length, consisting only of the letters "x" and "y". Our task is to make these two strings equal to each other. We can swap any two characters that belong to different strings, which means: swap s1[i] and s2[j]. We have to find the minimum number of swaps required to make s1 and s2 equal, or return -1 if it is impossible to do so. So if the strings are s1 = "xy" and s2 = "yx", then the output will be 2: if we swap s1[0] and s2[0], then s1 = "yy" and s2 = "xx"; then swapping s1[0] and s2[1] gives s1 = "xy" and s2 = "xy".
To solve this, we will follow these steps −
- Set x1, x2, y1 and y2 as 0
- for i in range 0 to size of s1
- a := s1[i] and b := s2[i]
- if a is not same as b, then
- if a = ‘x’ then increase x1, otherwise increase y1 by 1
- if b = ‘x’ then increase x2, otherwise increase y2 by 1
- if (x1 + x2) is odd or (y1 + y2) is odd, then return -1
- return x1/2 + y1/2 + (x1 mod 2) * 2
Example (C++)
Let us see the following implementation to get a better understanding −
```cpp
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    int minimumSwap(string s1, string s2) {
        int x1 = 0, x2 = 0, y1 = 0, y2 = 0;
        for (int i = 0; i < s1.size(); i++) {
            char a = s1[i];
            char b = s2[i];
            if (a != b) {
                if (a == 'x') x1++;
                else y1++;
                if (b == 'x') x2++;
                else y2++;
            }
        }
        if ((x1 + x2) & 1 || (y1 + y2) & 1) return -1;
        return x1 / 2 + y1 / 2 + (x1 % 2) * 2;
    }
};

int main() {
    Solution ob;
    cout << ob.minimumSwap("xy", "yx");
}
```
Input
"xy" "yx"
Output
2
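The same counting argument can be written as a free function; the extra sanity-check cases below are mine, not from the article:

```cpp
#include <string>

// Count mismatched positions of each kind: xy = s1 has 'x' where s2 has 'y',
// yx = the reverse. Each same-kind pair is fixed in 1 swap; one leftover of
// each kind costs 2 swaps; an odd total of mismatches is impossible.
int minimumSwapCount(const std::string& s1, const std::string& s2) {
    int xy = 0, yx = 0;
    for (std::size_t i = 0; i < s1.size(); ++i) {
        if (s1[i] == s2[i]) continue;
        if (s1[i] == 'x') ++xy; else ++yx;
    }
    if ((xy + yx) % 2 != 0) return -1;
    return xy / 2 + yx / 2 + (xy % 2) * 2;
}
```

For example, "xy" vs "yx" gives 2 as above; "xx" vs "yy" needs just 1 swap; "xx" vs "xy" has an odd number of mismatches, so the answer is -1.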
Created on 2019-08-05 18:03 by xtreak, last changed 2020-07-09 23:47 by terry.reedy. This issue is now closed.
Currently, the basic REPL for Python offers keywords as part of autocompletion, but IDLE doesn't. I was trying to build an async REPL on top of IDLE to support top-level await statements, since "python -m asyncio" doesn't provide a good REPL, and found while using it that having keywords like async/await in the autocomplete list, as in the basic REPL, helps type faster. I couldn't find any old issues explaining why keywords were excluded, so I thought of filing a new one for this suggestion.
The global completion list (i.e. when not completing a file name or object attribute) is already full of all the built-ins, imported modules and variables. So IMO we'd need a good reason to add yet more options into the completions list.
Personally, I don't think that adding all of the keywords to that list would be helpful: They are all short words and most of them must be memorized anyways to work with Python.
For instance, I don't recall this being brought up by those who often teach newcomers with IDLE, such as Raymond Hettinger, when discussing what's missing in IDLE. I'd be happy to get more input from them on this.
To be clear, I'm currently -1 on this suggestion.
If keywords are included when the REPL has tab completions (which Windows doesn't), then it is plausible that IDLE should. It could be considered part of 'Shell should (mostly) imitate REPL'. But I can see Tal's point, though the relative expansion is pretty small. And there is nothing on the master completions issue #27609 where I collected known not-yet-rejected suggestions and ideas.
The implementation is trivial. Add two new lines to autocomplete.py. So you can easily patch a private copy. I am preparing a minimal PR.
```python
import keyword  # add
...
def fetch_completions(self, what, mode):
    ...
    bigl = eval("dir()", namespace)
    bigl.extend(keyword.kwlist)  # add
    bigl.sort()
```
True, False, and None are also in builtins, so cannot serve as a test.
---
A separate idea: annotate completion list, at least as an option, with 'keyword' or class, possibly prefixed with 'built-in', so 'built-in function', 'function', and so on.
Thanks Terry, I used a similar patch. My main use case was typing in the normal shell, which autocompletes keywords, and I was curious whether the difference was intentional. I didn't know that Windows didn't give keywords. The keywords are short and added very rarely, and perhaps the benefit of the bigger completion list in actual usage might be low, since no one opened this issue before. As Tal mentioned, I am open to others' feedback.
Since the word "main" is short and since the dunder prefix is common, I don't expect to get much value out of adding '__main__'. ISTM, this just increases the risk of a false positive for a given dunder method.
> To be clear, I'm currently -1 on this suggestion.
I concur.
Raymond, since there is no proposal to 'add __main__', I don't understand your response. For the case in question, the completion list continues to be based on the keys in __main__.__builtins__.__dict__, and main__.__dict__, as it always has been.
Cheryl said "This looks good" on the PR, while noting that True should not be added, as it is already present. After trying out REPL completions in macOS Terminal, I *really* want to be able to type 'im'<tab> and have 'import' appear. (When there is just one match, it is filled in without displaying a list of one item.) I increasingly suffer from 'dystypia' (which I coined as the reverse of 'dyslexia'), and 'import' is one of my worst words. And I have to type it daily. On #17238, Ramchandra Apte also requested completion of 'import'.
Sorting keywords by length, we get:
```python
>>> sorted(keyword.kwlist, key=lambda s: len(s))
['as', 'if', 'in', 'is', 'or', 'and', 'def', 'del', 'for', 'not', 'try', 'None',
 'True', 'elif', 'else', 'from', 'pass', 'with', 'False', 'async', 'await',
 'break', 'class', 'raise', 'while', 'yield', 'assert', 'except', 'global',
 'import', 'lambda', 'return', 'finally', 'continue', 'nonlocal', '__peg_parser__']
```
I agree that adding 2 and 3 letter keywords is not useful. Among 4 letter keywords, None and True are already present from builtins. 'elif' and 'else' would need at least 3 and 4 keystrokes to complete ('e', <Tab>, Down for 'else', <Enter>). 'from' would need at least 4 because of 'filter' and 'frozenset'. 'pass' would need 3 because of 'pow'. 'with' would require at least 5 if 'while' were added. So skip length 4 keywords also.
So I am changing the proposal to adding the 17 keywords (other than False, already present) of length 5 or more. These include 'async' and 'await', requested by Karthikeyan in the opening post above.
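That filter can be written directly; a sketch (the exact count varies across Python versions, since e.g. `__peg_parser__` existed only briefly):

```python
import keyword

# Keywords of length >= 5; 'False' is skipped because the builtins list
# already supplies it to the completer.
long_keywords = [kw for kw in keyword.kwlist if len(kw) >= 5 and kw != "False"]
print(long_keywords)
```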
Auto-completion is not just about saving keystrokes. For example, I assume that many Python beginners just read through the completion list to see what the options are. This inconsistency would be hard to grok.
I think that including only some of the keywords in the completions list could potentially be very confusing. We'd have "class" but not "def", "finally" but not "else", "while" but not "for".
If the standard REPL completes keywords (at least on some platforms) that's a good enough argument to include them in IDLE, in my opinion.
Also, note that the keywords would only be included in the suggested completions when not in a string and when not completing an attribute. So, for example, such a change could not possibly affect the completion of dunder method names.
Tal, I suggested the compromise because of your original objection. Since you think half is worse than all, I will revert the change. It did get me to do a needed rewrite of the Completions section of the IDLE doc.
New changeset bce2eb4646021910aa4074d86f44a09b32d0b2b2 by Terry Jan Reedy in branch 'master':
bpo-37765: Add keywords to IDLE tab completions (GH-15138)
New changeset fd27fb7f3dd157294f05bb060f7efd243732ab2d by Miss Islington (bot) in branch '3.9':
bpo-37765: Add keywords to IDLE tab completions (GH-15138)
New changeset 3d1c06e8b9eec5fc1ea2ed4dc1ea79c705da8ab8 by Miss Islington (bot) in branch '3.8':
bpo-37765: Add keywords to IDLE tab completions (GH-15138)
PR 15138 always adds keywords to the big list for the current module. They are also normally present in the small list, when it only excludes '_' names. But if the module being edited contains '__all__', the small list, which is the first list presented, is currently just __all__. This excludes builtins and now keywords and possibly non-_ names defined in the module. I think this restriction is a mistake; __all__ defines a limited external view of the module. It is not intended to restrict the names used in the module. I will remove the restriction (and a crash bug it contains) in a partly completed PR for #37766. | https://bugs.python.org/issue37765 | CC-MAIN-2020-45 | refinedweb | 1,169 | 72.97 |
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#24658 closed Bug (fixed)
Schema tests fail when run in isolation
Description
Because tables are deleted in tearDown, when running an individual test the tables still exist, and any create_model operation fails.
Change History (8)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
Also, I ran this on both the 1.8 branch and master (at commit f043434174db3432eb63c341573c1ea89ef59b91), with Python 2.7.5+.
I'm releasing this ticket, so that someone else can take a look. If you provide more information, I'll check it out from time to time.
comment:5 Changed 4 years ago by
Claude, could you give more details on which tests fail when run in isolation?
I couldn't reproduce with:
./runtests.py schema.tests.SchemaTests --settings=test_postgres
or
./runtests.py schema.tests.SchemaTests.test_creation_deletion
or
./runtests.py schema.tests.SchemaTests.test_creation_deletion schema.tests.SchemaTests.test_fk --settings=test_postgres
comment:6 Changed 4 years ago by
I found the issue. The problem happens only with tests using the Note model, and the cause is that the Note model is missing the `apps = new_apps` Meta attribute. I'll fix that ASAP.
I've tried to test your scenario, but I think we need more information. I don't know if I understood correctly, but I tried to run just one individual test, using the SchemaEditor.create_model method to create a new model class and instantiate it. Here's my code, which works (I can create a dynamic model within the test and instantiate it with no errors):
from django.test import TestCase
Then from the command line, I ran the test suites with all 3 of these commands:
Please give more details on how you're getting this issue. | https://code.djangoproject.com/ticket/24658 | CC-MAIN-2019-09 | refinedweb | 315 | 62.07 |
Created on 2010-04-15 00:51 by george.hu, last changed 2013-11-18 11:09 by serhiy.storchaka. This issue is now closed.
Have this problem in python 2.5.4 under windows.
I'm trying to return a list of files in a directory by using glob. It keeps returning a empty list until I tested/adjusted folder name by removing "[" character from it. Not sure if this is a bug.
glob.glob("c:\abc\afolderwith[test]\*") returns empty list
glob.glob("c:\abc\afolderwithtest]\*") returns files
When you do :
glob.glob("c:\abc\afolderwith[test]\*") returns empty list
It looks for all files in three directories:
c:\abc\afolderwitht\*
c:\abc\afolderwithe\*
c:\abc\afolderwiths\*
Ofcourse they do not exist so it returns empty list
```
06:35:05 l0nwlf-MBP:Desktop $ ls -R test
1  2  3
06:35:15 l0nwlf-MBP:Desktop $ ls -R test1
alpha  beta  gamma
```

```python
>>> glob.glob('/Users/l0nwlf/Desktop/test[123]/*')
['/Users/l0nwlf/Desktop/test1/alpha', '/Users/l0nwlf/Desktop/test1/beta', '/Users/l0nwlf/Desktop/test1/gamma']
```
See the explanation at , which uses the same rules.
Ok, what if the name of the directory contains "[]" characters? What is the escape string for that?
The documentation for fnmatch.translate, which is what ultimately gets called, says:
There is no way to quote meta-characters.
Sorry.
If you want to see this changed, you could open a feature request. If you have a patch, that would help!
You probably want to research what the Unix shells use for escaping globs. The only way is to program a filter on the listdir.
The only way is to write a filter on the listdir.
You repeated the same comment twice and added an 'unnamed' file. I assume you did it by mistake.
Shouldn't the title be updated to indicate that fnmatch is the true source of the behavior? (I'm basing this on the documentation indicating that fnmatch is invoked by glob.) I'm not using glob, but fnmatch, in my attempt to find filenames that look like "Ajax_[version2].txt".
If nothing else, it would have helped me if the documentation would state whether or not the brackets could be escaped. It doesn't appear from my tests (trying "Ajax_\[version2\].txt" and "Ajax_\\[version2\\].txt") that 'escaping' is possible, but if the filter pattern gets turned into a regular expression, I think escaping *would* be possible. Is that a reasonable assumption?
I'm running 2.5.1 under Windows, and this is my first ever post to the bugs list.
Following up...
I saw Eric Smith's 2nd note (2010-04-15 @1:27) about fnmatch.translate documentation stating that
"There is no way to quote meta-characters."
When I looked at it, I did not see this statement appear anywhere. Would this absence be because someone is working on making this enhancement?
I don't think so. That quote came from the docstring for fnmatch.translate.
```python
>>> help(fnmatch.translate)
Help on function translate in module fnmatch:

translate(pat)
    Translate a shell PATTERN to a regular expression.

    There is no way to quote meta-characters.
```
The 3.1.2 doc for fnmatch.translate no longer says "There is no way to quote meta-characters." If that is still true (no quoting method is given that I can see), then that removal is something of a regression.
The note about no quoting meta-chars is in the docstring for fnmatch.translate, not the documentation. I still see it in 3.1. I have a to-do item to add this to the actual documentation. I'll add an issue.
As a workaround, it is possible to make every glob character a character set of one character (wrapping it with [] ). The gotcha here is that you can't just use multiple replaces because you would escape the escape brackets.
Here is a function adapted from [1]:
import re

def escape_glob(path):
    transdict = {
        '[': '[[]',
        ']': '[]]',
        '*': '[*]',
        '?': '[?]',
    }
    rc = re.compile('|'.join(map(re.escape, transdict)))
    return rc.sub(lambda m: transdict[m.group(0)], path)
[1]
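For clarity, here is that workaround as a self-contained sketch (it needs `import re`, which the snippet assumes), together with a check that the escaped pattern really does match the literal name under fnmatch:

```python
import re
import fnmatch

def escape_glob(path):
    # Wrap each glob metacharacter in a one-character set, e.g. '*' -> '[*]'.
    transdict = {'[': '[[]', ']': '[]]', '*': '[*]', '?': '[?]'}
    rc = re.compile('|'.join(map(re.escape, transdict)))
    return rc.sub(lambda m: transdict[m.group(0)], path)

escaped = escape_glob('Ajax_[version2].txt')
print(escaped)  # Ajax_[[]version2[]].txt
print(fnmatch.fnmatch('Ajax_[version2].txt', escaped))  # True
```

Note that this version also wraps ']', which is harmless for fnmatch: '[]]' is read as a set containing only ']'.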
I agree with answer number 6. The resolution mentioned is quite easy and very effective.
Thanks
The attached patch adds support for '\\' escaping to fnmatch, and consequently to glob.
I have comments on the patch but a review link does not appear. Could you update your clone to latest default revision and regenerate the patch? Thanks.
Noblesse oblige :)
> The attached patch adds support for '\\' escaping to fnmatch, and consequently to glob.
This is a backward incompatible change. For example glob.glob(r'C:\Program Files\*') will be broken.
As flacs says, a way to escape metacharacters in glob/fnmatch already exists. If someone wants to match the literal name "Ajax_[version2].txt", they should use the pattern "Ajax_[[]version2].txt". The documentation should explicitly mention this.
It will be good also to add new fnmatch.escape() function.
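To illustrate that existing workaround (nothing new here, just standard fnmatch behavior): '[[]' is a one-character set containing only '[', so it matches that character literally, while a bare ']' outside a set already matches itself:

```python
import fnmatch

# Unescaped, '[version2]' is a character set matching ONE character
# from {v, e, r, s, i, o, n, 2}, not the literal bracketed text:
print(fnmatch.fnmatch('Ajax_v.txt', 'Ajax_[version2].txt'))           # True
print(fnmatch.fnmatch('Ajax_[version2].txt', 'Ajax_[version2].txt'))  # False

# Escaped as suggested, it matches the literal name:
print(fnmatch.fnmatch('Ajax_[version2].txt', 'Ajax_[[]version2].txt'))  # True
```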
Here is a patch which add fnmatch.escape() function.
I am not sure if escape() should support bytes. translate() doesn't.
I think the escaping workaround should be documented in the glob and/or fnmatch docs. This way users can simply do:
import glob
glob.glob(r"c:\abc\afolderwith[[]test]\*")
rather than
import glob
import fnmatch
glob.glob(fnmatch.escape("c:\\abc\\afolderwith[test]\\") + "*")
The function might still be useful with patterns constructed programmatically, but I'm not sure how common the problem really is.
> I think the escaping workaround should be documented in the glob and/or fnmatch docs.
See issue16240. This issue left for enhancement.
Patch updated (thanks Ezio for review and comments).
The workaround is now documented.
I'm still not sure if this should still be added, or if it should be closed as rejected now that the workaround is documented.
A third option would be adding it as a recipe in the doc, given that the whole functions boils down to a single re.sub (the user can take care of picking the bytes/str regex depending on his input).
It is good if the stdlib has a function for escaping any special characters, even if this function is simple. There are already escape functions for re and sgml/xml/html.
The private function glob.glob1 is used in Lib/msilib and Tools/msi to prevent unexpected globbing in the parent directory name. ``glob.glob1(dirname, pattern)`` should be replaced by ``glob.glob(os.path.join(fnmatch.escape(dirname), pattern))`` in external code.
I've attached fnmatch_implementation.py, which is a simple pure-Python implementation of the fnmatch function.
It's not as susceptible to catastrophic backtracking as the current re-based one. For example:
fnmatch('a' * 50, '*a*' * 50)
completes quickly.
I think it should be a separate issue.
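As background for readers, fnmatch matches by translating the shell pattern into a regular expression and handing it to re; that translation is where the backtracking risk comes from. The exact translated form varies between Python versions:

```python
import fnmatch
import re

pat = fnmatch.translate('*a*')
print(pat)  # e.g. '(?s:.*a.*)\\Z' on recent Python 3; version-dependent

print(bool(re.match(pat, 'banana')))  # True
print(bool(re.match(pat, 'xyz')))     # False
```

With a pathological pattern like '*a*' * 50, the nested '.*' groups can force the re engine into heavy backtracking, which is what a hand-written matcher avoids.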
Escaping for glob on Windows is not so trivial. Special characters in the drive part have no special meaning and should not be escaped. I.e. ``escape('//?/c:/Quo vadis?.txt')`` should return ``'//?/c:/Quo vadis[?].txt'``. Perhaps we should move the escape function to the glob module (because it is glob's peculiarity).
Here is a patch for glob.escape().
Could anyone please review the patch before feature freeze?
Updated patch addresses Ezio's and Eric's comments.
Updated patch addresses Eric's comment.
Looks good to me.
New changeset 5fda36bff39d by Serhiy Storchaka in branch 'default':
Issue #8402: Added the escape() function to the glob module.
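For reference, the merged function shipped as glob.escape() in Python 3.4; it wraps each of '*', '?' and '[' in a one-character set, and the drive part of Windows paths is left untouched:

```python
import glob

print(glob.escape('Quo vadis?.txt'))       # Quo vadis[?].txt
print(glob.escape('Ajax_[version2].txt'))  # Ajax_[[]version2].txt
```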
Thank you Ezio and Eric for your reviews. | http://bugs.python.org/issue8402 | CC-MAIN-2015-11 | refinedweb | 1,291 | 69.28 |
=> Fender has their Pre CBS and Post CBS (hecha en
Mexico, "made in Mexico") eras.
=> Peavey was accused of making non-standard
solid-state components that were only available in
their factories.
=> Behringer's stomp box products were delayed for
import to the USA over patent disputes.
It seems a company's success is tied directly to its
ability to overcome adversity and misperception. So
what if their cable tester looks like another one? Did
they copy it exactly, did they break any laws, or did
they license the original design? Were any laws
actually broken? What was settled with the stomp box
patents? I believe the stomp boxes are available in
the USA now.
I find it funny, sad, and ironic that in a linux
related list someone is complaining that a
company reused somebody's existing widget rather than
coming up with its own proprietary widget.
Here is a question to the person who complained about
this: Do you use a Linux distro, or do you use FIPS to
partition your hard drive, then build your entire
Linux system from source (and I am not referring to
Gentoo)?
I will admit Behringer had a rough start, but I like
their products, and I like their prices. When I go to
purchase something, there are many factors to
consider. Sometimes I choose Behringer, but not all
the time. If you wanted to bash them about their linux
support (or lack of it) I would be happy to join in,
especially with the V-Amp and FCB1010. On this
thread, I will have to come to their defense.
BTW ironic does not mean made of iron.
-=cybersean3000=-
Received on Thu Mar 2 20:15:08 2006
This archive was generated by hypermail 2.1.8 : Thu Mar 02 2006 - 20:15:08 EET | http://lalists.stanford.edu/lau/2006/03/0123.html | CC-MAIN-2017-17 | refinedweb | 317 | 73.78 |
James Pond - Pond is Back! (Canceled)
Pond will be back with a new campaign! See below for more details.
About
Cod Alert ! Cod Alert ! Our top underwater agent – James Pond has responded to an inkjet message marked “RED HERRING” and is now octopied on a top secretion mission.
Fin out the latest news at the squid links here.
Join the conversation on Facebook & Twitter.
20 years ago we saw Pond complete his final mission and for many of us we've been waiting for the next installment to come along. Well wait no longer since your favourite Underwater F.I.5.H Agent, Bubble 07, with his Licence to Gill is about to get the Kickstarter re-boot we've all been waiting for!
We love Pond! We were there for his last adventure back in 1993 and have long been waiting for his return. We've wished for a new game that combines the characters and gameplay we loved all those years ago with:
- Modern devices
- Updated graphics and sound
- The original designer
- Even more of that punny/British Pond humour!
So we're starting from scratch and we need your help!
The High Concept:
More details on the game's design can be found here.
Our £100,000 initial funding goal is pretty modest given our ambition for the game, but it's designed that way to try and make sure a Pond game does get made. Beyond that we have a series of stretch goals from additional game features to localisation and platforms that as a backer you'll get to vote on.
Our plan for the initial funding goal is to develop initially for release onto PC and Mac and distribute the game to backers via a DRM free download link. Of course we also have plans to get onto Steam and will be offering all the other cool stuff (beta access, achievements and badges) that comes with that!
Things that made Pond great that we want to retain:
- Non Linear gameplay
- Platformer
- Vehicles
- Boss fights!
- The Outfits
- Useable pick up items (the umbrella!)
- Gadgets! The Radar Gun
- Classic arcade features, like invincibility pickup, high scores, score counter...
- Sidekicks
- Easter eggs!
- Hidden areas
- Bonus Levels
- That all to famous tongue in cheek humour!
What we would like to add:
- Physics engine
- Underwater sections (interlaced with Land based gameplay)
- Popular game themes (Zombies, Vampires, Werewolves for instance. Pond's all about the parody and reference)
- More use of vehicles in multiple settings.
- The best bits of all Pond games, think of it as a Pond greatest hits.
- Increased used of outfit unique abilities (RoboSuit grapple ability, Rail Boots etc..)
- New Graphical style
- Multiple solutions to levels, using pickups, gadgets etc.
- Cameos from famous Pond characters, new characters. (Dr. Maybe is back!)
- Famous pond locations, underwater, north pole, moon etc.
No Modern Nonsense!
We want to bring back Pond for the things you loved about it, and that includes retro gaming features. We're talking about Boss levels (that are tricky to beat!) high scores, easter eggs, cheat codes and above all a focus on playability. We're not interested in prying more money from your pockets with DLC bolt ons, trying to support a freemium model by pushing IAPs, or constantly spamming your Facebook wall so we can get all your mates involved (although if you could do that, that would be awesome!)
This is one of the nice things about Kickstarter, you make your pledge and we get to make a game (something we love) so we can deliver it to you (something you want!)
Penguins:
We all know that Penguin (one of the chocolatiest biscuits in the world, apparently…) featured in the original game; although at the time it was some big corporate product placement, the designer's slightly irreverent attitude and their sheer awesomeness have made them part of the game's charm.
So we want them back! We're currently trying to get into talks with McVitie's to see their return, but if you have another product that you'd like 'placed' (And wouldn't mind the Pond treatment) then get in contact. After all we are looking for funding!
- Acquired the full historic rights to James Pond!
- Convinced Chris Sorrell, the original designer to be involved.
- Tracked down the original team and interviewed them ruthlessly!
- Generated some initial funding.
- Setting of our next funding goal and deciding what we want to achieve!
Audio
It is sad to say that Richard Joseph is no longer with us, otherwise we'd ask him to be involved at the drop of a hat!
That doesn't mean his iconic work over the 3 games can't feature in the new one though. We also have all the rights to James Pond music so Richard's legacy (and infuriatingly catchy pieces!) can live on in the new Pond.
Let's not forget the sounds either! James Pond had some pretty iconic sound design that interestingly varied from machine to machine. We'll be reviving some of the more iconic pieces and making sure they end up in the new game.
We're working closely with PJ Belcher Pro Audio to make sure the audio in game is top notch. Whether it be new sounds and music, re-imagining of the old stuff, or faithful mastering of the retro pieces and SFX we'll make sure the audio is brought right up to date!
Meet the Team!
Chris Sorrell - Chris came up with the idea for a game with a cartoon fish called Guppy back when he was just 18! Attracting the attention of Millennium Interactive they turned it into James Pond and the rest is history. Chris was responsible for the majority of the programming, design, art and even some sound on all Pond titles! He's as integral to Pond as a gill is to a fish so we're very pleased to have him on board!
Although Chris has basic involvement with the project, participating in the campaign and producing a high end concept document for the game itself, his full involvement in the actual development is a stretch goal of ours (we asked him to work 'for the love of it' but something about bills and mouths to feed…) See our stretch goals below.
Ian Saunter - Ian was the Development Director at Millennium back in the day and we've been chatting with him about Pond for a while now! He now runs another company called Optricks Media (check them out!) who do some cool Augmented Reality stuff and AppToy design.
Michael Hayward - Now running his own company in alternative health medicine, Michael was the Managing Director at Millennium, who aside from being responsible for all the promotional, marketing and merchandising of Pond also came up with the name James Pond which would then carve out the very future of the franchise. We're currently talking about getting him onboard with the new project for some pretty cool stuff!
Jeremy Cooke - The man behind Gameware. After buying the full rights to Pond back in 2003 Jeremy hasn't really done much with Pond until now. Why now I hear you ask? Because the timing is just right! Crowd funding and digital distribution has opened up a whole world of opportunities for the independent developer and what better time to bring back Pond than on the 20th anniversary of his last mission!
PJ Belcher - The project manager and producer. PJ is the new guy on the scene and largely responsible for running this campaign. He'll be closely overseeing the development of the new game as a producer and be directing the audio. He's also the silly voice you hear in the Kickstarter video, go mock him on Twitter @pjbelcher.
Art Team, Assemble! - None of the lovely art you see on this page, Twitter and Facebook would have been possible without the awesome talents of Woody over at Utopian World of Sandwiches (The guys behind Chompy Chomp Chomp!) and Tom from 2D forever. Go check them out and show them some love. But don't hire them because they're really good and we want them for ourselves….
By using Kickstarter it allows us to involve supporters and the fans on a level we could never have dreamt of before. By backing the project you'll get a vote in all our key design and development decisions, making sure we make a game with the fans and not just for them!
After approaching a number of publishers to become involved in making a new Pond game, we found none were willing to part with sufficient money to make a new game unless they also owned the IP. We think they've missed a great opportunity and so we have turned to the fans to make this project happen!
It also means we can gauge how many people want to see a new Pond game before just going ahead and making one!
The money you pledge isn't going on fast cars, vacations or any flash swag, it's all going towards the design and development for a new Pond game to be delivered across new platforms - the Designers, Programmers, Artists, Audio, tech licensing, general project management and delivery costs.
The more we raise the bigger the game!
Well firstly there are the reward tiers! They are all cumulative so the more you pledge the more you get and the closer we come to crafting and delivering that Pond game!
The tiers offer a whole host of kickstarter exclusive goodies from Pond merchandise only available to backers, to digital offerings and in game content.
We are also offering a never before seen level of backer involvement. As a backer you will gain access to a backers only exclusive closed Facebook group, where you'll get to decide which goals we stretch to and where we'll be asking the community for input on everything from design considerations (2D or 3D?) to the name of the game itself! We'll also take input and suggestions from the community. Think you've got a good fishy pun? Well let's hear it!
You'll also see your name in the credits (see tiers for available levels) have exclusive content to show you were a backer (special builds, badges, share content, avatars! etc) and loads of goodies!
So stake your place in gaming history and back Pond!
We're doing stretch goals a little differently, Flex goals! Below is a list of all the things we would like to do beyond the initial funding goal of £100K each with their estimated cost. To make sure the game is made the exact way you want it to be these flex goals will be put to a vote for all backers. Pledge more to help us reach these goals and then vote for the ones you want to see!
Platforms:
Beyond our initial goal it's going to take some additional funding to get the game onto certain platforms. Aside from the time and administration cost involved there's also unique design considerations for each platform, like 2nd and touch screens. Vote for the platforms you want to see and dig that little deeper to see your favorite one (or 2!) sporting Pond!
Wii U - £40k
3DS - We would love to include this, but at this time without significant further research we can't guarantee it. If there is enough funding though this will be a top priority!
iOS & Android - £25k
Ouya - £15k
Linux - £15k
PS4 (Online) - £40k
PS3 (PSN) - £30k
PS Vita - £40k
Xbox (XBLA) - £30k
Features:
Get Chris Sorrell on board! - £30k
Get the mind of the man behind the fish FULLY involved in the project. Although he's currently working with the game on a basic level to have him as a Creative Lead from initial concept to delivery needs payment. Something about having to eat or something...
Co-operative Play! - £25k
So many platformers have employed this with great success and it's time Pond got the co-op treatment. In-the-room multiplayer is a fundamental of retro gaming. Vote for this tier and your snide comments to your friend sitting beside you can really hit home! Live chat just doesn't have that impact *sigh*
Extended Campaign - £20k
With this additional funding we'll be able to make the game's campaign even longer incorporating more platforms, puzzles, storyline and fun than you can shake a fish finger at! We're talking Robocod long!
Playable and Legacy Sidekicks! - £25k
Finnius! Getting multiple playable characters with unique abilities to work well takes a lot of effort. This funding boost will see that happen. Not only will Finnius return (We've coaxed him out of retirement, but he's demanding cash!) but we'll also introduce some new characters, ooooh.
Extra and Legacy Gadgets - £20k
Spring boots, the umbrella, bubble guns and cake cannons! We'll incorporate all your favourite gadgets from Quill branch and a few new ones into the gameplay on top of the ones we already have planned.
Extra and Legacy Vehicles - £20k
Pond's Car, Plane and even a bathtub! With this funding we can make sure these classic vehicles make a re-appearance with some new things along the way! This is on top of the ones we already want to incorporate like the Golden Bathtub!
Extra and Legacy Bosses - £20k
Giant Evil Teddy Bear, Giant Evil car, Giant Evil Ballri.... Hang on, I'm seeing a theme here. Well if it was giant and evil expect to see it again! Along with some new oversized, morally compromised baddies on top of the ones we have planned.
Extra and Legacy Locations - £20k
The shore of 3 Mile Island, the North Pole and the Moon! Legendary locations for Pond's famous missions. With this funding we'll make sure they see their way back into the new game along with even more locations than we have planned!
Multiple Level Solutions - £25k
A gaming feature we feel would suit the new Pond game very well! Think ahead, keep your wits about you and use those FI5H smarts to figure out how YOU'D like to complete the mission. Then go back and do it the other ways!
Procedural Level Generation - £30k
Pond has always been an innovator and we've seen a few other platformers put this to good use. Well how about Pond jumps on that band wagon? Real time level creation means a unique experience for each player, but doing it right will take time. This extra money can make that happen!
To say thank you to all you lovely people for your support we're offering a whole load of goodies (as well as the game of course!)
Just check out the tiers on the right side of the screen to what you can get.
We've got some pretty cool stuff - from 'Pond-Backer' T-shirts so you can let the world know you played your part in this project, to a personalised Licence to Gill ID Card!
We feel it's important though to point out that our main priority is to make a game. So if you'd like the maximum amount of your hard earned money to go towards development then check out the 'No Physical Stuff' tier.
For those of you that do want some cool merchandise but also want the most of your money to go toward game development, we are setting up a store where you can choose the items you want and buy them at a discounted price exclusive to project backers!
Risks and challenges
The game world moves (and develops) very quickly and new platforms are constantly emerging and so although we've made the utmost effort to put as much information as possible together before launching this campaign it's worth mentioning that some things may change.
Whether it's adding more platforms, changing available merchandise or our proposed costs of the stretch goals we will do our utmost to make sure the Backer's interests are first and foremost protected and that in all cases we will endeavour to make sure that any and all changes are for equal or greater value than those currently in place.
There are many platforms that we would like to get the new Pond onto but beyond time and money there are often additional obstacles that are simply out of our control. In all cases we will do everything in our power to make sure what we promise does happen. Given Gameware's good standing in the industry and the high profile of this campaign there really shouldn't be any issues – and in any event we will keep you informed at all times.
Questions about this project? Check out the FAQ
Support
Funding period
- (30 days) | https://www.kickstarter.com/projects/gameware/james-pond-pond-is-back | CC-MAIN-2018-43 | refinedweb | 2,832 | 70.73 |
Learning Dart: simple, basic data for beginners, and how to use it for building a package. A kickoff for beginner Dart programmers to develop a glorious package.
Collect people together, let them present ideas, collect the best ideas into files, and try to group them meaningfully. DO NOT build many classes in this phase, to keep your mind open for intuitive solutions. Leave room and time to resolve what the direction of this package will be. Practise using Dawo by finding its keywords in the editor.
The first dawo version seems to be a mess, and, yes, it is intentionally so. dawo 0.0.1 shows a starter programmer's confusion, and records the way and the steps out of this mess toward clarity and understanding.

0.0.1 demonstrates how messy data is when it is not inside classes. Check for pollution of the public namespace!!

Version 0.0.2 is meant to weed out the material and create a couple of classes. 0.0.3 might be for building the first control-flow structures to really use this app.
0.0.x Basic idea and orientation of the package to resolve.
0.0.x Incubator idea: move the first too-big parts to independent packages. Maybe chore and team.
aldente_func : Simple functions, including some logical errors. Find them!
bat_loop : control structures.
dawlib_phase : idea of keeping some of this stuff inside a "mini-library".
dawlib_chore : start of a bigger "job" / "chore" for real-world work. This should quite soon be moved to its own library, 'cos it will grow too big and important.
dawlib_base :
dawlib_coll : examples and hacking material of collections.
dawlib_stream : examples of simple Streams.
dawo.dart :
dawo_app : DawoApp class
dawo_dew : small helper functions for cl (command line) testing.
dawo_src : file created by the stagehand plugin app
dawo_tools : helper tools, stamps and so on.
A simple usage example:
    import 'package:dawo/dawo.dart';

    var dawoApp = new DawoApp();
    // var da = new DawoApp(); // shorter way
    // da.xxx  // play with variables
    var dd = new DawoDev();
    // dd.xxx

    main() {
      var awesome = new Awesome();
    }
Play in the IDE editor with the alphabet a..z to see what variables are available. See how the common namespace is polluted with unnecessary stuff, and try to find a way to organize them into meaningful classes.
Like: bLib
morn, night, day
make, init, blake
Joker
sleep(), start() stop() render()
ride() roll()
Please file feature requests and bugs at the issue tracker.
example/dawo_example.dart
    // Copyright (c) 2017, Heikki K Lappalainen. All rights reserved. Use of this source code
    // is governed by a BSD-style license that can be found in the LICENSE file.

    import 'package:dawo/dawo.dart';

    main() {
      var awesome = new Awesome();
      print('awesome: ${awesome.isAwesome}');

      /// testing dawo
      // daw... ok
    }
Add this to your package's pubspec.yaml file:
    dependencies:
      dawo: ^0.0.1
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dawo/dawo.dart';
NAME
MooseX::Lexical::Types - automatically validate lexicals against Moose type constraints
VERSION
version 0.01
SYNOPSIS
    use MooseX::Types::Moose qw/Int/;   # import Int type constraint
    use MooseX::Lexical::Types qw/Int/; # register Int constraint as lexical type

    my Int $foo;  # declare typed variable
    $foo = 42;    # works
    $foo = 'bar'; # fails
DESCRIPTION
This module allows you to automatically validate values assigned to lexical variables using Moose type constraints.
This can be done by importing all the MooseX::Types constraints you need into your namespace and then registering them with MooseX::Lexical::Types. After that the type names may be used in declarations of lexical variables via
my.
Values assigned to variables declared with type constraints will be checked against the type constraint.
At runtime the type exports will still return
Moose::Meta::TypeConstraints.
There are a couple of caveats:
- It only works with imported MooseX::Types
Using normal strings as type constraints, like allowed in declaring type constraints for attributes with Moose, doesn't work.
- It only works with scalars
Things like
my Str @foo will not work.
- It only works with simple named types
The type name specified after
my needs to be a simple bareword. Things like
my ArrayRef[Str] $foo will not work. You will need to declare a named type for every type you want to use in
my:
    subtype ArrayOfStr, as ArrayRef[Str];
    my ArrayOfStr $foo;
- Values are only validated on assignment
my Int $foo;
will not fail, even if $foo now holds the value undef, which wouldn't validate against Int. In the future this module might also validate the value on the first fetch from the variable to properly fail when using an uninitialized variable with a value that doesn't validate.
AUTHOR
Florian Ragwitz <rafl@debian.org>
This software is copyright (c) 2009 by Florian Ragwitz.
This is free software; you can redistribute it and/or modify it under the same terms as perl itself. | https://metacpan.org/pod/MooseX::Lexical::Types | CC-MAIN-2015-22 | refinedweb | 318 | 51.48 |