Using WebControls In ASP.NET MVC Views – Part 3
Check out my newest blog post about using WebControls inside of MVC (source code included)🙂
I meant to have this post out this morning, but I was busy working on getting my personal website online — Please check it out and tell me what you think!
Up until now, all of the posts in this series have been tests to see if we could make the process work. Now, finally, we get into some real code that we can actually use. I’ll take just a little bit of space to explain how it works and how to set it up, and then use the rest of the post to explain how to use it.
Getting The Code Setup
Before we discuss how to use the code, let’s briefly go over how to set it up.
To start, the code you need to download is found at the end of the post. When you download it, put both the .cs and the .aspx file into your project. By default the code expects the .aspx to be in your Views directory, but you can move it; if you do, then you need to update MVCWebFormExtensionMethods._WebFormRenderControl.TEMPLATE_PAGE with the new location (yeah, it’s a long name :)).
Optionally, you can make a couple of changes to your Web.config to make sure that the extension method and MvcControlGroup are available to the rest of your application without needing to add anything special to each of your pages.
<configuration>
  <!-- snip... -->
  <system.web>
    <pages>
      <controls>
        <!-- snip... -->
        <add tagPrefix="mvc" namespace="MvcWebControls" assembly="YourProjectAssemblyName" />
      </controls>
      <namespaces>
        <!-- snip... -->
        <add namespace="MvcWebControls"/>
      </namespaces>
    </pages>
  </system.web>
  <!-- snip... -->
</configuration>
I’ve hidden most of the Web.config in this example, so make sure to add the sections you see above; don’t simply replace your entire config.
The rest of this post goes over some demos found on my website; you may want to follow along to better understand what is going on.
Example 1 — Simple Postbacks
<html>
<head runat="server">
  <title>Simple Postback Example</title>
</head>
<body>
  <h2>Just A Simple Form</h2>
  <% this.Html.WebForm((form) => { %>
    <% form.RenderControl(new TextBox()); %>
    <% form.RenderControl(new Button() { Text = "Save" }); %>
    <hr />
    <% form.RenderControl(new Calendar()); %>
  <% }); %>
</body>
</html>
For the first example, we want to see whether a few simple WebControls post their values back the way we would expect. In this example we add a TextBox, a Button and a Calendar. Pay attention to how this HtmlHelper method works.
Notice that the method accepts a delegate to render the controls onto the page. We do it like this because it allows us to provide markup on our page alongside the WebControls. The delegate receives a single parameter, the MvcWebForm. Think of this as a very, very simple version of a WebForm.
The MvcWebForm gives you access to the page that is being used to render the controls. This is important to remember and I’ll go over it in the next section.
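The shape of the helper described above can be sketched roughly like this (a hypothetical reconstruction only; the real implementation ships in the download at the end of the post, and the Execute method name here is an assumption):

```csharp
// Hypothetical sketch -- the actual code is in MvcWebForms.zip.
// "Execute" is an assumed name for the step that runs the WebForms lifecycle.
public static class MVCWebFormExtensionMethods
{
    // Accepts a delegate so plain markup can be written alongside the WebControls.
    public static void WebForm(this HtmlHelper html, Action<MvcWebForm> render)
    {
        MvcWebForm form = new MvcWebForm();

        // The caller adds controls (and inline markup) through the MvcWebForm.
        render(form);

        // Run the controls through a second, hidden Page instance and
        // write the rendered output into the view's response.
        form.Execute();
    }
}
```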
Example 2 — Using Events
This is handy so far but unless we are using some events for our controls, nothing much has changed. Let’s look at another example.
<html>
<head runat="server">
  <title>Using Events</title>
</head>
<body>
  <h2>Simple Button Click Event</h2>
  <% this.Html.WebForm((form) => { %>
    <strong>Clicked : </strong>
    <% Label text = form.RenderControl(new Label() { Text = "0" }); %>
    <br /><br />
    <% Button submit = form.RenderControl(new Button() { Text = "Add" });
       submit.Click += (s, e) => {
           int value = 0;
           int.TryParse(text.Text, out value);
           text.Text = (++value).ToString();

           //post back this information
           form.Page.ClientScript.RegisterStartupScript(
               typeof(Page),
               "confirm",
               string.Format("alert('Updated to {0}');", text.Text),
               true
           );
       };
    %>
  <% }); %>
</body>
</html>
In this example we are assigning a Click event to the button we added to the page. You’ll notice when we run this example that the value is incremented by one on each press.
Notice that we use form.Page instead of this.Page. When you use this code you must remember that you are working with two separate Page instances: one for the View and the other for rendering the controls. In this example we use RegisterStartupScript to display an alert box. If you had used this.Page.ClientScript, nothing would happen.
It’s a subtle difference that you are going to want to keep in mind while you use this class.
Example 3 — MvcControlGroup
So far all of our examples have only used a couple of controls that were all rendered inline with the rest of the page. While this is going to work in most situations, there are places where it won’t work as well.
The example below shows how you can use the MvcControlGroup to group controls together — for example, with the Wizard control.
<html>
<head runat="server">
  <title>Using Events</title>
</head>
<body>
  <h2>Using Wizard Control with MvcControlGroup</h2>
  <div>
    <% this.Html.WebForm((form) => { %>
      <mvc:MvcControlGroup runat="server">
        <asp:Wizard runat="server">
          <WizardSteps>
            <asp:WizardStep runat="server">
              <p>Leaving fields blank will catch the validators.</p>
              <strong>Name</strong>
              <asp:TextBox ID="name" runat="server" />
              <asp:RequiredFieldValidator ControlToValidate="name" runat="server" />
              <br /><br />
              <strong>City</strong>
              <asp:TextBox ID="city" runat="server" />
              <asp:RequiredFieldValidator ControlToValidate="city" runat="server" />
            </asp:WizardStep>
            <asp:WizardStep runat="server">
              <strong>Date Requested</strong>
              <asp:Calendar ID="date" runat="server" />
            </asp:WizardStep>
            <asp:WizardStep runat="server">
              <h2>Confirm This Order?</h2>
              <p>This step maps to a Controller Action before submitting</p>
              <% MvcWebForm.Current.Action = "/Projects/MvcWebForms/Submit"; %>
              <% MvcWebForm.Map(() => new {
                   name = name.Text,
                   city = city.Text,
                   date = date.SelectedDate.ToShortDateString()
                 }); %>
            </asp:WizardStep>
          </WizardSteps>
        </asp:Wizard>
      </mvc:MvcControlGroup>
    <% }); %>
  </div>
</body>
</html>
Just like with other controls, the MvcControlGroup must be rendered in a separate page instance. This control moves it to the correct context before it is processed.
If you remember, the idea behind this project was to allow people to use WebControls with Mvc — not just use WebControls in Mvc. On the last step we point our form to a Controller Action and use a method called Map to format our information for the postback.
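Based on the markup above, a plausible shape for Map (again, a hypothetical sketch; the reflection details are assumptions, not the author’s actual code) is a static method that captures a lambda producing an anonymous object, whose properties become the posted form fields:

```csharp
// Hypothetical sketch: Map captures a lambda whose anonymous-object
// properties ("name", "city", "date" in the Wizard example) become
// name/value pairs for the final postback to the Controller Action.
public static void Map(Func<object> values)
{
    object bag = values();
    foreach (var prop in bag.GetType().GetProperties())
    {
        string fieldName = prop.Name;
        string fieldValue = Convert.ToString(prop.GetValue(bag, null));
        // ...emit a hidden input (or equivalent) so the Controller Action
        // receives fieldName=fieldValue when the form posts...
    }
}
```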
When the user finishes the Wizard, they can post the finished form back to the correct Controller Action!
**whew** — that’s a lot to take in!
Limitations
There are a couple limitations that still need to be worked through…
- HTML output must be well formed – If your control outputs bad HTML, this thing will croak. The reason is that, instead of using Regular Expressions to try to match content, the output is parsed using XDocument.
- PageMethods and AJAX calls will probably not work – Controls like the UpdatePanel, or code using PageMethods, will most likely not work anymore. Some additional work is going to be required to make those work.
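The first limitation is easy to reproduce on its own: XDocument.Parse accepts only well-formed XML, so a typical HTML void tag makes it throw (a minimal illustration, separate from the MvcWebForms code itself):

```csharp
using System;
using System.Xml.Linq;

class WellFormedDemo
{
    static void Main()
    {
        // Well-formed XHTML parses fine.
        XDocument ok = XDocument.Parse("<div><br /></div>");
        Console.WriteLine(ok.Root.Name); // prints "div"

        try
        {
            // Plain HTML's unclosed <br> is not well-formed XML, so this throws.
            XDocument.Parse("<div><br></div>");
        }
        catch (System.Xml.XmlException)
        {
            Console.WriteLine("Parse failed: markup is not well-formed");
        }
    }
}
```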
I’m sure there is more that needs to be looked at but at this point it seems as if it can be used — at least for testing.
Please let me know if you have any ideas, feedback or suggestions! I’m going to continue to work on this and see what kinds of improvements can be made.
Source Code
Download MvcWebForms.zip.
[…] Part 3 […]
Using WebControls In ASP.NET MVC Views – Part 2 « Yet Another WebDev Blog
August 13, 2009 at 12:36 am
[…] Part 3 […]
Using WebControls In ASP.NET MVC Views – Part 1 « Yet Another WebDev Blog
August 13, 2009 at 12:37 am
[…] to VoteUsing WebControls In ASP.NET MVC Views – Part 3 (8/12/2009)Wednesday, August 12, 2009 from webdev_hbWebControls In MVC Series Part 1 Part 2 Part 3 I mean’t […]
ASP.NET MVC Archived Blog Posts, Page 1
August 17, 2009 at 12:26 am
Really nice and useful piece of work.
I tried to use your code to place a MVC view, but I failed miserably.
The first hurdle was when “XDocument.Parse” failed because of a noscript block in the html generated by the ReportViewer.
I removed the noscript block using a regex before parsing it, but the parser now complains about the way the ReportViewer is setting up the parameters:
“Reserved.ReportViewerWebControl.axd?Mode=true&ReportID=7c…”
Do you have any solution for this issue?
ogrig
August 21, 2009 at 1:42 am
Sorry about the previous message, I should have checked what I wrote before submitting.
What I wanted to say was: “I tried to use your code to place a reportViewer control in a MVC view”
ogrig
August 21, 2009 at 1:45 am
Interesting — I’d love to help you sort this out. Can you send me your e-mail address using my contact form?
If I had to take a guess off the top of my head, it might be from the ampersand (&) signs in your output. You might need to escape those as “&amp;”.
In any case, this is something that the MvcWebForm class should do for you on its own. Thanks for letting me know about it!
webdev_hb
August 21, 2009 at 7:16 am
In the meantime I managed to get a bit further.
First, I put the offending script in a CDATA block.
I can now create a view with an empty report (just “form.RenderControl(new ReportViewer());” and nothing else).
Next, when I tried to use the control with a local rdlc report, the ReportViewer generated a script with 2 onclick events, one of them for the DocumentMapButton.
The easiest way out was to just disable the document map button.
After a bit of struggle I now have a page with a partially functional ReportViewer control.
Mind you, I only tried it with very simple reports. But at least I can display a report page and navigate between pages.
What does not work right now:
– the document map.
– printing the report. When I tried to print, the control tried to install something and I stopped. It would have probably worked, but I wasn’t very interested at the time.
– exporting. This is a bit unpleasant, but it should not be very difficult to solve. And I can always set ShowExportControls to false, provide my own controls on the page and then stream the PDF or Excel content on the server.
I will send you my email address shortly
ogrig
August 23, 2009 at 7:07 pm
Unfortunately, you’re having to jump through a lot more hoops than I had intended. It would help if you could include some of what the output should look like (if you were to render it on a normal page).
Right now I’m working on changing from using XDocument.Parse to simply overriding the render event of the container control and outputting the content based on the rendered content – which should get around the error.
In any case, thanks for the feedback – this will help a lot in fixing this code.
webdev_hb
August 23, 2009 at 7:43 pm
Twitter Trackbacks for Using WebControls In ASP.NET MVC Views – Part 3 « Yet Another WebDev Blog [somewebguy.wordpress.com] on Topsy.com
August 29, 2009 at 5:49 am
[…] with 5 comments Check out a new post about using WebControls inline with MVC that actually works with postbacks! […]
WebForms And MVC In Harmony — Almost… « Hugoware
September 10, 2009 at 10:27 am
You really rock, man.. that was quite interesting..
Nithin Mohan T K
October 5, 2009 at 6:54 am
Great! Had fun messing around with it (but haven’t found a use for it yet though :))
webdev_hb
October 5, 2009 at 9:48 am
[…] Using WebControls With Inline MVC […]
MVC Post Round-Up « Hugoware
October 5, 2009 at 7:31 pm
[…] I say ‘pretty much’ because I’ve done a lot of work to let people use WebControls inline with MVC code. […]
Render Partial — But With Arguments! « Hugoware
October 26, 2009 at 11:39 pm
I was able to get the control working via your MvcControlGroup and it works well. However, I ran into one gotcha.
I get a “Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.” error if I try to use data binding and something like the following:
<asp:Label Text='’ />
inside of the MvcControlGroup blob.
NOTE: the code uses the namespace MvcWebForms but your
article says to use MvcWebControls.
Todd Smith
November 11, 2009 at 2:17 pm
Pretty interesting — the basic idea behind this code is that it moves all of the controls into a separate page instance, performs the page lifecycle and then returns the content to the correct containers.
It would make sense that the contexts wouldn’t exactly match up if they are using inline rendered code (like this.Databind()) because this is referring to the first page instance. There is actually a property on the this.Html.WebForm argument that represents the instance of the second rendered page context.
I’m not sure if that code is open source or editable but that seems like the most probable situation…
Do you have any source code to share?
webdev_hb
November 11, 2009 at 8:58 pm
I’ve tried to implement your control with ASP.NET MVC on VS2010, but I’m getting a “Filtering is not allowed” error on this line at runtime:
HttpContext.Current.Response.Filter = new _WebFormStreamFilter();
Has anyone come across this?
(I’m running on the ASP.NET development server on Server 2008)
David
March 26, 2010 at 5:04 am
David, I’ve heard other reports about the Filter not working anymore in MVC2 – I’m not sure if this is a .NET 4.0 issue or if it is MVC2 at this point.
Right now I’m trying to figure out a different way to handle this.
In the meantime, you might check out this blog post on binding a Model to WebControls. Not exactly the same thing, but it might get you started in a different direction.
hugoware
March 27, 2010 at 9:15 am
Red Hat Bugzilla – Bug 808188
The system with kernel 3.3 freezes when using web-camera
Last modified: 2012-08-28 14:04:03 EDT
Created attachment 573763 [details]
<lsusb -v -d 2232:1020> command output
Description of problem:
After the kernel was updated to <3.3.0-4.fc16.i686.PAE>, using a web-camera isn't possible, because the OS freezes at the moment when some application tries to open </dev/video0>.
___________________________________
Version-Release number of selected component (if applicable):
kernel-PAE-3.3.0-4.fc16.i686
___________________________________
How reproducible:
always
___________________________________
Steps to Reproduce:
<$ vlc v4l2://> (the freeze also occurs with MPlayer & other applications using </dev/video0>)
___________________________________
Actual results:
The OS is not responding. The <$ ping> command, launched from another machine to this one, says "Destination Host Unreachable".
___________________________________
Expected results:
The VLC player shows the picture from the web-camera. The OS works fine.
___________________________________
Additional info:
I've taken the following steps to work around this problem:
I've extracted the <uvcvideo> module source (uvc_ctrl.c/uvc_driver.c/uvc_entity.c/uvc_isight.c/uvc_queue.c/uvc_status.c/uvc_v4l2.c/uvc_video.c/uvcvideo.h) from the <kernel-3.2.10-3.fc16.src.rpm> package and compiled the module with this <Makefile>:
___________________________________
obj-m := uvcvideo.o
uvcvideo-objs := uvc_driver.o uvc_queue.o uvc_v4l2.o uvc_video.o uvc_ctrl.o \
	uvc_status.o uvc_isight.o uvc_entity.o
___________________________________
Then I've copied the compiled <uvcvideo.ko> to the </lib/modules/$(uname -r)/kernel/drivers/media/video/uvc/> directory and run <# depmod -a>.
Now the <uvcvideo> module does not cause the OS freezing.
I have the same issue with kernel version 3.3.0-8.fc16.i686.PAE and the workaround solves the problem. Camera device ID is 090c:3717.
This trouble exists with kernel 3.3.1-3.fc16.i686.PAE too.
I am seeing this in kernel-PAE-3.3.1-5.fc16.i686 with camera device id 13d3:5126.
webcam unusable since the move to 3.3.
This trouble exists with kernel 3.3.2-1.fc16.i686.PAE too.
Per
$ su -c 'echo options uvcvideo nodrop=1 >> /etc/modprobe.d/uvcvideo.conf && rmmod uvcvideo && modprobe uvcvideo'
For my kernel, 3.3.2-1.fc16.i686.PAE, this effectively works around the crash on my machine (ASUS EEE PC 1201N).
Very good, options uvcvideo nodrop=1 fixes the crash on Thinkpad T60, using
3.3.2-1.fc16.i686.
(In reply to comment #7)
>.
* Samsung RV520-s0l // Fixed
I no longer reproduce the bug as of kernel-PAE-3.3.2-6.fc16.i686.
uvcvideo nodrop=1 is still needed here running kernel-3.3.2-6.fc16.i686 to avoid a hard freeze. PAE shouldn't make the difference. Thinkpad T60.
----
cat reset.c
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>
///home/nargis/reset /dev/bus/usb/003/002
int main(int argc, char **argv)
{
	const char *filename;
	int fd;
	if (argc < 2) {
		printf("usage: reset <device>, e.g. /home/nargis/reset /dev/bus/usb/003/002\n");
		return 1;
	}
	filename = argv[1];
	fd = open(filename, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	ioctl(fd, USBDEVFS_RESET, 0);
	close(fd);
	return 0;
}
-- Compile this and run with parameters
#./reset /dev/bus/usb/003/002
This way I "reconnect" the web cam for programs. But it's not good!!
(In reply to comment #11)
>?
It's not this bug.
Hello, this is 100% reproducible for me. I can reproduce it with "vlc v4l2://". Sometimes I need to run the command only once, sometimes 5 times or more. Sooner or later a hard-freeze happens and nothing responds, not even magic-SysRq keys (I have them enabled). No backtrace on screen, no backtrace in the logs.
I remember having those freezes after 3.3.0 on fc16, so I always kept a 3.2 kernel for working. Now that I've preupgraded to fc17, 3.4.3 seems to manifest the same bug as well.
Webcam is a laptop-embedded Lenovo EasyCamera (according to lsusb -v).
Graphics is Intel embedded GM45 (Xorg.0.log) modeset via the i915 module (lspci -v).
Please let me know how to get some useful debug info so we can fix it. Thanks.
OK, I managed to reproduce it in console mode by running "mplayer -vo null tv:///dev/video0" several times. A backtrace was shown, part of which is captured in the first attached file.
Even though the system was completely locked-up, pressing the power button produced another backtrace which was repeating with a period of about 30s. This is shown on 2nd attached file. Kernel is 3.4.3-1.fc17.PAE.
Created attachment 593618 [details]
first backtrace
Created attachment 593619 [details]
second backtrace, after pressing power button, repeating every 30s
I am having the same problem on a Thinkpad T60p with an external HP USB Webcam. Running cheese completely locks the system up, requiring a power cycle to recover. There is no visible output from either cheese or on the system log. This is under 3.4.3-1.fc17.i686 and cheese 3.4.2 (current from updates-testing).
What information can I gather that would help solve this problem?
@Peter: Did you try Brad Rubenstein's fix? (Comment #5 for this bug). It solves the problem on my Lenovo Z560 with 3.4.2-4.fc17.i686.PAE.
The workaround in comment #5 gets past the lock-up. However, cheese still doesn't work very well with the camera. Skype also has problems. I guess I'll have to dig deeper and try to find out whether these problems are with the applications or with the kernel/driver.
We can add another camera to the list. The integrated camera on Asus EeePC 1005PE, vendor 13d3 product 5111 IMC Networks fails as described above.
I can also confirm that the "nodrop" workaround fixes the problem on my current kernel, 3.4.6-2.fc17.x86_64.
Earlier I submitted a report to the kernel bugzilla regarding this problem. See bug 45031.
(In reply to comment #21)
>?
It was, yes.
Thank you for letting us know the problem is resolved.
Thank you too. | https://bugzilla.redhat.com/show_bug.cgi?id=808188 | CC-MAIN-2017-43 | refinedweb | 1,002 | 70.29 |
Comments for Open Letter to the Borland C++ Developer Community

Open Letter to the Borland C++ Developer Community (Bertin KOUADIO, 2007-02-01):
I do think that Borland should simply try to make up for this ugly spot. First, I was disappointed some time ago when they abandoned dBase. After about 5 years using it, I had to close everything up and take a 100 rad curve! Now with CBX, again I am on the roadside. There is nothing I can do about that. It is just like: ok, we bought trash and huh! look, just forget about it and buy something else! Well!!! See, I think YOU MUST GIVE those who bought CBX and registered their product a pass to C++ Builder 2006!

VCL for BuilderX?? (Craven Weasel, 2004-11-18):
Is there any news on VCL for BuilderX? Following the procedure suggested by Borland, we compiled a simple BCB6-VCL application with BuilderX, with success; but we couldn't see the form inside BuilderX, nor modify it... Is this the compatibility announced by Borland? Will we have to modify the form in BCB6 and then "use" it from BuilderX? Is this the new RAD environment for VCL??? Is there anybody at Borland answering this (and similar) comments to the open letter??
Open Letter to the Borland C++ Developer Community (Silent Bob, 2004-07-08):
Hi everybody,
I read all the comments and I'd like to say that I agree 100% with most of them. I must sadly say that this is my farewell to Borland, having used BCB from its first release (1.0) to the last (6.0).
1) BCB-X is SHIT.
2) I'll never code with Java unless I'm dying of hunger and so forced to.
3) While .NET might be the next technology and M$ is desperately trying to force everybody to use it, I can't bear the loss of efficiency and the lack of low-level OS control access which M$ is slowly but constantly forcing developers to accept by coding with .NET and exposing newer OS APIs in that way.
4) I cannot accept that C++ (and C) are almost treated like assembly nowadays and cursed to become so in the very near future.
5) I'm tired of re-learning a new, incomplete, slower, powerless and to a certain extent more limited way to do (??? or desperately and frustratingly attempt to do) *exactly* the same things.
6) Bored of trashing over a dozen years of deep knowledge and porting once again my applications to a new(!?) coding paradigm.
7) VCL is real elegant OOP; MFC is counter-nature.
8) The only defect of VCL is being coded in Object Pascal, with a bunch of bugs which could usually be overcome with a reasonable effort.
9) What I was expecting from BCB-X was for it to be the next version of BCB 6.0, but with a new VCL-like framework written in pure ANSI C++, with some new features for easy porting/bridging with the old VCL code — for which, by the way, I wrote several dozen custom controls for the most different kinds of apps.
10) I'll stick to BCB6 as long as possible, looking in the meanwhile for a free C++ compiler/framework solution, trying to find/write at least a decent RAD designer somehow.
Borland, I honestly think that you have silently been bribed by M$, and it wouldn't surprise me if one day you were officially acquired by the latter. Perhaps anti-trust issues are keeping M$ from doing that?
Being an electronics engineer and having designed hw devices for the most diverse kinds of needs, ranging from simple controllers to real-time audio, video, biomedical data signal acquisition, computing and logging (but this is generally true for any kind of electrical signal) to microwave devices, I had to cope with the design and coding of drivers, 2d/3d primitive drawing libraries, 2d/3d custom editors for different needs, a highly optimized MPEG4 codec, commercial DB apps, shell extensions and namespaces, and lots of COM/ActiveX libraries, objects and controls. Not to mention the many other things I liked to design, code and sometimes even sell (outside of my ordinary developer workplace).
So fuck JAVA, .NET and the like; I want/need/claim the freedom to write highly optimized code (assembly/C/C++ or even Object Pascal), and use .NET only for the crappy commercial (mostly) DB applications. Fuck u Borland, it has been nice while it lasted but now it's over.

Some advise from a long time Borland developer... (Craven Weasel, 2004-05-31):
I've been writing code since Borland Pascal in DOS and I have always preferred to use Borland Pascal/Delphi and C++BuilderX since they offered a truly RAD IDE better than MS, with the same relative ease of use you might expect from VB — except you could actually create compiled executables rather than using a pseudo-code interpreter like VB had (similar to the new .NET and Java). Now it seems since MS invested a large chunk of cash in Borland, .NET has become the main focus...
This is, I think, the issue the majority of developers have with Borland: we expected Borland would continue to offer a compiler producing compiled exes and just add .NET as an add-on at most, but I think Borland has lost sight of what they had to offer that was unique to them. If we all wanted to write .NET applications we could just use C# or VB; .NET is so Windows-centric that using Delphi almost seems silly.
C++BuilderX is ok and a Java IDE is no issue for me, but where's the RAD IDE? Perhaps I'm missing something, but isn't this thing just a simple compiler? What was the point of releasing CBX with no RAD support?
A lot of people, including myself, have put a lot of time and effort into VCL components. I have written a few hundred in my day and think it's a great way to write apps. CLX is a good idea too, but now it's just dropped off the planet???
What about all the companies out there that sell VCL controls? There's only a few hundred of them; what are they to do now?
I remember filling out a questionnaire some time back about what I'd like to see in the future from Borland, but I have to wonder why Borland didn't ask its users what it should do before it decided to go all out with .NET and drop VCL/CLX completely.
We want the option to compile binary exes, not .NET applications; if we are forced to use .NET, what do we need Borland for?
Borland finally got things right in D7, I thought, with the addition of DataSnap, yet still had support for the old BDE.
And just offered a preview of a .NET compiler.
If Borland wants to make CBX into something worth using, add support for VCL and CLX components before you release version 2, and make sure the IDE has all the features CB6 had. And for Delphi 9, bring back compiled exes and VCL/CLX support, with .NET as just another project type, and do so soon!
People like myself do not want to just discard years of VCL code they have written, and I don't feel we should have to. You should make tools WE want to use, not tools you think we should use. Please add support for compiled exes, VCL and CLX ASAP before everyone jumps ship to MS.

Open Letter to the Borland C++ Developer Community (Velko Iltchev, 2004-05-26):
Velko Iltchev
Lecturer on "Compiler Construction" and "Object-oriented Programming"
Department of Computer Systems, Technical University Plovdiv
St. Petersburg blvd. 61a, 4002 Plovdiv, Bulgaria, Europe
Tel.: 00359-32-627544, Fax: 00359-32-629018, Mobile: 0898-410233
Email: iltchev@tu-plovdiv.bg or v.iltchev@ieee.org
=====================================================
TO: Borland Developer Network
email: cpp_open_letter@borland.com
SUBJECT: Open letter and other "good" news
Plovdiv, 27.05.2004
Dear Colleagues,
I am deeply disgusted by the news I have received lately from Borland. I begin to ask myself: "Is that Borland anymore, OR is that a branch of Microsoft?" I have written on Borland products since 1987. In this year Borland released Turbo C ver.
1.0. I began to write in C much earlier, but I did NOT like the Microsoft C compiler for MS-DOS. I invested much time and money to go in depth into Borland C++ Builder and VCL. I show this product to my students as the best example of object-oriented and component-style programming. I recommend Borland products to my students and encourage them to use Borland products in their future work as young specialists.
BUT: What I saw as Borland C++ BuilderX (at a first glance) was a text editor, which was able to call compilers written by other companies.
MOREOVER: In your open letter from October 29, 2003 you wrote: […]
HOW SHOULD I UNDERSTAND THIS?
My questions:
1. Why should I continue to invest time and money in Borland products? Who else can give me the guarantee that Borland will NOT change its "Long Term Product Line Strategy" again?
2. How long will Borland exist as an independent company?
3. Should I continue to pledge my name as a university professor recommending Borland products to my students?
With best regards:
V. Iltchev

re: Open Letter to the Borland C++ Developer Community (Zar sha, 2004-04-25):
What do you expect to hear :)? They screwed up and have got nothing to say.
>Fire your strategy folks borland, they will drown you.<
Any new product that has "C++" in its name should target this:
- ANSI/ISO compliance - as much as possible
- good support for boost.org-like stuff - the new C++ style
Actually BuilderX does it, but does it in an ugly way. As for the BCB series - they are all based on the originally-Delphi VCL. VCL doesn't use all C++ features and this is pretty annoying. Still, it is a good tool for GUI utils. However, the compiler is quite shitty - try compiling the cryptopp.com lib.
re: Open Letter to the Borland C++ Developer Community Zar sha 2004-04-25T12:10:23-07:00 2004-04-25T12:10:23-07:00 re: Open Letter to the Borland C++ Developer Community Agree 110% :))) re: Open Letter to the Borland C++ Developer Community Noel Carlos Hernandez Perez 2004-04-17T14:51:19-07:00 2004-04-17T14:51:19-07:00 re: Open Letter to the Borland C++ Developer Community Oh, my friend.I'm at your side, from the times of OWL.Borland fall INTO THE DARK SIDEAGAIN !!!, AGAIN !!!, AGAIN !!!What can we do ? My users are asking about supporting .NET and colaborations in the future, I don´t know what to say.Can we suing them ? re: Open Letter to the Borland C++ Developer Community Noel Carlos Hernandez Perez 2004-04-17T14:31:38-07:00 2004-04-17T14:31:38-07:00 re: Open Letter to the Borland C++ Developer Community My full company are in crossroad now.C++ Builder X is terrible slow, even in our dual processor computers, PIV 3200 MHZ, 2 GB of RAM.The IDE are very UGGLY.What's the point about compiling my code in four diferent compilers ? Does borland realize that their own compiler is a crap ?What's the point about making linux, unix, mac apps ? Does this uggly things exist today ? I didn't see any of my neigbors installing linux because stability :-)Who care about the IDE is written in Java or Delphi or Visual Basic, the problems will persist because the programmers at borland continue to being mediocre. 
Remember: "You won't be clever if you speak Latin." Borland is wasting its time. They don't sell Kylix, because it is utopia; C++BuilderX and wxWidgets are utopia too, another kind of communism: the communism of computer software. Borland's consumers are tired of bugs; Borland's programmers can't do anything about it, they simply switch to Java sorts of things because it is well PAID. Money moves the world, and programmers from Borland seem to shift to Microsoft or get their salary increase programming in Java.
Do you really think that they make any application using C++Builder, Kylix or Delphi? I can't count how many COMMERCIAL applications are and will be written using MFC and Visual C++, just because it is a solid product, and the guys at Microsoft know what they want (money of course, but I mean at this point "what they want in the future for this product"). But things are changing at Microsoft too. A GREAT SHADOW IS EXTENDING AT MSFT (like the Lord of the Rings): one lead programmer, very frustrated about its Java implementation, suddenly came up with .NET, C# and so on, slow versions of a crap in which everybody can apply a part of his knowledge of every language and a new fresh API. This guy is really mad, and we are too, because someday we will be writing thousands of lines for this platform in C# Builder or Microsoft .NET. So we are at THE END OF DAYS of fast tools and programming. I will wait for a Windows version made in C#, a crap that needs a 10 GHz machine and 10 GB of memory; it's just a matter of time. C++ native compilers will become like ASSEMBLY language in our days, and our children will say "Is daddy mad? Why is he still programming in C++?". Hahaha. If we want to be productive we must switch to C#, and .NET is pretty slow like Java or so, but who cares about that? Does anybody ask you why you don't write your programs in assembly for faster operations?
Does any product come with a banner like "compiled with Intel Compiler" or so? Software is about money!!!
(Hello Linux community, are you alive because of Microsoft donations or something?) It's like living in the Soviet Union. Money can do anything, and we must trust in the Microsoft vision. Bill Gates can foresee the future, I'm absolutely sure!!! He can crash anytime; nevertheless he has money and can fix things :-) What to do next?

Open Letter to the Borland C++ Developer Community (Craven Weasel, 2004-03-01)
I'm no longer continuing with Borland products; C++ Builder 6 is the end of the line for me. This sudden and radical shift from Borland is most unwelcome. Borland has been on the decline ever since their "Inprise" nonsense; they have lost their direction as a company. I'm changing over to Intel C++; there are other choices. Borland had great products for a while, but getting locked into the VCL or the old OWL libraries that end up in nothing is a big mistake, and now the X. Good luck Borland, you need it!!!
There appears to be an issue with threading on NPTL-enabled systems. So far this has mainly been demonstrated by Mono applications, but it could potentially affect other threaded apps as well. I'm posting this as a new bug, as the original bug focused (incorrectly) on Mono. Here's a fresh start.
Here is a sample program to better demonstrate the issue, submitted by Canal Vorfeed on bug #54603. Here's Canal:
----------------------------------------------------------------------------
Ok, I've spent a few hours on this issue and found where the REAL problem lies.
Good news: it's NOT Boehm's GC and it's NOT mono.
Bad news: it's a problem with glibc itself :-(
I started with the "GC is broken with nptl" sample by Peter Johanson and played with it for a few hours. In the end I just removed Boehm's GC completely (just plain old malloc) and... it still deadlocks somewhere.
So we should stop playing with mono and try to address the REAL issue: deadlocks somewhere in the nptl library itself :-( Unfortunately I'm not a glibc guru.
Program:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>	/* malloc */
#include <unistd.h>	/* sleep */

void *thread_function (void *args)
{
	int j;
	char *str;
	printf("starting thread!\n");
	for (j = 0; j < 24; j++)
	{
		str = (char *)malloc(240);
		printf("malloc in thread !\n");
	}
	pthread_yield();
	return NULL;
}

int main (void)
{
	int i;
	pthread_t thread;
	char *str;
	for (i = 0; i < 1000; i++)
	{
		pthread_create(&thread, NULL, thread_function, (void *)i);
		pthread_yield();
		str = (char *)malloc(240);
		printf("%d threads\n", i);
	}
	sleep(10);
	return 0;
}
$ ./testpgm | grep 'malloc in thread' | wc -l
9168
Without nptl I got the expected 24000 ...
-------------------------------------------------------------------
Nice work tracking this down, let's have the toolchain team take a look
at this or try and push it upstream.
*** Bug 54603 has been marked as a duplicate of this bug. ***
*** Bug 63713 has been marked as a duplicate of this bug. ***
Just a side comment: it does not look like a deadlock problem after all: it's just that glibc/nptl cannot create more than 384 threads (even if THREADS_MAX in glibc is 100000 and /proc/sys/kernel/threads-max is 14336). Since LinuxThreads CAN create the requested 1000 threads on the same system (in fact it's not even two systems: I have Gentoo with glibc/LinuxThreads installed in a chroot jail) it's not a kernel issue.
Not sure about mono: can it actually hit this limit in real-world applications? If the answer is no, then the mono bug should be reopened...
Oops. Of course even if it's not a deadlock condition it still CAN affect mono (and a lot of other programs): it's not 384 simultaneous threads but rather 384 threads for the lifetime of the program!
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>	/* sleep */

void *thread_function (void *args)
{
	pthread_exit(NULL);
}

int main (void)
{
	int i;
	pthread_t thread;
	for (i = 0; i < 1000; i++)
	{
		if (pthread_create(&thread, NULL, thread_function, (void *)i))
		{
			perror("testpgm");
		} else {
			// int status;
			// pthread_join(thread, (void **)&status);
			pthread_yield();
			printf("%d threads\n", i);
		}
		sleep(1);
	}
	return 0;
}
$ ./testpgm
1 threads
2 threads
...
380 threads
381 threads
testpgm: Cannot allocate memory
testpgm: Cannot allocate memory
...
testpgm: Cannot allocate memory
testpgm: Cannot allocate memory
$
If you uncomment pthread_join the program works OK, but it's OK not to wait for a thread to stop with pthreads, right?
P.S. The sleep is there to make sure no race conditions exist at all: a thread is created and executed, and only then is the next one started. You can remove it from the sample; it does not change anything...
I've tried both these test programs on my system (which is NPTL-enabled, and exhibits the referenced mono 'bug'). Surprisingly, they both work fine, easily reaching the end of the for loop. I attempted to replace the for loop with a while( 1 ), and they both sat there quite happily chugging into the tens of thousands of threads created. Also, it should be mentioned that in all the cases where I've seen this bug exhibited (in the C# thread test posted elsewhere, and in mono's xsp/mod-mono-server) the threads numbered in the 10s, and each of them was in one of the various wait functions (pthread_cond_timedwait, wait_sem, etc.).
In mono's case at least, the bug can be 'resolved' by disabling the garbage collector (libgc); obviously this isn't a viable solution. On the mono bugzilla, I've mentioned that the freeze occurs when the garbage collector initiates a world stop (stops all threads except the current) and begins a full garbage collection. However, for some reason the thread performing the garbage collection then also waits.
When running the various test programs in gdb, I found that if I put a breakpoint in the libgc code that performed the garbage collection, the breakpoint would be reached repeatedly, but with a period of one or two seconds in between each occurrence. This happened when it would normally freeze completely. I'm not one hundred percent up on my unix signal handling etc., but it seems as if the breakpoint occurring may have altered some of the process's signals, perhaps waking one of the other threads, and therefore alleviating the freeze.
Perhaps some details are different enough to see a difference in behaviour.
I've tried to recompile the system with gcc 3.3.4-r1 but it made no difference...
Anyway: it looks like something is going on with thread synchronisation. In my case, with pthread_join I can avoid the problems; in mono's case, the GC tries to freeze all threads to safely do its garbage collecting. I think the real problem is in nptl resource allocation: from the outside it looks like there is something wrong with mutexes or the like, and sometimes nptl just cannot do the right thing. Since "normal" programs (like MySQL or Apache) behave quite predictably with regard to thread communication (threads are created and work on separate tasks, sometimes with something like a "manager" thread, but never in "normal" programs will some random thread try to communicate with some other equally random thread!) they do not trigger the bug in nptl. But the GC can stop all threads to do its work literally at ANY TIME, and thus it triggers the bug from time to time...
For reference:
Portage 2.0.50-r11 (gcc34-x86-2004.2, gcc-3.4.1, glibc-2.3.4.20040808-r0, 2.6.9-rc1-mm4)
=================================================================
System uname: 2.6.9-rc1-mm4 i686 Intel(R) Pentium(R) 4 CPU 3.00GHz
Gentoo Base System version 1.5.3
Autoconf: sys-devel/autoconf-2.59-r4
Automake: sys-devel/automake-1.8.5-r1
ACCEPT_KEYWORDS="x86 ~x86"
AUTOCLEAN="yes"
CFLAGS="-O2 -pipe -march=pentium4 -funroll-loops -ffast-math -fomit-frame-pointer -ffloat-store -fforce-addr -ftracer -mmmx -msse -msse2 -mfpmath=sse"
CONFIG_PROTECT_MASK="/afs/C /etc/afs/afsws /etc/gconf /etc/terminfo /etc/env.d"
CXXFLAGS="-O2 -pipe -march=pentium4 -funroll-loops -ffast-math -fomit-frame-pointer -ffloat-store -fforce-addr -ftracer -mmmx -msse -msse2 -mfpmath=sse"
USE="X509 Xaw3d aalib acl afs alsa apm arts avi berkdb bitmap-fonts bzlib calendar caps cjk crypt cups curl dga directfb djbfft doc emacs encode esd f77 fam fbcon fdftk firebird foomaticdb ftp gcj gd gd-external gdbm ggi gif gmp gnome gpm gtk gtk2 guile iconv imap imlib immqt immqt-bc ipv6 jack java javamail javascript jikes jpeg junit kde kerberos krb4 ldap libcaca libg++ libwww mad makecheck memlimit mhash mikmod mime ming mmx motif mozilla mpeg mysql nas ncurses nls nptl objc odbc oggvorbis opengl oss pam pcre pdflib perl php pic png postgres pwdb python qdbm qt quicktime readline ruby samba sasl sdl session skey slang slp snmp socks5 spell sqlite sse sse2 ssl svga tcltk tcpd tetex threads tiff truetype unicode x86 xinerama xml xml2 xmlrpc xmms xpm xprint xsl xv zlib"
This bug is probably a dup/dep of bug #45115
@solar: I fail to see how a compile time bug about g++ not liking one of the
pthread.h macros is related to this runtime threading problem. Care to
elaborate?
"Care to elaborate?" Sorry, not really.
I'm not an nptl fan. If you/others want to try to solve this bug then I'm
simply suggesting taking a peek at that bug, which may or may not be
related to the problems you're having here. Either way, if you're using nptl
then that bug concerns you.
can someone here please test glibc 2.3.4.20040916?
Muine (a mono app) appears to still lock up under glibc 2.3.4.20040916.
reaching 102 threads with latest glibc in portage 0918
adding as much info as I can think of
mono apps seem happier; since it is 1020 I will try to see if they lock up how they used to in the past; compiled mono -r2 with nptl after remerging 0918 with nptl
ok, here is all the info
first, while I wrote the above, muine must have reached 1020 threads, as it locked up in the end
nullzone ~ # /lib/libc.so.6
GNU C Library 20040918 release version 2.3.4, by Roland McGrath et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 3.4.2 (Gentoo Linux 3.4.2-r1, ssp-3.4.1-1, pie-8.7.6.5).
Compiled on a Linux 2.6.7 system on 2004-09-19.
Thread-local storage support included.
For bug reporting instructions, please see:
<>.
nullzone ~ #
nullzone ~ # emerge info
Portage 2.0.51_rc1 (default-linux/x86/2004.2/gcc34/2.6, gcc-3.4.2, glibc-2.3.4.20040918-r0, 2.6.9-rc1 i686)
=================================================================
System uname: 2.6.9-rc1 i686 AMD Athlon(tm) XP 3200+
Gentoo Base System version 1.5.3
ccache version 2.3 [enabled]
Autoconf: sys-devel/autoconf-2.59-r4
Automake: sys-devel/automake-1.8.5-r1
Binutils: sys-devel/binutils-2.15.90.0.1.1-r3
Headers: sys-kernel/linux26-headers-2.6.7-r4
Libtools: sys-devel/libtool-1.5.2-r5
ACCEPT_KEYWORDS="x86 ~x86"
AUTOCLEAN="yes"
CFLAGS="-O2 -march=athlon-xp"
FEATURES="autoaddcvs ccache cvs sandbox sfperms"
GENTOO_MIRRORS=""
MAKEOPTS="-j2"
PKGDIR="/usr/portage/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY="/overlay"
I know "longer until the mono apps freeze" != a fix, but it goes in the right direction.
Just a FYI for all:
1020 threads
starting !
1021 threads
1022 threads
1023 threads
1024 threads
then the usual endless list
s/102/1020/ sorry
The more I've looked into this, the more apparent it is that the issue is not
a bug in NPTL; instead there are some applications (mono and
libgc in particular) that were developed expecting certain behaviour in
LinuxThreads that is nonstandard, or erroneous at best.
I've straced the program given in comment #4 on my NPTL system and got:
$ strace ./testpgm2
... ... ...
30236 write(1, "253 threads\n", 12) = 12
30236 mmap2(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xbf25b000
30236 mprotect(0xbf25b000, 4096, PROT_NONE) = 0
30236 clone(child_stack=0xbfa5bb28, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID|CLONE_DETACHED, parent_tidptr=0xbfa5bbf8, {entry_number:6, base_addr:0xbfa5bbb0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}, child_tidptr=0xbfa5bbf8) = 30491
30236 sched_yield( <unfinished ...>
30491 _exit(0) = ?
30236 <... sched_yield resumed> ) = 0
30236 write(1, "254 threads\n", 12) = 12
30236 mmap2(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
30236 dup(2) = 3
30236 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR)
30236 fstat64(3, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
30236 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xbfa5c000
30236 _llseek(3, 0, 0xbffff358, SEEK_CUR) = -1 ESPIPE (Illegal seek)
30236 write(3, "nptl: Cannot allocate memory\n", 29) = 29
30236 close(3) = 0
... ... ...
So the 255th pthread_create seems to fail because the mmap of 8M for
the thread stack fails. I thus tried to lower the maximum stack size to
1M (ulimit -s 1024), and the program terminates correctly, as the kernel
is able to allocate all the requested memory for thread stacks (well, it
pretends to, because I don't have 1G of memory :).
Doing a little math, it appears the problem comes from the very large
default maximum stack size used. With an 8M stack size and a 2G/2G split
(which seems to be how I compiled my kernel), a single process cannot
map more than 2G of memory, that is 256 * 8M ... The original poster
seems to have a kernel compiled with a 3G/1G split, which allows a single
process to map 3G (~ 380 * 8M) of memory ...
This is the same problem that affects the original program. By reducing
the stack size, I get the awaited 24000 answer:
$ ulimit -s 8192
$ ./testpgm | grep 'malloc in thread' | wc -l
6120
$ ulimit -s 512
$ testpgm | grep 'malloc in thread' | wc -l
24000
So the failure of those sample programs is simply caused by memory exhaustion,
and this is no bug in glibc (maybe there is a problem with the very large
default stack size limit, but that is another issue); it is something
else that is causing the original problem (after reading the different
links, it seems to come from a difference in semantics between LinuxThreads
and NPTL, but that's just a guess) ...
Yes, the problem is "memory exhaustion", true. But why the hell is memory
exhausted in the first place?
You are missing a fundamental fact: the program from comment #4 WILL NEVER CREATE
MORE THAN TWO THREADS! The sleep is there for a reason: it GUARANTEES that this
program will never create more than two threads on any sane system.
If glibc is 100% sure every stack for every dead thread should be kept around
forever then you are 100% correct. But it's just stupid: why is all this dead
memory kept around forever? I can see ABSOLUTELY NO REASON to keep the stack
for every thread that ever existed in the program allocated forever. More: if
you add pthread_join all the stacks are correctly deallocated. To me it
looks like some strange error in the nptl locking logic and NOT like normal
behaviour.
If you think about this, you'll see it's EXACTLY the same problem: the thread
is gone (and there is NO associated kernel process in memory; check with /proc!)
but a half-dead zombie is still in memory, so no cleanup...
From pthread_join(3) man page:
When a joinable thread terminates, its memory resources (thread
descriptor and stack) are not deallocated until another thread
performs pthread_join on it. Therefore, pthread_join must be
called once for each joinable thread created to avoid memory
leaks.
And since pthread_create creates threads in the joinable state by default
(that is the case here, since the 'pthread_attr_t *' parameter is NULL)
and they are not detached by a call to pthread_detach(3), the behaviour
is correct. The bug is in the program posted in comment #4, not in
glibc!
If you want the thread's memory (stack, ...) to be automatically freed
by the libc when the thread exits, you must create it in the detached
state or call pthread_detach(3). Otherwise, you must call pthread_join(3).
I think the rationale for keeping the thread memory is to be able to get
the return value from the thread when calling pthread_join. But in any
case, there is no bug, since this is the behaviour POSIX seems to specify.
As Defresne Sylvain pointed out this is a red herring. See bug 54603 and
upstream for the fix.
That fix is actually a workaround. It seems to be a gcc/glibc issue:
Basically there is a threading/exception problem caused by either of them.
True, I guess, the underlying problem is that gcc -fexceptions is incompatible
with glibc+NPTL. However mono doesn't need -fexceptions (it's pure C,
-fexceptions is for mixed C/C++ code) so removing it will fix the mono build.
If you want to try to fix the gcc/-fexceptions/glibc/nptl incompatibility then
I think that would best be done on a separate bug, or upstream.
Okay, I've just committed a change to the package.masked mono-1.0.2-r1 which
adds the -fexceptions fix as posted on the Ximian bug. Still requires gcc-3.4,
unfortunately.
I'd like to get some wide testing on that, and then un-package.mask that
version, so it is only ~x86. At that point I want to get mono-1.0.2 and
friends (gtk-sharp, etc.) in, so that people with NPTL systems will only have to
deal with marking a few select ebuilds ~x86 to have it functioning, and
everyone else can have a stable 1.0 mono.
TEST! (/me gets on hands and knees and begs)
Seems to be ok for me with mono 1.0.2-r1+muine 0.6.3
works fine with muine-from-cvs,
glibc-2.3.4.20041021
gcc-3.4.2-r3
mono-1.0.2-r1
gtk-sharp-1.0.2
Ok, one last *MAJOR* test request:
I've just committed mono-1.0.6 to the tree, but I've added it to package.mask as well.
I've done some testing with both muine and BLAM! on this, and I'm 90% sure we're OK with NPTL *without* using gcc-3.4! The new version deps on gcc-3.3.5, which I believe may have been the piece that contained the fix. "May" being the operative word.
So I need some testing from people using gcc-3.3.5, as if this is the case, one of the major things holding up marking mono-1.0.x stable will disappear. Thanks all for persevering.
1.0.6 actually seems to have some threading issues unrelated to the NPTL issue.
I've committed mono-1.0.5-r4, which includes these changes for the 1.0.5 series,
and it is working here and for one other user wonderfully. *PLEASE* test this, as
it's the most realistic target for stabilization, given the flakiness of 1.0.6.
Marking this TEST-REQUEST.
I built mono-1.0.6 with gcc-3.3.5 & glibc-2.3.4.20050125 (nptl-only glibc).
It works fine for me...
muine doesn't hang any more :)
-------------------------------------------------------
cafri ~ # emerge -pv glibc mono =sys-devel/gcc-3.3.5-r1
These are the packages that I would merge, in order:
Calculating dependencies ...done!
[ebuild R ] sys-libs/glibc-2.3.4.20050125 -build -debug -erandom -hardened
(-multilib) +nls -nomalloccheck +nptl +nptlonly +pic -userlocales 0 kB
[ebuild R ] dev-dotnet/mono-1.0.6 -debug +nptl 0 kB
[ebuild R ] sys-devel/gcc-3.3.5-r1 -bootstrap -boundschecking -build -debug
-fortran -gcj -gtk -hardened (-ip28) (-multilib) -multislot (-n32) (-n64) +nls
-nocxx +objc* -static (-uclibc) 0 kB
Total size of downloads: 0 kB
cafri ~ # gcc -v
Reading specs from /usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.5/specs
Configured with: /var/tmp/portage/gcc-3.3.5-r1/work/gcc-3.3.5/configure --enable-version-specific-runtime-libs --prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc-bin/3.3.5 --includedir=/usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.5/include --datadir=/usr/share/gcc-data/i686-pc-linux-gnu/3.3.5 --mandir=/usr/share/gcc-data/i686-pc-linux-gnu/3.3.5/man --infodir=/usr/share/gcc-data/i686-pc-linux-gnu/3.3.5/info --with-gxx-include-dir=/usr/lib/gcc-lib/i686-pc-linux-gnu/3.3.5/include/g++-v3 --host=i686-pc-linux-gnu --disable-altivec --enable-nls --without-included-gettext --enable-__cxa_atexit --enable-clocale=gnu --with-system-zlib --disable-checking --disable-werror --disable-libunwind-exceptions --enable-shared --enable-threads=posix --disable-multilib --disable-libgcj --enable-languages=c,c++
Thread model: posix
gcc version 3.3.5 (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, pie-8.7.7.1)
cafri ~ # gcc-config -l
[1] i686-pc-linux-gnu-3.3.5 *
[2] i686-pc-linux-gnu-3.3.5-hardened
[3] i686-pc-linux-gnu-3.3.5-hardenednopie
[4] i686-pc-linux-gnu-3.3.5-hardenednossp
[5] i686-pc-linux-gnu-3.4.3
[6] i686-pc-linux-gnu-3.4.3-hardened
[7] i686-pc-linux-gnu-3.4.3-hardenednopie
[8] i686-pc-linux-gnu-3.4.3-hardenednossp | http://bugs.gentoo.org/63734 | crawl-002 | refinedweb | 3,423 | 64.61 |
Advanced Namespace Tools blog
14 January 2018
Rewriting ventiprog progressive backup script and related tools
The Advanced Namespace Tools can be used with any root fileserver, but have always been designed with fossil+venti as their primary environment, even with the change to 9front-only support. One of the components is a set of scripts to help manage venti+fossil replication. I always felt that there was insufficient support tooling around venti+fossil, which contributed to the frustrations felt by users during the era in which many of them experienced failures of the system. Post 2012, I believe the old bugs have been cleaned up so reliability is much improved, but good backup hygiene is still important.
Basic concepts: arenas data and fossil rootscores
At its core, venti is an append-only log of data blocks. These blocks are stored in "arenas" partitions, which are further subdivided into arenas of 512mb each. The required "index" partition and optional "bloom" partition can be regenerated if needed from the raw arenas data. The fossil file server uses venti as a backing store for presenting a conventional hierarchical file system to the user. Because the data about how the blocks are assembled into a filesystem is itself part of the structure stored in venti, there is an ultimate "master block" representing the root of the entire filesystem, which can be referenced by its sha1 hash fingerprint. (Yes, venti does need an update to a newer hash function.) Whenever fossil stores a snapshot to venti, it records this fingerprint, known as a "rootscore". You can view it like this:
term% fossil/last /dev/sdE0/fossil
vac:98610970109e9a89f19dbc69c9d7b5b196d31459
If you can connect to a venti which holds a copy of the relevant data blocks, you can instantly create a new clone-fossil by issuing a command like:
fossil/flfmt -v 98610970109e9a89f19dbc69c9d7b5b196d31459 /dev/sdE1/fossil
Because fossil only retrieves blocks from venti as they are needed, a freshly flfmt -v fossil can be created instantly and occupies approximately no space. As data is read/written, blocks will be retrieved and cached locally by the fossil until the next snapshot is made, at which point they are considered free for reuse. The key point to understand is that a fossil/venti backup system can ignore everything about the fossils apart from the rootscores, all the data is contained in the venti arenas.
The tool for copying venti arenas is the venti/wrarena command. In action, it looks like this:
venti/wrarena -o 5369503744 /dev/sdF0/arenas 0x90c937a
That command reads from the sub-arena of the arenas partition /dev/sdF0/arenas that begins at the -o offset, starting at clump offset 0xnumber, and sends the data to the venti specified in the $venti environment variable. The first offset is the location of the given sub-arena within the overall arenas partition, and the clump offset is how many data clumps into that sub-arena to begin reading. A freshly initialized venti which has not dumped any backup data will begin with a command more like:
That command references the very first sub-arena and the very beginning of the data clumps. The offsets are calculated by retrieving the index file from venti's http server and subtracting 8192 from the first parameter after the bracket in output lines such as:
arena='arenas0' on /dev/sdF0/arenas at [802816,537673728)
The wrarena command only traverses a single sub-arena. To fully back up an active venti, you will need to traverse the entire list of active arenas.
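For illustration only, here is how that offset arithmetic can be expressed outside of rc; the helper name is made up and is not part of the ANTS tooling:

```python
def wrarena_offset(index_line):
    """Given a venti index line such as
         arena='arenas0' on /dev/sdF0/arenas at [802816,537673728)
    return the -o value for venti/wrarena: the sub-arena's start
    minus the 8192 bytes the scripts subtract."""
    start = int(index_line.split('[')[1].split(',')[0])
    return start - 8192

line = "arena='arenas0' on /dev/sdF0/arenas at [802816,537673728)"
print(wrarena_offset(line))   # 794624, matching the wrcmd shown above
```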
The old ventiprog script
For the first ANTS release back in 2013, I had created a script called 'ventiprog' which automated this process somewhat. It did this by maintaining a list of rootscores under /n/9fat/rootscor and also a wrcmd at /n/9fat/wrcmd. The "fossilize" command would store the most recent fossil archival snapshot rootscore, and the ventiprog command would issue the stored wrcmd, track the new clump offset, and write an updated wrcmd with the new clump offset. You would then copy your stored rootscores to the backup venti.
The main annoyance with this approach was creating and maintaining the wrcmd. As soon as you filled up a given arena, you would need to manually create a new wrcmd by finding the offset of the next sub-arena within the arenas partition. If you were dealing with large quantities of data, this was obviously way too much extra work for a supposedly automated progressive backup/replication system. Because my data needs were comparatively small, mostly plaintext source code and writing, the system worked ok for me, but clearly needed to be upgraded.
backup.example from /sys/src/cmd/venti/words
There was already a more sophisticated progressive venti backup script present in the distribution, an example created by Russ Cox. I wasn't making use of it because when I glanced at it, I didn't fully understand what it was doing and what assumptions it was making about its environment. When it was time to upgrade the ANTS venti replication system though, it was the right place to start. The core logic of the original script is as follows:
. bkup.info

fn x {
	echo x $*
	y=$1
	if(~ $#$y 0){
		$y=0
	}
	echo venti/wrarena -o $2 $3 $$y
	end=`{venti/wrarena -o $2 $3 $$y | grep '^end offset ' | sed 's/^end offset //'}
	if(~ $#end 1 && ! ~ $$y $end){
		$y=$end
		echo '#' `{date} >>bkup.info
		whatis $y >>bkup.info
	}
}

hget | awk '
/^index=/ { blockSize=0+substr($3, 11); }
/^arena=/ { arena=substr($1, 7); }
/^ arena=/ { start=0+substr($5, 2)-blockSize;
	printf("x %s %d %s\n", arena, start, $3); }
' |rc
This is not the easiest script in the world to grok on first reading, is it? I'll try to gloss this for you by describing what actually happens when you run it, assuming you start with a blank bkup.info file.
- We hget the venti http/index file, which is a description of the arenas and their location and how much data they contain
- process that data with awk, producing for each subarena a line formatted like: x arenas0 794624 /dev/sdF0/arenas
- we pass this series of 'x' lines to rc, which executes the previously defined 'x function'
- the x function issues a venti/wrarena command using the data contained in each line
- the ending clump offset of each subarena is stored in the bkup.info file
- the next time the script is run, the x function uses the stored ending clump offsets of each arena as a parameter, to avoid resending previously written data
- the bkup.info file is updated if the new ending clump offset is different than the previous
The tricky and clever part of the script is the $y variable, which is used to track the clump offsets. After the script has been run several times, the bkup.info file will look like this:
# Thu Jan 11 05:17:54 GMT 2018
arenas0=0x1fbbff7b
# Thu Jan 11 05:18:12 GMT 2018
arenas1=0x1fe6b78f
# Thu Jan 11 05:19:47 GMT 2018
arenas2=0x1fcc3e9e
# Thu Jan 11 05:19:55 GMT 2018
arenas3=0x12f3126
# Thu Jan 11 05:35:42 GMT 2018
arenas3=0x3602e6e
# Thu Jan 11 09:13:28 GMT 2018
arenas3=0x1fc9ddc9
# Thu Jan 11 09:15:22 GMT 2018
arenas4=0x1fc92873
# Thu Jan 11 09:17:43 GMT 2018
arenas5=0x1fba5742
As a result, when the ". bkup.info" command is executed, we create a set of variables tracking the final clump offset of each arena. As the x function executes, it uses a clever layer of indirection: $y is going to be an arenas label like "arenas3" and therefore $$y is going to be whatever the value of arenas3= was set to in the bkup.info script.
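In other words, bkup.info behaves like a small persistent dictionary from arena name to last-written clump offset. An illustrative sketch (not part of the scripts):

```python
offsets = {}                        # stands in for the variables set by '. bkup.info'

def record(arena, end_offset):
    # 'whatis $y >>bkup.info' appends a line like arenas3=0x3602e6e
    offsets[arena] = end_offset

def resume_from(arena):
    # 'if(~ $#$y 0){ $y=0 }' defaults a never-seen arena to offset 0
    return offsets.get(arena, 0x0)

record('arenas3', 0x12f3126)
record('arenas3', 0x3602e6e)        # a later run supersedes the older offset
print(hex(resume_from('arenas3')))  # 0x3602e6e
print(hex(resume_from('arenas4')))  # 0x0
```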
Rewriting ventiprog based on backup.example
So, this script is obviously a better foundation for progressive venti replication, because it does the tricky part - calculating the partition and clump offsets and issuing a wrcmd for each partition - rather than relying on manual management. As written, though, it has a major issue on current 9front, which I'm not sure existed in the old labs environment. The main problem is that the native awk port used by 9front does 32-bit signed arithmetic, which means that the partition offset values go wrong as soon as you are more than 2gb into the arenas partition. I fixed this by removing the arithmetic handling from awk and adding this logic:
[awk without arithmetic]' |sed 's/\,.*\)//g' >/tmp/offsetparse
numarenas=`{wc /tmp/offsetparse}
count=0
while(! ~ $count $numarenas(1)){
	data = `{grep arenas^$count /tmp/offsetparse}
	offset=`{echo $data(3) ' - 8192' |pc |sed 's/;//g'}
	echo x $data(2) $offset $data(4) >>/tmp/offsets.$mypid
	count=`{echo $count ' + 1' |hoc}
}
cat /tmp/offsets.$mypid |rc
These shenanigans just use the "pc" calculator in 9front to process the arithmetic and produce output in the same form as the original script. The other changes to backup.example are minor - adding optional parameters for what filename the bkup.info file will be so that you can back up to multiple ventis by having an info file for each one, letting you specify a different address rather than 127.1:8000 for the http request for the venti index file, and letting you specify a 'prep' file in addition to the info file, in case there is preconfiguration necessary. For instance, on the vultr vps nodes that I use for the remote portion of my grid, my prep file looks like:
webfs
bind -b '#S' /dev
bind -b '#l1' /net.alt
bind -b '#I1' /net.alt
This is necessary to set up the environment after cpu in from my terminal. The bkup.info file equivalents are named 9venti.info and ventiauth.info according to the names of the target ventis, and each begins with a line like:
venti=/net.alt/tcp!10.99.0.12!17034
This makes the wrarena command appropriately target the venti server reachable on the /net.alt private network of the vps datacenter. To replicate the data from my main venti to two other ventis I issue these commands:
ventiprog -p bkup.prep -f ventiauth.info
ventiprog -f 9venti.info
Due to deduplication, I can run a similar set of commands on the other ventis, so the datablocks present in each venti are mirrored in the others, and any fossil server can init a copy of the most recent snapshot backed by any of these ventis.
Rootscore management with fossilize and storescore
The other component of the system that needed some improvement was management of fossil rootscores. A single 'rootscor' file is suboptimal when a single venti is backing up multiple different fossils: you want to track each fossil's scores separately. The fossilize script was modified so that it stores rootscores in the 9fat at scores.$fsname, with fsname set by default to $sysname. A companion script, storescore, is used by the venti servers to receive output from fossilize, store it in their own 9fat partition under a parallel name, and pass it along to a possible next machine. A sample command, used after ventiprog has replicated the venti datablocks between machines:
rcpu -h fs -c fossilize |storescore |rcpu -h backup -c storescore
This is run on the primary venti server and retrieves the most recent rootscore from 'fs' via the fossilize command, then stores it locally with storescore, and continues the chain by sending the output of storescore (which is identical to its input) to storescore on the backup venti.
Testing revealed a mildly annoying issue - the fact that the default 9front namespace file does not bind '#S' to dev, and once it is bound, it is visible at the standard path to remote machines during cpu. This caused the partition-finding logic to error because the target machines when rcpu'd into would see the controlling machine's disk files under /dev/sd* rather than their own. The fix was to both make sure that #S was unmounted from the controlling machine prior to issuing fossilize and storescore commands via rcpu, and add logic to bind '#S' if needed inside those scripts, after a rfork en.
With these new, improved scripts a fully automated replication workflow for venti data and fossil rootscores is easy to put in place. | http://doc.9gridchan.org/blog/180114.ventiprog.rewrite | CC-MAIN-2021-21 | refinedweb | 2,051 | 55.47 |
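The key property that makes the chaining work is that storescore behaves as a pass-through: it records its input and emits it unchanged, so stages can be piped together indefinitely. That pattern can be sketched generically in shell (this is an illustrative stand-in, not the real 9front rc script):

```shell
# Stand-in for storescore: append stdin to a local score log, then pass it
# through unchanged, so stages can be chained with pipes as in the rcpu
# pipeline above. SCORELOG is an assumed name for the local store.
storescore() {
    tee -a "${SCORELOG:-/tmp/scores.local}"
}

printf 'vac:0123456789abcdef\n' | storescore | storescore
```

Each stage records the score locally; the final stdout is still the original score, ready for the next hop.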
There are many cases in computer programming we need to search for an element in a collection of data. The Unordered Linear Search is one of the simplest searching algorithms.
def unordered_linear_search(search_list, data):
    # Note: range(len(search_list) - 1) would skip the last element,
    # so we iterate over every index.
    for i in range(len(search_list)):
        if search_list[i] == data:
            # We have found the item at index i
            return i
    # If we got here, we didn't find the item
    return None


if __name__ == '__main__':
    names = ['Bob Belcher', 'Linda Belcher', 'Tina Belcher',
             'Gene Belcher', 'Louise Belcher']

    # Compare against None explicitly: a match at index 0 is falsy,
    # so a bare "if linda_index:" would misreport it as not found.
    linda_index = unordered_linear_search(names, 'Linda Belcher')
    if linda_index is not None:
        print('Linda Belcher found at', linda_index)
    else:
        print('Linda Belcher was not found')

    teddy_index = unordered_linear_search(names, 'Teddy')
    if teddy_index is not None:
        print('Teddy was found at', teddy_index)
    else:
        print('Teddy was not found')
In this code, we loop through the search_list object until we find a match. If we find a match, we return the index of the item's location. Otherwise, we return Python's None object to indicate the item wasn't found. When run, the program produces this output.
Linda Belcher found at 1 Teddy was not found
3 thoughts on “Unordered Linear Search—Python”
For the search for loop, why not just the following? Is there any reason?
for s in search_list:
if s == data:
# We have found the item at index i
return i
We are looking for the index, not the element. Normally I would use indexOf, but this post is geared towards students who have to learn this algorithm. Good question!
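For completeness, the built-in approach the reply alludes to — Python's list.index — raises ValueError rather than returning None, so a small wrapper (hypothetical, not from the post) is needed to match the contract of the hand-written version:

```python
def linear_search_builtin(search_list, data):
    """Same contract as unordered_linear_search, using list.index."""
    try:
        return search_list.index(data)
    except ValueError:
        return None


names = ['Bob Belcher', 'Linda Belcher', 'Tina Belcher',
         'Gene Belcher', 'Louise Belcher']
print(linear_search_builtin(names, 'Linda Belcher'))  # 1
print(linear_search_builtin(names, 'Teddy'))          # None
```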
You are developing an application that interacts with a SQL database, and you need to defend against SQL injection attacks.
SQL injection attacks are most common in web applications that use a database to store data, but they can occur anywhere that a SQL command string is constructed from any type of input from a user. Specifically, a SQL injection attack is mounted by inserting characters into the command string that creates a compound command in a single string. For example, suppose a query string is created with a WHERE clause that is constructed from user input. A proper command might be:
SELECT * FROM people WHERE first_name="frank";
If the value "frank" comes directly from user input and is not properly validated, an attacker could include a closing double quote and a semicolon that would complete the SELECT command and allow the attacker to append additional commands. For example:
SELECT * FROM people WHERE first_name="frank"; DROP TABLE people;
Obviously, the best way to avoid SQL injection attacks is to not create SQL command strings that include any user input. In some small number of applications, this may be feasible, but more frequently it is not. Avoid including user input in SQL commands as much as you can, but where it cannot be avoided, you should escape dangerous characters.
SQL injection attacks are really just general input validation problems. Unfortunately, there is no perfect solution to preventing these types of attacks. Your best defense is to apply strict checking of input?even going so far as to refuse questionable input rather than attempt to escape it?and hope that that is a strong enough defense.
There are two main approaches that can be taken to avoid SQL injection attacks:
In many cases, user input needs to be used in queries such as looking up a username or a message number, or some other relatively simple piece of information. It is rare to need any character in a user name other than the set of alphanumeric characters. Similarly, message numbers or other similar identifiers can safely be restricted to digits.
With SQL, problems start to occur when symbol characters that have special meaning are allowed. Examples of such characters are quotes (both double and single), semicolons, percent symbols, hyphens, and underscores. Avoid these characters wherever possible; they are often unnecessary, and allowing them at all just makes things more difficult for everyone except an attacker.
In SQL parlance, anything that is not a keyword or an identifier is a literal. Keywords are portions of a SQL command such as SELECT or WHERE, and an identifier would typically be the name of a table or the name of a field. In some cases, SQL syntax allows literals to appear without enclosing quotes, but as a general rule you should always enclose literals with quotes.
Literals should always be enclosed in single quotes ('), but some SQL implementations allow you to use either single or double quotes ("). Whichever you choose to use, always close the literal with the same character with which you opened it.
Within literals, most characters are safe to leave unescaped, and in many cases, it is not possible to escape them. Certainly, with whichever quoting character you choose to use with your literals, you may need to allow that character inside the literal. Escaping quotes is done by doubling up on the quote character. Other characters that should always be escaped are control characters and the escape character itself (a backslash).
Finally, if you are using the LIKE keyword in a WHERE clause, you may wish to prevent input from containing wildcard characters. In fact, it is a good idea to prevent wildcard characters in most circumstances. Wildcard characters include the percent symbol, underscore, and square brackets.
You can use the function spc_escape_sql( ), shown at the end of this section, to escape all of the characters that we've mentioned. As a convenience (and partly due to necessity), the function will also surround the escaped string with the quote character of your choice. The return from the function will be the quoted and escaped version of the input string. If an error occurs (e.g., out of memory, or an invalid quoting character chosen), the return will be NULL.
spc_escape_sql( ) requires three arguments:
input: The string that is to be escaped.

quote: The quote character to use. It must be either a single or double quote. Any other character will cause spc_escape_sql( ) to return failure.

wildcards: If this argument is specified as 0, wildcard characters recognized by the LIKE operator in a WHERE clause will not be escaped; otherwise, they will be. You should only escape wildcards when you are going to be using the escaped string as the right-hand side for the LIKE operator.
#include <stdlib.h>
#include <string.h>

char *spc_escape_sql(const char *input, char quote, int wildcards) {
  char       *out, *ptr;
  const char *c;

  /* If every character in the input needs to be escaped, the resulting string
   * would at most double in size.  Also, include room for the surrounding
   * quotes.
   */
  if (quote != '\'' && quote != '\"') return 0;
  if (!(out = ptr = (char *)malloc(strlen(input) * 2 + 2 + 1))) return 0;
  *ptr++ = quote;
  for (c = input; *c; c++) {
    switch (*c) {
      case '\'': case '\"':
        if (quote == *c) *ptr++ = *c;
        *ptr++ = *c;
        break;
      case '%': case '_': case '[': case ']':
        if (wildcards) *ptr++ = '\\';
        *ptr++ = *c;
        break;
      case '\\': *ptr++ = '\\'; *ptr++ = '\\'; break;
      case '\b': *ptr++ = '\\'; *ptr++ = 'b';  break;
      case '\n': *ptr++ = '\\'; *ptr++ = 'n';  break;
      case '\r': *ptr++ = '\\'; *ptr++ = 'r';  break;
      case '\t': *ptr++ = '\\'; *ptr++ = 't';  break;
      default:   *ptr++ = *c;   break;
    }
  }
  *ptr++ = quote;
  *ptr = 0;
  return out;
}
On 03/17/2010 06:58 PM, Grégoire Sutre wrote:
> Just out of curiosity: is there a reason for this behavior of
> AC_CHECK_DECLS, which, quoting the manual, is unlike the other
> 'AC_CHECK_*S' macros?

Yes - it looks cleaner to write code like:

  if (CONDITION)
    do_something ();

than

  #if CONDITION
  do_something ();
  #endif

but that only works if CONDITION is always defined to 0 or 1. In general, we prefer to avoid #if inside function bodies; it is easier to read code where all the #if have been factored out to file scope level and function bodies are straight-line code. Newer autoconf macros follow this style, and gnulib continues it for most macros defined by gnulib. And even for the older autoconf macros which did not follow this paradigm, it's not too hard to do:

  #ifndef CONDITION
  # define CONDITION 0
  #endif

at the top of the file, to once again keep #if out of function bodies.

--
Eric Blake address@hidden +1-801-349-2682
Libvirt virtualization library
STEVEN PENA13,928 Points
Use input() to ask the user if they want to start the movie. If they answer anything other than "n" or "N", print "Enjoy the show".
super close just need a little help
import sys

start = input("do you want to start the movie? enter N/n")
if start != 'n':
    print("Enjoy the show")
elif start != 'N':
else:
    sys.exit()
1 Answer
Steven Parker194,159 Points
Here's a few hints:
- be careful and consistent with indentation
- do just one test instead of two
- if you convert the input to lower case, you can test for both "n" and "N" at once | https://teamtreehouse.com/community/use-input-to-ask-the-user-if-they-want-to-start-the-movi-if-they-answer-anything-other-than-n-or-n-print-enjo | CC-MAIN-2020-24 | refinedweb | 104 | 77.16 |
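Putting those hints together, one possible shape for the answer. The check is factored into a function here so it can be shown without calling input(); the prompt text is the student's own:

```python
import sys


def respond(answer):
    # One test instead of two: lower-casing the input covers 'n' and 'N'.
    if answer.lower() == 'n':
        return None  # the caller can then sys.exit()
    return "Enjoy the show"


answer = "y"  # stand-in for: input("do you want to start the movie? enter N/n")
message = respond(answer)
if message is None:
    sys.exit()
print(message)  # Enjoy the show
```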
Using Pyephem
Hello,
I’m using pyephem in my normal Python environment to calculate positions of planets and satellites.

It is sorely missing in this environment.
Is there a way to import this pyephem library?

Click on the colored bar that is just above "Branch: master, New pull request". You will see there that this repo is ~73% C code and ~4% Python code. Apple's iOS guidelines make it difficult for Pythonista to support such an app. If a repo is "pure Python" then you have a fighting chance to make them work but when they are largely non-Python, odds are low.
@powersim Hi, have you tried to install it via 'pip install pyephem' with StaSh? To install StaSh, if not installed in Pythonista you use, run the following code:
import requests as r; exec(r.get('').text)
then run the file 'launch_stash.py' installed in the main folder of Pythonista and use pip in the shell.
If the library you are installing depends on code that is not pure Python, such as C or Fortran, there is no way to use it in Pythonista. But if it needs only pure Python libraries, the pip command in StaSh works like the one on a computer: it installs all the needed dependencies.
The only solution I have in mind to run non-pure-Python libraries with Pythonista is to use a remote Python server: add the precompiled library you want to a temporary directory on the server (matching the server's OS — Linux, Windows, ...), and change the Python sys.path so the remote server can find everything it needs in that temporary folder to run your scripts. Not very comfortable, because it is slow (it must download a full precompiled lib every time during independent sessions), but it works.
Regards
- georg.viehoever
Maybe try skyfield, which is pure Python . | https://forum.omz-software.com/topic/4995/using-pyepgem/3 | CC-MAIN-2022-05 | refinedweb | 316 | 68.81 |
This article is written by Ivo Balbaert and Dzenan Ridjanovic, the authors of the book Learning Dart.
A Dart web application runs inside the browser (HTML) page that hosts the app; a single-page web app is more and more common. This page may already contain some HTML elements or nodes, such as <div> and <input>, and your Dart code will manipulate and change them, but it can also create new elements. The user interface may even be entirely built up through code. Besides that, Dart is responsible for implementing interactivity with the user (the handling of events, such as button-clicks) and the dynamic behavior of the program, for example, fetching data from a server and showing it on the screen. We explored some simple examples of these techniques. Compared to JavaScript, Dart has simplified the way in which code interacts with the collection of elements on a web page (called the DOM tree). This article teaches you this new method using a number of simple examples, culminating with a Ping Pong game. The following are the topics:
- Finding elements and changing their attributes
- Creating and removing elements
- Handling events
- Manipulating the style of page elements
- Animating a game
- Ping Pong using style(s)
- How to draw on a canvas – Ping Pong revisited
Finding elements and changing their attributes
All web apps import the Dart library dart:html; this is a huge collection of functions and classes needed to program the DOM (look it up at api.dartlang.org). Let's discuss the base classes, which are as follows:
- The Navigator class contains info about the browser running the app, such as the product (the name of the browser), its vendor, the MIME types supported by the installed plugins, and also the geolocation object.
- Every browser window corresponds to an object of the Window class, which contains, amongst many others, a navigator object, the close, print, scroll and moveTo methods, and a whole bunch of event handlers, such as onLoad, onClick, onKeyUp, onMouseOver, onTouchStart, and onSubmit. Use an alert to get a pop-up message in the web page, such as in todo_v2.dart:
window.onLoad.listen( (e) => window.alert("I am at your disposal") );
- If your browser has tabs, each tab opens in a separate window. From the Window class, you can access local storage or IndexedDB to store app data on the client
- The Window object also contains an object document of the Document class, which corresponds to the HTML document. It is used to query for, create, and manipulate elements within the document. The document also has a list of stylesheets (objects of the StyleSheet class)—we will use this in our first version of the Ping Pong game.
- Everything that appears on a web page can be represented by an object of the Node class; so, not only are tags and their attributes nodes, but also text, comments, and so on. The Document object in a Window class contains a List<Node> element of the nodes in the document tree (DOM) called childNodes.
- The Element class, being a subclass of Node, represents web page elements (tags, such as <p>, <div>, and so on); it has subclasses, such as ButtonElement, InputElement, TableElement, and so on, each corresponding to a specific HTML tag, such as <button>, <input>, <table>, and so on. Every element can have embedded tags, so it contains a List<Element> element called children.
Let us make this more concrete by looking at todo_v2.dart, solely for didactic purposes—the HTML file contains an <input> tag with the id value task, and a <ul> tag with the id value list:
<div>
  <input id="task" type="text" placeholder="What do you want to do?"/>
  <p id="para">Initial paragraph text</p>
</div>
<div id="btns">
  <button class="backgr">Toggle background color of header</button>
  <button class="backgr">Change text of paragraph</button>
  <button class="backgr">Change text of placeholder in input field and the background color of the buttons</button>
</div>
<div><ul id="list"/></div>
In our Dart code, we declare the following objects representing them:
InputElement task; UListElement list;
The following list object contains objects of the LIElement class, which are made in addItem():
var newTask = new LIElement();
You can see the different elements and their layout in the following screenshot:
The screen of todo_v2
Finding elements
Now we must bind these objects to the corresponding HTML elements. For that, we use the top-level functions querySelector and querySelectorAll; for example, the InputElement task is bound to the <input> tag with the id value task using task = querySelector('#task');
Both functions take a string (a CSS selector) that identifies the element, where the id value task will be preceded by #. CSS selectors are patterns that are used in .css files to select elements that you want to style. There are a number of them, but, generally, we only need a few basic selectors (for an overview visit).
- If the element has an id attribute with the value abc, use querySelector('#abc')
- If the element has a class attribute with value abc, use querySelector('.abc')
- To get a list of all elements with the tag <button>, use querySelectorAll('button')
- To get a list of all text elements, use querySelectorAll('input[type="text"]') and all sorts of combinations of selectors; for example, querySelectorAll('#btns .backgr') will get a list of all elements with the backgr class that are inside a tag with the id value btns
These functions are defined on the document object of the web page, so in code you will also see document.querySelector() and document.querySelectorAll().
Changing the attributes of elements
All objects of the Element class have properties in common, such as classes, hidden, id, innerHtml, style, text, and title; specialized subclasses have additional properties, such as value for a ProgressElement method. Changing the value of a property in an element makes the browser re-render the page to show the changed user interface. Experiment with todo_v2.dart:
import 'dart:html';

InputElement task;
UListElement list;
Element header;
Element el; // top-level, so that replacePar() can see what addElements() created
List<ButtonElement> btns;

main() {
  task = querySelector('#task');
  list = querySelector('#list');
  task.onChange.listen( (e) => addItem() );
  // find the h2 header element:
  header = querySelector('.header');                          // (1)
  // find the buttons:
  btns = querySelectorAll('button');                          // (2)
  // attach event handler to 1st and 2nd buttons:
  btns[0].onClick.listen( (e) => changeColorHeader() );       // (3)
  btns[1].onDoubleClick.listen( (e) => changeTextPara() );    // (4)
  // another way to get the same buttons with class backgr:
  var btns2 = querySelectorAll('#btns .backgr');              // (5)
  btns2[2].onMouseOver.listen( (e) => changePlaceHolder() );  // (6)
  btns2[2].onClick.listen( (e) => changeBtnsBackColor() );    // (7)
  addElements();
}

changeColorHeader() => header.classes.toggle('header2');      // (8)
changeTextPara() => querySelector('#para').text = "You changed my text!"; // (9)
changePlaceHolder() => task.placeholder = 'Come on, type something in!';  // (10)
changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr') ); // (11)

void addItem() {
  var newTask = new LIElement();                              // (12)
  newTask.text = task.value;                                  // (13)
  newTask.onClick.listen( (e) => newTask.remove() );
  task.value = '';
  list.children.add(newTask);                                 // (14)
}

addElements() {
  var ch1 = new CheckboxInputElement();                       // (15)
  ch1.checked = true;
  document.body.children.add(ch1);                            // (16)
  var par = new Element.tag('p');                             // (17)
  par.text = 'I am a newly created paragraph!';
  document.body.children.add(par);
  el = new Element.html('<div><h4><b>A small div section</b></h4></div>'); // (18)
  document.body.children.add(el);
  var btn = new ButtonElement();
  btn.text = 'Replace';
  btn.onClick.listen(replacePar);
  document.body.children.add(btn);
  var btn2 = new ButtonElement();
  btn2.text = 'Delete all list items';
  btn2.onClick.listen( (e) => list.children.clear() );        // (19)
  document.body.children.add(btn2);
}

replacePar(Event e) {
  var el2 = new Element.html('<div><h4><b>I replaced this div!</b></h4></div>');
  el.replaceWith(el2);                                        // (20)
}
- We find the <h2> element via its class.
- We get a list of all the buttons via their tags.
- We attach an event handler to the Click event of the first button, which toggles the class of the <h2> element, changing its background color at each click (line (8)).
- We attach an event handler to the DoubleClick event of the second button, which changes the text in the <p> element (line (9)).
- We get the same list of buttons via a combination of CSS selectors.
- We attach an event handler to the MouseOver event of the third button, which changes the placeholder in the input field (line (10)).
- We attach a second event handler to the third button; clicking on it changes the background color of all buttons by adding a new CSS class to their classes collection (line (11)).
Every HTML element also has an attribute Map where the keys are the attribute names; you can use this Map to change an attribute, for example:
btn.attributes['disabled'] = 'true';
Please refer to the following document to see which attributes apply to which element:
Creating and removing elements
The structure of a web page is represented as a tree of nodes in the Document Object Model (DOM). A web page can start its life with an initial DOM tree, marked up in its HTML file, and then the tree can be changed using code; or, it can start off with an empty tree, which is then entirely created using code in the app, that is, every element is created through a constructor and its properties are set in code. Elements are subclasses of Node; they take up a rectangular space on the web page (with a width and height). They have, at most, one parent Element in which they are enclosed and can contain a list of Elements—their children (you can check this with the function hasChildNodes(), which returns a bool). Furthermore, they can receive events. Elements must first be created before they can be added to the list of a parent element. Elements can also be removed from a node. When elements are added or removed, the DOM tree is changed and the browser has to re-render the web page.
An Element object is either bound to an existing node with the querySelector method of the document object or it can be created with its specific constructor, such as that in line (12) (where newTask belongs to the class LIElement—List Item element) or line (15). If useful, we could also specify the id in the code, such as in newTask.id = 'newTask';
If you need a DOM element in different places in your code, you can improve the performance of your app by querying it only once, binding it to a variable, and then working with that variable.
After being created, the element properties can be given a value such as that in line (13). Then, the object (let's name it elem) is added to an existing node, for example, to the body node with document.body.children.add(elem), as in line (16), or to an existing node, as list in line (14). Elements can also be created with two named constructors from the Element class:
- Like Element.tag('tagName') in line (17), where tagName is any valid HTML tag, such as <p>, <div>, <input>, <select>, and so on.
- Like Element.html('htmlSnippet') in line (18), where htmlSnippet is any valid combination of HTML tags.
Use the first constructor if you want to create everything dynamically at runtime; use the second constructor when you know what the HTML for that element will be like and you won't need to reference its child elements in your code (but by using the querySelector method, you can always find them if needed).
You can leave the type of the created object open using var, or use the type Element, or use the specific class name (such as InputElement)—use the latter if you want your IDE to give you more specific code completion and warnings/errors against the possible misuse of types.
When hovering over a list item, the item changes color and the cursor becomes a hand icon; this could be done in code (try it), but it is easier to do in the CSS file:
#list li:hover {
  color: aqua;
  font-size: 20px;
  font-weight: bold;
  cursor: pointer;
}
To delete an Element elem from the DOM tree, use elem.remove(). We can delete list items by clicking on them, which is coded with only one line:
newTask.onClick.listen( (e) => newTask.remove() );
To remove all the list items, use the List function clear(), such as in line (19). Replace elem with another element elem2 using elem.replaceWith(elem2), such as in line (20).
Handling events
When the user interacts with the web form, such as when clicking on a button or filling in a text field, an event fires; any element on the page can have events. The DOM contains hooks for these events and the developer can write code (an event handler) that the browser must execute when the event fires. How do we add an event handler to an element (which is also called registering an event handler)?. The general format is:
element.onEvent.listen( event_handler )
(The spaces are not needed, but can be used to make the code more readable). Examples of events are Click, Change, Focus, Drag, MouseDown, Load, KeyUp, and so on. View this as the browser listening to events on elements and, when they occur, executing the indicated event handler. The argument that is passed to the listen() method is a callback function and has to be of the type EventListener; it has the signature: void EventListener(Event e)
The event handler gets passed an Event parameter, succinctly called e or ev, that contains more specific info on the event, such as which mouse button was pressed in the case of a mouse event, on which object the event took place using e.target, and so on. If an event is not handled on the target object itself, you can still write the event handler in its parent, or its parent's parent, and so on up the DOM tree, where it will also get executed; in such a situation, the target property can be useful in determining the original event object. In todo_v2.dart, we examine the various event-coding styles. Using the general format, the Click event on btns2[2] can be handled using the following code:
btns2[2].onClick.listen( changeBtnsBackColor );
where changeBtnsBackColor is either the event handler or callback function. This function is written as:
changeBtnsBackColor(Event e) => btns.forEach( (b) => b.classes.add('btns_backgr'));
Another, shorter way to write this (such as in line (7)) is:
btns2[2].onClick.listen( (e) => changeBtnsBackColor() );

changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr') );
When a Click event occurs on btns2[2], the handler changeBtnsBackColor is called.
In case the event handler needs more code lines, use the brace syntax as follows:
changeBtnsBackColor(Event e) {
  btns.forEach( (b) => b.classes.add('btns_backgr') );
  // possibly other code
}
Familiarize yourself with these different ways of writing event handlers.
Of course, when the handler needs only one line of code, there is no need for a separate method, as in the following code:
newTask.onClick.listen( (e) => newTask.remove() );
For clarity, we use the function expression syntax => whenever possible, but you can inline the event handler and use the brace syntax along with an anonymous function, thus avoiding a separate method. So instead of executing the following code:
task.onChange.listen( (e) => addItem() );
we could have executed:
task.onChange.listen( (e) {
  var newTask = new LIElement();
  newTask.text = task.value;
  newTask.onClick.listen( (e) => newTask.remove() );
  task.value = '';
  list.children.add(newTask);
} );
JavaScript developers will find the preceding code very familiar, but it is also used frequently in Dart code, so make yourself acquainted with the code pattern ( (e) {...} );. The following is an example of how you can respond to key events (in this case, on the window object) using the keyCode and ctrlKey properties of the event e:
window.onKeyPress.listen( (e) {
  if (e.keyCode == KeyCode.ENTER) {
    window.alert("You pressed ENTER");
  }
  if (e.ctrlKey && e.keyCode == CTRL_ENTER) {
    window.alert("You pressed CTRL + ENTER");
  }
});
In this code, the constant const int CTRL_ENTER = 10; is used.
(The list of keyCodes can be found at).
Manipulating the style of page elements
CSS style properties can be changed in the code as well: every element elem has a classes property, which is a set of CSS classes. You can add a CSS class as follows:
elem.classes.add ('cssclass');
as we did in changeBtnsBackColor (line (11)); by adding this class, the new style is immediately applied to the element. Or, we can remove it to take away the style:
elem.classes.remove ('cssclass');
The toggle method (line (8)) elem.classes.toggle('cssclass'); is a combination of both: first the cssclass is applied (added), the next time it is removed, and, the time after that, it is applied again, and so on.
Working with CSS classes is the best way to change the style, because the CSS definition is separated from the HTML markup. If you do want to change the style of an element directly, use its style property elem.style, where the cascade style of coding is very appropriate, for example:
newTask.style
  ..fontWeight = 'bold'
  ..fontSize = '3em'
  ..color = 'red';
Animating a game
People like motion in games and a movie is nothing but a quick succession of image frames. So, we need to be able to redraw our screen periodically to get that effect; with Dart screen frame rates of 60 fps or higher, this becomes possible. A certain time interval is represented in Dart as an object of the type Duration. To do something periodically in Dart, we use the Timer class from the dart:async library and its periodic method. To execute a function moveBall() at every INTERVAL ms (you could call it a periodic event), use the following method:
new Timer.periodic(const Duration(milliseconds: INTERVAL),
    (t) => moveBall());
The first parameter is the time period, the second is the callback function that has to be periodically executed, and t is the Timer object. If the callback function has to be executed only once, just write a new Timer(.,.) method, omitting the periodic function. When drawing on canvas, the first thing that the periodically called function will have to do is erase the previous drawing. To stop a Timer object (usually in a game-over situation), use the cancel() method.
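As a sketch of stopping such a timer in a game-over situation (the gameOver flag here is hypothetical, not part of the Ping Pong code below; the callback's Timer argument t can cancel its own timer):

```dart
bool gameOver = false;

main() {
  new Timer.periodic(const Duration(milliseconds: INTERVAL), (t) {
    moveBall();
    if (gameOver) t.cancel(); // stop redrawing once the game ends
  });
}
```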
Another way of doing this is by using the animationFrame method from the window class. With this technique, we start gameLoop in the main() function and let it call itself recursively, as in the following code:
main() {
  // code left out
  // redraw
  window.animationFrame.then(gameLoop);
}

gameLoop(num delta) {
  moveBall();
  window.animationFrame.then(gameLoop);
}
Ping Pong using style(s)
Normally, you would write an HTML Dart game using canvas, but it is interesting to see what is possible just by manipulating the styles. Download the project from GitHub with: git clone git://github.com/dzenanr/ping_pong_dom.git.
This project was developed in spirals; if you want to see how the code was developed, explore the seven stages in the subfolder spirals (spiral s07, especially, contains a function examineCSS() that show you how to read the rules in the stylesheet of Dart code; also, the game screen contains some useful links to learn more about reading and changing CSS rules).
The following is the Dart code of the master version; we have commented on it using line numbers:
import 'dart:html';
import 'dart:async';

const int INTERVAL = 10;  // time interval in ms to redraw the screen
const int INCREMENT = 20; // move increment in pixels

CssStyleSheet styleSheet;                                     // (1)

var pingPong = {                                              // (2)
  'ball':    { 'speed': 3, 'x': 195, 'y': 100, 'dx': 1, 'dy': 1 },
  'key':     { 'w': 87, 's': 83, 'up': 38, 'down': 40 },
  'paddleA': { 'width': 20, 'height': 80, 'left': 20, 'top': 60, 'score': 0 },
  'paddleB': { 'width': 20, 'height': 80, 'left': 360, 'top': 80, 'score': 0 },
  'table':   { 'width': 400, 'height': 200 }
};

main() {
  styleSheet = document.styleSheets[0];                       // (3)
  document.onKeyDown.listen(onKeyDown);                       // (4)
  // Redraw every INTERVAL ms.
  new Timer.periodic(const Duration(milliseconds: INTERVAL),
      (t) => moveBall());                                     // (5)
}

String ballRule(int x, int y) {
  String rule = '''
    #ball {
      background: #fbbfbb;
      position: absolute;
      width: 20px;
      height: 20px;
      left: ${x.toString()}px;
      top: ${y.toString()}px;
      border-radius: 10px;
    }
  ''';
  return rule;
}

String paddleARule(int top) {
  String rule = '''
    #paddleA {
      background: #bbbbff;
      position: absolute;
      width: 20px;
      height: 80px;
      left: 20px;
      top: ${top.toString()}px;
    }
  ''';
  return rule;
}

String paddleBRule(int top) {
  String rule = '''
    #paddleB {
      background: #bbbbff;
      position: absolute;
      width: 20px;
      height: 80px;
      left: 360px;
      top: ${top.toString()}px;
    }
  ''';
  return rule;
}

updateBallRule(int left, int top) {
  styleSheet.removeRule(1);
  styleSheet.insertRule(ballRule(left, top), 1);
}

updatePaddleARule(int top) {
  styleSheet.removeRule(2);
  styleSheet.insertRule(paddleARule(pingPong['paddleA']['top']), 2);
}

updatePaddleBRule(int top) {
  styleSheet.removeRule(3);
  styleSheet.insertRule(paddleBRule(pingPong['paddleB']['top']), 3);
}

onKeyDown(e) {
  var paddleA = pingPong['paddleA'];
  var paddleB = pingPong['paddleB'];
  var key = pingPong['key'];
  if (e.keyCode == key['w']) {                                // (6)
    paddleA['top'] = paddleA['top'] - INCREMENT;
    updatePaddleARule(paddleA['top']);
  } else if (e.keyCode == key['s']) {
    paddleA['top'] = paddleA['top'] + INCREMENT;
    updatePaddleARule(paddleA['top']);
  } else if (e.keyCode == key['up']) {
    paddleB['top'] = paddleB['top'] - INCREMENT;
    updatePaddleBRule(paddleB['top']);
  } else if (e.keyCode == key['down']) {
    paddleB['top'] = paddleB['top'] + INCREMENT;
    updatePaddleBRule(paddleB['top']);
  }
}

moveBall() {
  var ball = pingPong['ball'];
  var table = pingPong['table'];
  var paddleA = pingPong['paddleA'];
  var paddleB = pingPong['paddleB'];
  // check the table boundary
  // check the bottom edge
  if (ball['y'] + ball['speed'] * ball['dy'] > table['height']) {
    ball['dy'] = -1;                                          // (7)
  }
  // check the top edge
  if (ball['y'] + ball['speed'] * ball['dy'] < 0) {
    ball['dy'] = 1;
  }
  // check the right edge
  if (ball['x'] + ball['speed'] * ball['dx'] > table['width']) {
    // player B lost                                          // (8)
    paddleA['score']++;
    document.querySelector('#scoreA').innerHtml = paddleA['score'].toString();
    // reset the ball
    ball['x'] = 250;
    ball['y'] = 100;
    ball['dx'] = -1;
  }
  // check the left edge
  if (ball['x'] + ball['speed'] * ball['dx'] < 0) {
    // player A lost                                          // (9)
    paddleB['score']++;
    document.querySelector('#scoreB').innerHtml = paddleB['score'].toString();
    // reset the ball
    ball['x'] = 150;
    ball['y'] = 100;
    ball['dx'] = 1;
  }
  ball['x'] += ball['speed'] * ball['dx'];
  ball['y'] += ball['speed'] * ball['dy'];
  // check the moving paddles
  // check the left paddle
  if (ball['x'] + ball['speed'] * ball['dx'] <                // (10)
      paddleA['left'] + paddleA['width']) {
    if (ball['y'] + ball['speed'] * ball['dy'] <= paddleA['top'] + paddleA['height'] &&
        ball['y'] + ball['speed'] * ball['dy'] >= paddleA['top']) {
      ball['dx'] = 1;
    }
  }
  // check the right paddle
  if (ball['x'] + ball['speed'] * ball['dx'] >= paddleB['left']) {
    if (ball['y'] + ball['speed'] * ball['dy'] <= paddleB['top'] + paddleB['height'] &&
        ball['y'] + ball['speed'] * ball['dy'] >= paddleB['top']) {
      ball['dx'] = -1;                                        // (11)
    }
  }
  // update the ball rule
  updateBallRule(ball['x'], ball['y']);
}
The screen looks like as shown in the following screenshot:
The screen of Ping Pong DOM
Basically, the mechanism is that we change the left and top property values in the style rules for the ball, paddleA and paddleB in the function ballRule, paddlARule, and so on. When this new style rule is attached to our document, the HTML element moves on the screen. In line (1), we declare a stylesheet that we append to our document in line (3). The variable pingPong in line (2) is a Map with the keys ball, key, paddleA, paddleB, and table (these correspond with HTML element IDs), and their values are, themselves, maps containing variables and their values (for example, top has the value 60). These maps are further referenced using variables, as follows:
var paddleA = pingPong['paddleA'];
In line (4), an onKeyDown event handler is defined. This tests the key that was pressed along with if (e.keyCode == key['w']) (line (6)), and so on, and, when the key is recognized, the value of the top variable in the corresponding paddle Map is incremented or decremented (the value of Top is 0 at the top of the screen and increases towards the bottom of the screen. w means that the value is going up; this means the value of top is decreasing, so we have to subtract INCREMENT from the current top value, and likewise for the other directions). An updatePaddle(A-B)Rule function is called; in it, a new style rule is inserted into the stylesheet, updating the top value for the style rule of the corresponding paddle HTML element (the style rules are multiline strings).
Let's then see what happens in the periodic function moveBall(). Basically, this method changes the x and y coordinates of the ball:
ball['x'] += ball['speed'] * ball['dx']; ball['y'] += ball['speed'] * ball['dy'];
However, we have to check a number of boundary conditions (such as the ball crossing the edges of the table); if the ball is going down toward the bottom edge of the table (line (7)), dy becomes -1, so the new ball['y'] value will be smaller and the inverse will occur for when the ball goes along the top edge. If the ball goes over the right edge (line (8)), Player A wins a point, so their score is updated on the screen and the ball is reset. In line (9), the inverse is true and Player B wins a point. In lines (10) and (11), we test for the collision of the ball and paddleA or paddleB respectively; using paddleA, we want to send the ball to the right, so we set dx = 1; with paddleB, we want to send it to the left, so dx = -1. Then, in the same way as for the paddles, we update the style rule for the ball.
Summary
You now know all the techniques for finding, manipulating, and styling web page elements using Dart code to change the user interface and you can respond to events that take place on the page. You have learned how to change the CSS properties in DOM in order to move game objects and how to build a complete game project. We found that it is advisable to develop in a spiral way, building upon the previous spirals as the project acquires more functionality. The different entities are represented by classes; in that way, our project is naturally modularized.
Resources for Article:
Further resources on this subject:
- Including Google Maps in your Posts Using Apache Roller 4.0 [Article]
- QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article]
- Google Earth, Google Maps and Your Photos: a Tutorial Part II [Article]
About the Author :
Dzenan Ridjanovic
Dzenan Ridjanovic is a university professor who is planning his early retirement to focus on the development of web applications with Dart, HTML5, web components, and NoSQL databases. For more than 10 years, he was a Director of Research and Development in the Silverrun team (), which created several commercial tools for analysis, design, and development of data-driven applications. He was a principal developer of Modelibra () tools and frameworks for model-driven development in Java. Recently, he has been developing the Dartling framework for design and code generation of Dart models. His projects are at GitHub (), where he is considered a Dart expert (). He writes about his projects at On Dart blog (). His courses are available at On Dart Education (). He markets his Dart efforts at On Dart G+ Page (). Dzenan Ridjanovic wrote a book in 2009, under the Creative Commons License, entitled Spiral Development of Dynamic Web Applications: Using Modelibra and Wicket ().
Ivo Balbaert
Ivo.
Post new comment | https://www.packtpub.com/article/handling-the-dom-in-dart | CC-MAIN-2014-10 | refinedweb | 4,547 | 60.04 |
React Higher Order Components in depth
Abstract
This post is targeted to advanced users that want to exploit the HOC pattern. If you are new to React you should probably start by reading React’s Docs.
Higher Order Components is a great Pattern that has proven to be very valuable for several React libraries. In this Post we will review in detail what HOCs are, what you can do with them and their limitations and, how they are implemented.
In the Appendixes we review related topics that are not core to HOC study but I thought we should have covered.
This Post is meant to be exhaustive so if you find anything I have missed please report it and I will make the required changes.
This Post assumes ES6 knowledge.
Let's dive in!
Update August 2016
We’ve been translated to Japanese!
Thanks to all for your interest!
Update January 2017
We’ve been translated to Korean!
And to Chinese!
Thank you very much to the translators!
What are Higher Order Components?
A Higher Order Component is just a React Component that wraps another one.
This pattern is usually implemented as a function, which is basically a class factory (yes, a class factory!), that has the following signature in haskell inspired pseudocode
hocFactory:: W: React.Component => E: React.Component
Where W (WrappedComponent) is the React.Component being wrapped and E (Enhanced Component) is the new, HOC, React.Component being returned.
The “wraps” part of the definition is intentionally vague because it can mean one of two things:
- Props Proxy: The HOC manipulates the props being passed to the WrappedComponent W,
- Inheritance Inversion: The HOC extends the WrappedComponent W.
We will explore this two patterns in more detail.
What can I do with HOCs?
At a high level HOC enables you to:
- Code reuse, logic and bootstrap abstraction
- Render Highjacking
- State abstraction and manipulation
- Props manipulation
We will see this items in more detail soon but first, we are going to study the ways of implementing HOCs because the implementation allows and restricts what you can actually do with an HOC.
HOC factory implementations
In this section we are going to study the two main ways of implementing HOCs in React: Props Proxy (PP) and Inheritance Inversion (II). Both enable different ways of manipulating the WrappedComponent.
Props Proxy
Props Proxy (PP) is implemented trivially in the following way:
The important part here is that the render method of the HOC returns a React Element of the type of the WrappedComponent. We also pass through the props that the HOC receives, hence the name Props Proxy.
NOTE:
<WrappedComponent {...this.props}/>
// is equivalent to
React.createElement(WrappedComponent, this.props, null)
They both create a React Element that describes what React should render in its reconciliation process, if you want to know more about React Elements vs Components see this post by Dan Abramov and see the docs to read more about the reconciliation process.
What can be done with Props Proxy?
- Manipulating props
- Accessing the instance via Refs
- Abstracting State
- Wrapping the WrappedComponent with other elements
Manipulating props
You can read, add, edit and remove the props that are being passed to the WrappedComponent.
Be careful with deleting or editing important props, you should probably namespace your Higher Order props not to break the WrappedComponent.
Example: Adding new props. The app’s current logged in user will be available in the WrappedComponent via this.props.user
Accessing the instance via Refs
You can access this (the instance of the WrappedComponent) with a ref, but you will need a full initial normal render process of the WrappedComponent for the ref to be calculated, that means that you need to return the WrappedComponent element from the HOC render method, let React do it’s reconciliation process and just then you will have a ref to the WrappedComponent instance.
Example: In the following example we explore how to access instance methods and the instance itself of the WrappedComponent via refs
When the WrappedComponent is rendered the ref callback will be executed, and then you will have a reference to the WrappedComponent’s instance. This can be used for reading/adding instance props and to call instance methods.
State abstraction
You can abstract state by providing props and callbacks to the WrappedComponent, very similar to how smart components will deal with dumb components. See dumb and smart components for more information.
Example: In the following State Abstraction example we naively abstract the value and onChange handler of the name input field. I say naively because this isn’t very general but you got to see the point.
You would use it like this:
The input will be a controlled input automagically.
For a more general two way data bindings HOC see this link
Wrapping the WrappedComponent with other elements
You can wrap the WrappedComponent with other components and elements for styling, layout or other purposes. Some basic usages can be accomplished by regular Parent Components (see Appendix B) but you have more flexibility with HOCs as described previously.
Example: Wrapping for styling purposes
Inheritance Inversion
Inheritance Inversion (II) is implemented trivially like this:
As you can see, the returned HOC class (Enhancer) extends the WrappedComponent. It is called Inheritance Inversion because instead of the WrappedComponent extending some Enhancer class, it is passively extended by the Enhancer. In this way the relationship between them seems inverse.
Inheritance Inversion allows the HOC to have access to the WrappedComponent instance via this, which means it has access to the state, props, component lifecycle hooks and the render method.
I won't go into much detail about what you can do with lifecycle hooks because it’s not HOC specific but React specific. But note that with II you can create new lifecycle hooks for the WrappedComponent. Remember to always call super.[lifecycleHook] so you don't break the WrappedComponent
Reconciliation process
Before diving in let’s summarize some theory.
React Elements describe what is going to be rendered when React runs it’s reconciliation process.
React Elements can be of two types: String and Function. The String Type React Elements (STRE) represent DOM nodes and the Function Type React Elements (FTRE) represent Components created by extending React.Component. For more about Elements and Components read this post.
FTRE will be resolved to a full STRE tree in React’s reconciliation process (the end result will be always DOM elements).
This is very important and it means that Inheritance Inversion High Order Components don’t have a guaranty of having the full children tree resolved.
Inheritance Inversion High Order Components don’t have a guaranty of having the full children tree resolved.
This is will prove important when studying Render Highjacking.
What can you do with Inheritance Inversion?
- Render Highjacking
- Manipulating state
Render Highjacking
It is called Render Highjacking because the HOC takes control of the render output of the WrappedComponent and can do all sorts of stuff with it.
In Render Highjacking you can:
- Read, add, edit, remove props in any of the React Elements outputted by render
- Read, and modify the React Elements tree outputted by render
- Conditionally display the elements tree
- Wrapping the element’s tree for styling purposes (as shown in Props Proxy)
*render refers to the WrappedComponent.render method
You cannot edit or create props of the WrappedComponent instance, because a React Component cannot edit the props it receives, but you can change the props of the elements that are outputted from the render method.
Just as we studied before, II HOCs don't have a guaranty of having the full children tree resolved, which implies some limits to the Render Highjacking technique. The rule of thumb is that with Render Highjacking you will be able to manipulate the element’s tree that the WrappedComponent render method outputs no more no less. If that element’s tree includes a Function Type React Component then you won't be able to manipulate that Component’s children. (They are deferred by React’s reconciliation process until it actually renders to the screen.)
Example 1: Conditional rendering. This HOC will render exactly what the WrappedComponent would render unless this.props.loggedIn is not true. (Assuming the HOC will receive the loggedIn prop)
Example 2: Modifying the React Elements tree outputted by render.
In this example, if the rendered output of the WrappedComponent has an input as it’s top level element then we change the value to “may the force be with you”.
You can do all sorts of stuff in here, you can traverse the entire elements tree and change props of any element in the tree. This is exactly how Radium does its business (More on Radium in Case Studies).
NOTE: You cannot Render Highjack with Props Proxy.
While it is possible to access the render method via WrappedComponent.prototype.render, you will need to mock the WrappedComponent instance and its props, and potentially handle the component lifecycle yourself, instead of relying on React doing it. In my experiments it isn’t worth it and if you want to do Render Highjacking you should be using Inheritance Inversion instead of Props Proxy. Remember that React handles component instances internally and your only way of dealing with instances is via this or by refs.
Manipulating state
The HOC can read, edit and delete state of the WrappedComponent instance, and you can also add more state if you need to. Remember that you are messing with the state of the WrappedComponent which can lead to you breaking things. Mostly the HOC should be limited to read or add state, and the latter should be namespaced not to mess with the WrappedComponent’s state.
Example: Debugging by accessing WrappedComponent’s props and state
This HOC wrapps the WrappedComponent with other elements, and also displays the WrappedComponent’s instance props and state. The JSON.stringify trick was taught to me by Ryan Florence and Michael Jackson. You can see a full working implementation of the debugger here.
Naming
When wrapping a component with an HOC you lose the original WrappedComponent’s name which might impact you when developing and debugging.
What people usually do is to customize the HOC’s name by taking the WrappedComponent’s name and prepending something. The following is taken from React-Redux.
HOC.displayName = `HOC(${getDisplayName(WrappedComponent)})`
//or
class HOC extends ... {
static displayName = `HOC(${getDisplayName(WrappedComponent)})`
...
}
The getDisplayName function is defined as follows:
function getDisplayName(WrappedComponent) {
return WrappedComponent.displayName ||
WrappedComponent.name ||
‘Component’
}
You actually don't need to rewrite it yourself because recompose lib already provides this function.
Case Studies
React-Redux is the official Redux bindings for React. One of the functions it provides is connect which handles all the bootstrap necessary for listening to the store and cleaning up afterwards. This is achieved by a Props Proxy implementation.
If you have ever worked with pure Flux you know that any React Component that is connected to one or more stores needs a lot of bootstrapping for adding and removing stores listeners and selecting the parts of the state they need. So React-Redux implementation is pretty good because it abstract all this bootstrap. Basically, you don’t need to write it yourself anymore!
Radium is a library that enhances the capability of inline styles by enabling CSS pseudo selectors inside inline-styles. Why inline styles are good for you is subject of another discussion, but a lot of people are starting to do it and libs like radium really step up the game. If you want to know more about inline styles start by this presentation by Vjeux
So, how does Radium enables inline CSS pseudo selectors like hover? It implements an Inheritance Inversion pattern to use Render Highjacking in order to inject proper event listeners (new props) to simulate CSS pseudo selectors like hover. The event listeners are injected as handlers for React Element’s props. This requires Radium to read all the Elements tree outputted by the WrappedComponent’s render method and whenever it finds an element with a style prop, it adds event listeners props. Simply put, Radium modifies the props of the Element’s tree (what Radium actually does is a little bit more complicated but you get the point)
Radium exposes a really simple API. Pretty impressive considering all the work it performs without the user even noticing. This gives a glimpse of the power of HOC.
Appendix A: HOC and parameters
The following content is optional and you may skip it.
Sometimes is useful to use parameters on your HOCs. This is implicit in all examples above and should be pretty natural to intermediate Javascript developers, but just for the sake of making the post exhaustive let's cover it real quick.
Example: HOC parameters with a trivial Props Proxy. The important thing is the HOCFactoryFactory function.
You can use it like this:
HOCFactoryFactory(params)(WrappedComponent)
//or
@HOCFatoryFactory(params)
class WrappedComponent extends React.Component{}
Appendix B: Difference with Parent Components
The following content is optional and you may skip it.
Parent Components are just React Components that have some children. React has APIs for accessing and manipulating a component’s children.
Example: Parent Components accessing its children.
Now we will review what Parent Components can and cannot do in contrast of HOCs plus some important details:
- Render Highjacking (as seen in Inheritance Inversion)
- Manipulate inner props (as seen in Inheritance Inversion)
- Abstract state. But has its drawbacks. You wont be able to access the state of the Parent Component from outside it unless you explicitly create hooks for it. This makes its usefulness restricted.
- Wrapp with new React Elements. This might be the single use case where Parent Components feel more ergonomic than HOC. HOCs can do this too.
- Children manipulation has some gotchas. For example if children don't got a single root element (more than one first level children), then you need to add an extra element to wrap all children, which might be a little bit cumbersome to your markup. In HOCs a single top level children root is guarantied by React/JSX constraints.
- Parent Components can be used freely in an Elements tree, they are not restricted to once per Component class as HOCs are.
Generally, if you can do it with Parent Components you should because it’s much less hacky than HOCs, but as the list above states they are less flexible than HOCs.
Closing Words
I hope that after reading this post you know a little more about React HOCs. They are highly expressive and have proven to be pretty good in different libraries.
React has brought a lot of innovation and people running projects like Radium, React-Redux, React-Router, among others, are pretty good proofs of that.
If want to contact me please follow me on twitter @franleplant.
Go to this repo to play with code I’ve played with in order to experiment some of the patterns explained in this post.
Credits
Credits go mostly to React-Redux, Radium, this gist by Sebastian Markbåge and my own experimentation. | https://medium.com/@franleplant/react-higher-order-components-in-depth-cf9032ee6c3e | CC-MAIN-2019-09 | refinedweb | 2,511 | 53.61 |
XML::RSS:Parser - A liberal = $ua->request($req); # parse feed $p->parse( $feed->content ); # print feed title and items data dump to screen print $p->channel->{ $p->ns_qualify('title', $p->rss_namespace_uri ) }."\n"; my $d = Data::Dumper->new([ $p- requirement is that the file is well-formed XML. 1.
Inherited from XML::Parser, FILE is an open handle. The file is closed no matter how parse returns. A die call is thrown if a parse error occurs otherwise it will return 1.
Returns a HASH reference of elements found directly under the channel element. The key is the fully namespace qualified element.
Returns a reference to an ARRAY of HASH references. Each hash referenced contains the fully namespaced qualified elements found under directly under an item element. The ordering of the item elements in the feed is maintained within the array.
Returns a HASH reference of elements found directly under the image element. If an image has not been defined the hash will not contain any key/value pairs.
A simple utility method that will return a fully namespace qualified string for the supplied element..
XML::Parser,,,,
The software is released under the Artistic License. The terms of the Artistic License are described at.
Except where otherwise noted, XML::RSS::Parser is Copyright 2003, Timothy Appnel, tima@mplode.com. All rights reserved. | http://search.cpan.org/~tima/XML-RSS-Parser-0.21/Parser.pm | crawl-002 | refinedweb | 221 | 59.5 |
Programming news: Rails 3.0, F# 2.0, Dryad, NuCaptcha
Read about Google App Engine 1.3.6, Visual Studio Lab Management, adaptive software, debugging Silverlight in Firefox, XAML multitargeting, and more.
Securing your OutSystems Agile Platform application
Justin James looks at the three types of security he worked with in the OutSystems Agile Platform: authentication, authorization, and encryption. He also provides a quick overview of the Enterprise Ma...
Modify generated XSLT 1.0 with the xsl:namespace element
Edmond Woychowsky shares a trick for modifying generated XSLT 1.0 without worrying about someone regenerating the map. He also throws in a couple of Star Trek references for good measure.
Using AJAX in the OutSystems Agile Platform
Justin James says that working with AJAX in the OutSystem Agile Platform requires about five minutes to learn, a minor change to your logic, and a little drag-n-drop.
PLINQ and Tasks: Two more .NET parallel options
Learn how to have .NET process LINQ statements in parallel with 13 characters, and how to use Tasks to create more complex multithreaded applications.
How to use the Parallel class for simple multithreading
In this programming tutorial about Parallel Extensions in .NET 4, Justin James focuses on imperative parallelism as presented by the Parallel class..
Poll: Where is the money in mobile development?
Most developers making money with mobile development are writing in-house apps or working on projects that enhance a brand. Where do you see the money in mobile development?
Windows Phone 7 through a developer's eyes
Developer Justin James shares his perspective about the pros and the cons about development for Windows Phone 7. Find out what he thinks could be the Achilles' heel for Windows Phone 7.
Poll: What kind of version control do you use?
Even though it is standard for development shops to use a formal version control system, distributed version control systems have rapidly gained adoption. If you are using version control, what kind a...
Manipulate XML and SQL with LINQ via LINQPad
Edmond Woychowsky describes using LINQPad to write a LINQ query for inserting Excel data into a SQL database.
Poll: Does the type of open source license matter to you?
When you are evaluating an open source development tool, how important is the kind of license it uses to you? Let us know by answering this poll question.
Review: Visual Studio 2010 Ultimate Edition
Visual Studio 2010 Ultimate Edition is for developers who need to do application design, development, and testing. Find out the most compelling reason to buy this edition of Visual Studio 2010.
Programming news: PHP updates, Eclipse SDK 4.0, Google Font Previewer
Read about IronPython 2.7 Alpha 1, Microsoft KittyHawk, the TFS Scrum template, Android app licensing, Visual Studio 2010 keyboard shortcuts, and more.
Is SLOC a valid measure of quality or efficiency?
Is SLOC a valid measure of quality or efficiency?
Monitor Web site requests with Mozilla's LiveHTTPHeaders extension
The LiveHTTPHeaders Mozilla extension is a useful tool for both Web developers and administrators. It provides an easy way to monitor HTTP activity and track down potential problems with Web requests....
See more Enterprise Software...
Using the PrintDocument component in VB.NET applications
Using the PrintDocument component in VB.NET applications
ORA-04030 doesn't always mean you're running out of RAM
ORA-04030 doesn't always mean you're running out of RAM
Develop JavaScript with the JSEclipse plug-in
Develop JavaScript with the JSEclipse plug-in
A dirge for CPAN
A dirge for CPAN.
Do comments slow down PL/SQL?
Do comments slow down PL/SQL?
Ensure your Web site displays properly with character encoding
Tony Patton examines why developers use character encoding for Web pages, outlines the options you have, and offers guidance on how to choose a character encoding.
See more Project Management
How important is multithreading in application development?
How important is multithreading in application development?
Working with .NET access modifiers
The access modifiers available in .NET allow you to put encapsulation in action by defining who can use classes, methods, properties, and so forth, as well as keeping out those who don't need them. He...
Sending e-mail from an Oracle database with utl_smtp
Sending e-mail from an Oracle database....
First impressions of ASP.NET's MVC framework
Find out why you may want to use Microsoft's Model View Controller (MVC) framework instead of Web Forms. | http://www.techrepublic.com/blog/software-engineer/54/ | CC-MAIN-2017-30 | refinedweb | 737 | 57.47 |
Now that we've got some basics down, let's make our application a little more interactive. The following minor upgrade, HelloJava2, allows us to drag the message text around with the mouse.
We'll call this example HelloJava2 rather than cause confusion by continuing to expand the old one, but the primary changes here and further on lie in adding capabilities to the HelloComponent class and simply making the corresponding changes to the names to keep them straight, e.g., HelloComponent2, HelloComponent3, etc. Having just seen inheritance at work, you might wonder why we aren't creating a subclass of HelloComponent and exploiting inheritance to build upon our previous example and extend its functionality. Well, in this case, that would not provide much advantage, and for clarity we simply start over.
Here is HelloJava2:
//file: HelloJava2.java
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class HelloJava2 {
  public static void main( String[] args ) {
    JFrame frame = new JFrame( "HelloJava2" );
    frame.getContentPane( ).add( new HelloComponent2("Hello, Java!") );
    frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
    frame.setSize( 300, 300 );
    frame.setVisible( true );
  }
}

class HelloComponent2 extends JComponent implements MouseMotionListener {
  String theMessage;
  int messageX = 125, messageY = 95; // Coordinates of the message

  public HelloComponent2( String message ) {
    theMessage = message;
    addMouseMotionListener(this);
  }

  public void paintComponent( Graphics g ) {
    g.drawString( theMessage, messageX, messageY );
  }

  public void mouseDragged(MouseEvent e) {
    // Save the mouse coordinates and paint the message.
    messageX = e.getX( );
    messageY = e.getY( );
    repaint( );
  }

  public void mouseMoved(MouseEvent e) { }
}
Two slashes in a row indicate that the rest of the line is a comment. We've added a few comments to HelloJava2 to help you keep track of everything.
Place the text of this example in a file called HelloJava2.java and compile it as before. You should get new class files, HelloJava2.class and HelloComponent2.class as a result.
Run the example using the following command:
C:\> java HelloJava2
Feel free to substitute your own salacious comment for the "Hello, Java!" message and enjoy many hours of fun, dragging the text around with your mouse. Notice that now when you click the window's close button, the application exits; we'll explain that later when we talk about events. Now let's see what's changed.
We have added some variables to the HelloComponent2 class in our example:
int messageX = 125, messageY = 95; String theMessage;
messageX and messageY are integers that hold the current coordinates of our movable message. We have crudely initialized them to default values that should place the message somewhere near the center of the window. Java integers are 32-bit signed numbers, so they can easily hold our coordinate values. The variable theMessage is of type String and can hold instances of the String class.
You should note that these three variables are declared inside the braces of the class definition, but not inside any particular method in that class. These variables are called instance variables, and they belong to the class as a whole. Specifically, copies of them appear in each separate instance of the class. Instance variables are always visible to (and usable by) all the methods inside their class. Depending on their modifiers, they may also be accessible from outside the class.
Unless otherwise initialized, instance variables are set to a default value of 0, false, or null, depending on their type. Numeric types are set to 0, Boolean variables are set to false, and class type variables always have their value set to null, which means "no value." Attempting to use an object with a null value results in a runtime error (a NullPointerException).
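As a standalone sketch of this rule (the Defaults class and its field names are invented for illustration; they aren't part of HelloJava2), the following shows the default values Java assigns to instance variables you never initialize:

```java
// Hypothetical class used only to demonstrate instance-variable defaults.
class Defaults {
    int count;        // numeric types default to 0
    boolean ready;    // booleans default to false
    String label;     // class (reference) types default to null
}

public class DefaultsDemo {
    public static void main(String[] args) {
        Defaults d = new Defaults();    // no values assigned anywhere
        System.out.println(d.count);    // prints 0
        System.out.println(d.ready);    // prints false
        System.out.println(d.label);    // prints null
    }
}
```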
Instance variables differ from method arguments and other variables that are declared inside the scope of a particular method. The latter are called local variables. They are effectively private variables that can be seen only by code inside the method. Java doesn't initialize local variables, so you must assign values yourself. If you try to use a local variable that has not yet been assigned a value, your code generates a compile-time error. Local variables live only as long as the method is executing and then disappear, unless something else saves their value. Each time the method is invoked, its local variables are recreated and must be assigned values.
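The contrast can be sketched in a few lines (the method and variable names here are made up for illustration); note the commented-out lines, which would fail at compile time if restored:

```java
// Hypothetical method illustrating the local-variable rules.
public class LocalDemo {
    static int sum() {
        int total = 0;   // a local variable must be assigned before use
        // int bogus;
        // System.out.println(bogus); // compile-time error: bogus not initialized
        for (int i = 1; i <= 3; i++)
            total += i;
        return total;    // total ceases to exist when the method returns
    }

    public static void main(String[] args) {
        System.out.println(sum()); // prints 6
    }
}
```

Each call to sum( ) recreates total from scratch, which is exactly the "recreated and must be assigned values" behavior described above.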
We have used the new variables to make our previously stodgy paintComponent( ) method more dynamic. Now all the arguments in the call to drawString( ) are determined by these variables.
The HelloComponent2 class includes a special kind of a method called a constructor. A constructor is called to set up a new instance of a class. When a new object is created, Java allocates storage for it, sets instance variables to their default values, and calls the constructor method for the class to do whatever application-level setup is required.
A constructor always has the same name as its class. For example, the constructor for the HelloComponent2 class is called HelloComponent2( ). Constructors don't have a return type, but you can think of them as creating an object of their class's type. Like other methods, constructors can take arguments. Their sole mission in life is to configure and initialize newly born class instances, possibly using information passed to them in these parameters.
An object is created with the new operator specifying the constructor for the class and any necessary arguments. The resulting object instance is returned as a value. In our example, a new HelloComponent2 instance is created in the main( ) method by this line:
frame.add( new HelloComponent2("Hello, Java!") );
This line actually does two things. We could write them as two separate lines that are a little easier to understand:
HelloComponent2 newObject = new HelloComponent2("Hello, Java!"); frame.add( newObject );
The first line is the important one, where a new HelloComponent2 object is created. The HelloComponent2 constructor takes a String as an argument and, as we have arranged it, uses it to set the message that is displayed in the window. With a little magic from the Java compiler, quoted text in Java source code is turned into a String object. (See Chapter 10 for a complete discussion of the String class.) The second line simply adds our new component to the frame to make it visible, as we did in the previous examples.
While we're on the topic, if you'd like to make our message configurable, you can change the constructor line to the following:
HelloComponent2 newobj = new HelloComponent2( args[0] );
Now you can pass the text on the command line when you run the application using the following command:
C:\> java HelloJava2 "Hello, Java!"
args[0] refers to the first command-line parameter. Its meaning will be clearer when we discuss arrays later in the book.
HelloComponent2's constructor then does two things: it sets the text of theMessage instance variable and calls addMouseMotionListener( ). This method is part of the event mechanism, which we discuss next. It tells the system, "Hey, I'm interested in anything that happens involving the mouse."
public HelloComponent2(String message) { theMessage = message; addMouseMotionListener( this ); }
The special, read-only variable called this is used to explicitly refer to our object (the "current" object context) in the call to addMouseMotionListener( ). A method can use this to refer to the instance of the object that holds it. The following two statements are therefore equivalent ways of assigning the value to theMessage instance variable:
theMessage = message;
or:
this.theMessage = message;
We'll normally use the shorter, implicit form to refer to instance variables, but we'll need this when we have to explicitly pass a reference to our object to a method in another class. We often do this so that methods in other classes can invoke our public methods or use our public variables.
The last two methods of HelloComponent2, mouseDragged( ) and mouseMoved( ), let us get information from the mouse. Each time the user performs an action, such as pressing a key on the keyboard, moving the mouse, or perhaps banging his or her head against a touch screen, Java generates an event. An event represents an action that has occurred; it contains information about the action, such as its time and location. Most events are associated with a particular GUI component in an application. A keystroke, for instance, can correspond to a character being typed into a particular text entry field. Pressing a mouse button can activate a particular button on the screen. Even just moving the mouse within a certain area of the screen can trigger effects such as highlighting or changing the cursor's shape.
To work with these events, we've imported a new package, java.awt.event, which provides specific Event objects that we use to get information from the user. (Notice that importing java.awt.* doesn't automatically import the event package. Packages don't really contain other packages, even if the hierarchical naming scheme would imply that they do.)
There are many different event classes, including MouseEvent, KeyEvent, and ActionEvent. For the most part, the meaning of these events is fairly intuitive. A MouseEvent occurs when the user does something with the mouse, a KeyEvent occurs when the user presses a key, and so on. ActionEvent is a little special; we'll see it at work later in this chapter in our third version of HelloJava. For now, we'll focus on dealing with MouseEvents.
GUI components in Java generate events for specific kinds of user actions. For example, if you click the mouse inside a component, the component generates a mouse event. Objects can ask to receive the events from one or more components by registering a listener with the event source. For example, to declare that a listener wants to receive a component's mouse-motion events, you invoke that component's addMouseMotionListener( ) method, specifying the listener object as an argument. That's what our example is doing in its constructor. In this case, the component is calling its own addMouseMotionListener( ) method, with the argument this, meaning "I want to receive my own mouse-motion events."
That's how we register to receive events. But how do we actually get them? That's what the two mouse-related methods in our class are for. The mouseDragged( ) method is called automatically on a listener to receive the events generated when the user drags the mouse; that is, moves the mouse with any button pressed. The mouseMoved( ) method is called whenever the user moves the mouse over the area without pressing a button. In this case, we've placed these methods in our HelloComponent2 class and had it register itself as the listener. This is entirely appropriate for our new text-dragging component. More generally, good design usually dictates that event listeners be implemented as adapter classes that provide better separation of GUI and "business logic." We'll discuss that in detail later in the book.
Our mouseMoved( ) method is boring: it doesn't do anything. We ignore simple mouse motions and reserve our attention for dragging. mouseDragged( ) has a bit more meat to it. This method is called repeatedly by the windowing system to give us updates on the position of the mouse. Here it is:
public void mouseDragged( MouseEvent e ) { messageX = e.getX( ); messageY = e.getY( ); repaint( ); }
The first argument to mouseDragged( ) is a MouseEvent object, e, that contains all the information we need to know about this event. We ask the MouseEvent to tell us the x and y coordinates of the mouse's current position by calling its getX( ) and getY( ) methods. We save these in the messageX and messageY instance variables for use elsewhere.
The beauty of the event model is that you have to handle only the kinds of events you want. If you don't care about keyboard events, you just don't register a listener for them; the user can type all she wants, and you won't be bothered. If there are no listeners for a particular kind of event, Java won't even generate it. The result is that event handling is quite efficient.[*]
[*] Event handling in Java 1.0 was a very different story. Early on, Java did not have a notion of event listeners and all event handling happened by overriding methods in base GUI classes. This was both inefficient and led to poor design with a proliferation of highly specialized components.
While we're discussing events, we should mention another small addition we slipped into HelloJava2:
frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
This line tells the frame to exit the application when its close button is pressed. It's called the "default" close operation because this operation, like almost every other GUI interaction, is governed by events. We could register a window listener to get notification of when the user pushes the close button and take whatever action we like, but this convenience method handles the common cases.
Finally, we've danced around a couple of questions here: how does the system know that our class contains the necessary mouseDragged( ) and mouseMoved( ) methods (where do these names come from)? And why do we have to supply a mouseMoved( ) method that doesn't do anything? The answer to these questions has to do with interfaces. We'll discuss interfaces after clearing up some unfinished business with repaint( ).
Since we changed the coordinates for the message (when we dragged the mouse), we would like HelloComponent2 to redraw itself. We do this by calling repaint( ), which asks the system to redraw the screen at a later time. We can't call paintComponent( ) directly, even if we wanted to, because we don't have a graphics context to pass to it.
We can use the repaint( ) method of the JComponent class to request that our component be redrawn. repaint( ) causes the Java windowing system to schedule a call to our paintComponent( ) method at the next possible time; Java supplies the necessary Graphics object, as shown in Figure 2-4.
This mode of operation isn't just an inconvenience brought about by not having the right graphics context handy. The foremost advantage to this mode of operation is that the repainting behavior is handled by someone else while we are free to go about our business. The Java system has a separate, dedicated thread of execution that handles all repaint( ) requests. It can schedule and consolidate repaint( ) requests as necessary, which helps to prevent the windowing system from being overwhelmed during painting-intensive situations like scrolling. Another advantage is that all the painting functionality must be encapsulated through our paintComponent( ) method; we aren't tempted to spread it throughout the application.
Now it's time to face the question we avoided earlier: how does the system know to call mouseDragged( ) when a mouse event occurs? Is it simply a matter of knowing that mouseDragged( ) is some magic name that our event-handling method must have? Not quite; the answer to the question touches on the discussion of interfaces, which are one of the most important features of the Java language.
The first sign of an interface comes on the line of code that introduces the HelloComponent2 class: we say that the class implements the MouseMotionListener interface.
class HelloComponent2 extends JComponent implements MouseMotionListener {
Essentially, an interface is a list of methods that the class must have; this particular interface requires our class to have methods called mouseDragged( ) and mouseMoved( ). The interface doesn't say what these methods have to do; indeed, mouseMoved( ) doesn't do anything. It does say that the methods must take a MouseEvent as an argument and return no value (that's what void means).
An interface is a contract between you, the code developer, and the compiler. By saying that your class implements the MouseMotionListener interface, you're saying that these methods will be available for other parts of the system to call. If you don't provide them, a compilation error will occur.
That's not the only way interfaces impact this program. An interface also acts like a class. For example, a method could return a MouseMotionListener or take a MouseMotionListener as an argument. When you refer to an object by an interface name in this way it means that you don't care about the object's actual class; the only requirement is that the class implements that interface. addMouseMotionListener( ) is such a method: its argument must be an object that implements the MouseMotionListener interface. The argument we pass is this, the HelloComponent2 object itself. The fact that it's an instance of JComponent is irrelevant; it could be a Cookie, an Aardvark, or any other class we dream up. What's important is that it implements MouseMotionListener and, thus, declares that it will have the two named methods. That's why we need a mouseMoved( ) method, even though the one we supplied doesn't do anything: the MouseMotionListener interface says we have to have one.
The Java distribution comes with many interfaces that define what classes have to do. This idea of a contract between the compiler and a class is very important. There are many situations like the one we just saw where you don't care what class something is, you just care that it has some capability, such as listening for mouse events. Interfaces give us a way of acting on objects based on their capabilities without knowing or caring about their actual type. They are a tremendously important concept in how we use Java as an object-oriented language, and we'll talk about them in detail in Chapter 4.
We'll also see shortly that interfaces provide a sort of escape clause to the Java rule that any new class can extend only a single class ("single inheritance"). A class in Java can extend only one class, but can implement as many interfaces as it wants; our next example implements two interfaces and the final example in this chapter implements three. In many ways, interfaces are almost like classes, but not quite. They can be used as data types, can extend other interfaces (but not classes), and can be inherited by classes (if class A implements interface B, subclasses of A also implement B). The crucial difference is that classes don't actually inherit methods from interfaces; the interfaces merely specify the methods the class must have. | https://flylib.com/books/en/4.122.1.27/1/ | CC-MAIN-2021-25 | refinedweb | 3,041 | 54.22 |
How to export DataTable to XML in C#
by GetCodeSnippet.com • May 22, 2013 • Microsoft .NET C# (C-Sharp), Microsoft ASP.NET
Sometimes we need to export our DataSet or DataTable data to an XML file. ASP.NET provides a simple WriteXml() method on the DataTable and DataSet classes. This method simply writes the DataTable or DataSet data to the specified file. You can also achieve this by passing a StreamWriter object to the WriteXml() method of the DataTable or DataSet class.
Let’s see how we can do it. You can also download sample code at the bottom of the article.
Don’t forget to include the following namespaces in your code, as they are necessary.
Do any of you genius minds know how to hook a wxPython GUI back up to ArcMap?
import arcpy
import pythonaddins
import wx

class TestButtonClass(object):
    """Implementation for wxTest_addin.TestButton (Button)"""
    dlg = None

    def __init__(self):
        self.enabled = True
        self.checked = False

    def onClick(self):
        if self.dlg is None:
            self.dlg = TestDialog()
        else:
            self.dlg.Show(True)
        return

class TestDialog(wx.Frame):
    def __init__(self):
        wxStyle = wx.CAPTION | wx.RESIZE_BORDER | wx.MINIMIZE_BOX | wx.CLOSE_BOX | wx.SYSTEM_MENU
        wx.Frame.__init__(self, None, -1, "Set DataFrame Name", style=wxStyle, size=(300, 120))
        self.SetMaxSize((300, 120))
        self.SetMinSize((300, 120))
        self.Bind(wx.EVT_CLOSE, self.OnClose)
        panel = wx.Panel(self, -1)
        self.lblStatus = wx.StaticText(panel, -1, "OK", pos=(8, 8))
        wx.StaticText(panel, -1, "Title:", pos=(8, 36))
        self.tbTitle = wx.TextCtrl(panel, -1, value="", pos=(36, 36), size=(200, 21))
        self.btnSet = wx.Button(panel, label="Set", pos=(8, 66))
        self.Bind(wx.EVT_BUTTON, self.OnSet, id=self.btnSet.GetId())
        self.Show(True)

    def OnClose(self, event):
        self.Show(False)  # self.Destroy() doesn't work

    def OnSet(self, event):
        # Use str() to strip away Unicode crap
        sTitle = str(self.tbTitle.GetValue())
        mxd = arcpy.mapping.MapDocument("CURRENT")
        df = mxd.activeDataFrame
        df.name = sTitle
        arcpy.RefreshTOC()

app = wx.PySimpleApp()
app.MainLoop()
I have written a Python script using wxPython that runs inside ArcMap. It runs fine from the Python Command Line inside ArcMap, but if it is launched from an ArcMap Add-In, it runs once without a problem, but if it is run a second time in the same ArcMap session, it crashes ArcMap. If it is launched from a Toolbar Tool, it crashes ArcMap immediately. More details here:
Unfortunately, this currently does not work and is not supported. The low-level work of integrating any Python UI framework (including wxPython) with the event loop of the desktop applications is very difficult. It is something we are researching how to fix, but the scope of such a project is very big and isn't something that we feel can be done in the 10.1 timeframe. We do hope to support wxPython in the future as the Python UI framework for hooking into ArcGIS Desktop.
Thanks,
Jason Pardy
Esri | https://community.esri.com/thread/38336-wxpython-hooked-to-arcmap | CC-MAIN-2018-22 | refinedweb | 366 | 54.08 |
Where are the connections stored? Are they in a list? How do I access them?
I have a (-messy-) program that will allow the users to set there name and when they send input to the server it will come up with "Received: Blah from username" but I would like to send that to all of the people who are connected.
Also, how do I shut the connection to them?
from twisted.internet import reactor, protocol

PORT = 6661

class User(protocol.Protocol):
    connectionstat = 1
    name = ""

    def connectionMade(self):
        self.transport.write("Hello, What is your name?")

    def dataReceived(self, data):
        if self.connectionstat == 1:
            self.name = data
            self.connectionstat = 2
        else:
            print "Received: " + data.rstrip('\n') + " from " + self.name
            self.transport.write("You Sent: " + data)

def main():
    factory = protocol.ServerFactory()
    factory.protocol = User
    reactor.listenTCP(PORT, factory)
    print "Running Echo..."
    reactor.run()

if __name__ == '__main__':
    main()
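Twisted doesn't hand you a ready-made list of connections; the usual pattern (my suggestion, not something from this thread) is to make the factory keep its own list of connected protocol instances: append self in connectionMade, remove it in connectionLost, and loop over the list to broadcast. Here is a framework-free sketch of just that bookkeeping — the class names are invented, and the spots where real Twisted classes would be used are noted in comments:

```python
class FakeTransport:
    """Stand-in for a Twisted transport so the pattern can be shown without a reactor."""
    def __init__(self):
        self.sent = []

    def write(self, data):
        self.sent.append(data)


class ChatFactory:
    """In real code this would subclass twisted.internet.protocol.ServerFactory."""
    def __init__(self):
        self.clients = []  # every currently connected protocol instance

    def broadcast(self, message):
        for client in self.clients:
            client.transport.write(message)


class User:
    """In real code this would subclass protocol.Protocol; Twisted sets .factory for you."""
    def __init__(self, factory):
        self.factory = factory
        self.transport = FakeTransport()

    def connectionMade(self):
        self.factory.clients.append(self)  # register this connection

    def connectionLost(self, reason=None):
        self.factory.clients.remove(self)  # deregister on disconnect

    def dataReceived(self, data):
        # relay whatever one client sends to everybody, including the sender
        self.factory.broadcast("Broadcast: " + data)


factory = ChatFactory()
alice, bob = User(factory), User(factory)
alice.connectionMade()
bob.connectionMade()
alice.dataReceived("hello")
print(len(factory.clients), bob.transport.sent)
```

In real Twisted you would drop a client by calling its transport.loseConnection(); connectionLost then fires and the bookkeeping above removes it from the list.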
Thanks | https://www.daniweb.com/programming/software-development/threads/364837/twisted-connections | CC-MAIN-2017-47 | refinedweb | 149 | 54.49 |
Hi, does anyone know a way to limit the attributes returned from a WFS? I just want to return the unique ID so I can cross-check it against a clone of the data.
This structure works in Postman, but I don't see a way to make it work in the FME reader. Does something need to be added to the XML filter?

service=wfs&
version=2.0.0&
request=GetFeature&
typeNames=namespace:featuretype&
featureID=feature&
propertyName=attribute1,attribute2
Many Thanks
Oliver
Hi @olivermorris,
You can just add the extra request parameters into the WFS reader.
Like so:
Hope this helps,
Itay
thanks @itay I will try with the reader, i was using the feature_reader and I dont think you have the same ability - it seems to override the URL.
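If the feature reader keeps overriding the URL, one fallback (outside FME entirely — this is just an illustration, and the endpoint and type names below are placeholders carried over from the question) is to assemble the GetFeature request yourself and fetch it as plain XML:

```python
from urllib.parse import urlencode

def build_getfeature_url(base_url, type_name, property_names):
    """Assemble a WFS 2.0 GetFeature URL that asks for only the listed attributes."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        # propertyName limits which attributes the server returns
        "propertyName": ",".join(property_names),
    }
    return base_url + "?" + urlencode(params)

url = build_getfeature_url("https://example.com/wfs", "namespace:featuretype", ["attribute1"])
print(url)
```

The resulting URL can then be handed to any HTTP client (or an XML reader) to pull back just the ID column.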
PEP 8 – Style Guide for Python Code
- Author:
- Guido van Rossum <guido at python.org>, Barry Warsaw <barry at python.org>, Nick Coghlan <ncoghlan at gmail.com>
- Status:
- Active
- Type:
- Process
- Created:
- 05-Jul-2001
- Post-History:
- 05-Jul-2001, 01-Aug-2013
Table of Contents
- Introduction
- A Foolish Consistency is the Hobgoblin of Little Minds
- Code Lay-out
- String Quotes
- Whitespace in Expressions and Statements
- When to Use Trailing Commas
Code Lay-out

Indentation

Use 4 spaces per indentation level.

Continuation lines should align wrapped elements either vertically using Python's implicit line joining inside parentheses, brackets and braces, or using a hanging indent [1]. When using a hanging indent the following should be considered: there should be no arguments on the first line and further indentation should be used to clearly distinguish itself as a continuation line:
# Correct:

# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# Add 4 spaces (an extra level of indentation) to distinguish arguments from the rest.
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)

# Hanging indents should add a level.
foo = long_function_name(
    var_one, var_two,
    var_three, var_four)
# Wrong:

# Arguments on first line forbidden when not using vertical alignment.
foo = long_function_name(var_one, var_two,
    var_three, var_four)

# Further indentation required as indentation is not distinguishable.
def long_function_name(var_one, var_two, var_three,
    var_four):
    print(var_one)

Tabs or Spaces?

Spaces are the preferred indentation method. Tabs should be used solely to remain consistent with code that is already indented with tabs. Python disallows mixing tabs and spaces for indentation.

Maximum Line Length

Limit all lines to a maximum of 79 characters. For flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters.

Some teams strongly prefer a longer line length. For code maintained exclusively or primarily by a team that can reach agreement on this issue, it is okay to increase the line length limit up to 99 characters, provided that comments and docstrings are still wrapped at 72 characters.

The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation.

Backslashes may still be appropriate at times. For example, long, multiple with-statements could not use implicit continuation before Python 3.10, so backslashes were acceptable for that case:

with open('/path/to/some/file/you/want/to/read') as file_1, \
     open('/path/to/some/file/being/written', 'w') as file_2:
    file_2.write(file_1.read())

Should a Line Break Before or After a Binary Operator?

Following the tradition from mathematics usually results in more readable code: break before binary operators, so that operators line up with their operands:

# Correct: easy to match operators with operands
income = (gross_wages
          + taxable_interest
          - ira_deduction)

Blank Lines

Surround top-level function and class definitions with two blank lines. Method definitions inside a class are surrounded by a single blank line.

Source File Encoding

Code in the core Python distribution should always use UTF-8, and should not have an encoding declaration.
In the standard library, non-UTF-8 encodings should be used only for test purposes. Use non-ASCII characters sparingly, preferably only to denote places and human names. If using non-ASCII characters as data, avoid noisy Unicode characters like z̯̯͡a̧͎̺l̡͓̫g̹̲o̡̼̘ and byte order marks.
All identifiers in the Python standard library MUST use ASCII-only identifiers, and SHOULD use English words wherever feasible (in many cases, abbreviations and technical terms are used which aren’t English).
Open source projects with a global audience are encouraged to adopt a similar policy.
Imports
- Imports should usually be on separate lines:
# Correct:
import os
import sys
# Wrong: import sys, os
It’s okay to say this though:
# Correct:
from subprocess import Popen, PIPE

- Imports should be grouped in the following order: standard library imports, related third party imports, local application/library specific imports. You should put a blank line between each group of imports.
- Absolute imports are recommended, as they are usually more readable and tend to be better behaved (or at least give better error messages) if the import system is incorrectly configured.
- When importing a class from a class-containing module, it’s usually okay to spell this:
from myclass import MyClass
from foo.bar.yourclass import YourClass
If this spelling causes local name clashes, then spell them explicitly:

import myclass
import foo.bar.yourclass

and use myclass.MyClass and foo.bar.yourclass.YourClass.

- Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.

Whitespace in Expressions and Statements

Other Recommendations

- Always surround these binary operators with a single space on either side: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), Booleans (and, or, not).
- If operators with different priorities are used, consider adding whitespace around the operators with the lowest priority(ies). Use your own judgment; however, never use more than one space, and always have the same amount of whitespace on both sides of a binary operator:
# Correct:
i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)

# Wrong:
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)
- Function annotations should use the normal rules for colons and always have spaces around the -> arrow if present. (See Function Annotations below for more about function annotations.):
# Correct:
def munge(input: AnyStr): ...
def munge() -> PosInt: ...

# Wrong:
def munge(input:AnyStr): ...
def munge()->PosInt: ...
- Don’t use spaces around the = sign when used to indicate a keyword argument, or when used to indicate a default value for an unannotated function parameter:
# Correct:
def complex(real, imag=0.0):
    return magic(r=real, i=imag)

# Wrong:
def complex(real, imag = 0.0):
    return magic(r = real, i = imag)
When combining an argument annotation with a default value, however, do use spaces around the = sign:
# Correct:
def munge(sep: AnyStr = None): ...
def munge(input: AnyStr, sep: AnyStr = None, limit=1000): ...

# Wrong:
def munge(input: AnyStr=None): ...
def munge(input: AnyStr, limit = 1000): ...
- Compound statements (multiple statements on the same line) are generally discouraged:
# Correct:
if foo == 'blah':
    do_blah_thing()
do_one()
do_two()
do_three()
Rather not:
# Wrong:
if foo == 'blah': do_blah_thing()
do_one(); do_two(); do_three()
- While sometimes it’s okay to put an if/for/while with a small body on the same line, never do this for multi-clause statements. Also avoid folding such long lines!
Rather not:
# Wrong:
if foo == 'blah': do_blah_thing()
for x in lst: total += x
while t < 10: t = delay()
Definitely not:
# Wrong:
if foo == 'blah': do_blah_thing()
else: do_non_blah_thing()

try: something()
finally: cleanup()

do_one(); do_two(); do_three(long, argument,
                             list, like, this)

if foo == 'blah': one(); two(); three()
When to Use Trailing Commas
Trailing commas are usually optional, except they are mandatory when making a tuple of one element. For clarity, it is recommended to surround the latter in (technically redundant) parentheses:
# Correct:
FILES = ('setup.cfg',)

# Wrong:
FILES = 'setup.cfg',
When trailing commas are redundant, they are often helpful when a version control system is used, when a list of values, arguments or imported items is expected to be extended over time. The pattern is to put each value (etc.) on a line by itself, always adding a trailing comma, and add the close parenthesis/bracket/brace on the next line. However it does not make sense to have a trailing comma on the same line as the closing delimiter (except in the above case of singleton tuples):
# Correct:
FILES = [
    'setup.cfg',
    'tox.ini',
    ]
initialize(FILES,
           error=True,
           )

# Wrong:
FILES = ['setup.cfg', 'tox.ini',]
initialize(FILES, error=True,)

Naming Conventions

Descriptive: Naming Styles

When using acronyms in CapWords, capitalize all the letters of the acronym. Thus HTTPServerError is better than HttpServerError.

Never use the characters 'l' (lowercase letter el), 'O' (uppercase letter oh), or 'I' (uppercase letter eye) as single character variable names. In some fonts, these characters are indistinguishable from the numerals one and zero.

Prescriptive: Naming Conventions
ASCII Compatibility
Identifiers used in the standard library must be ASCII compatible as described in the policy section of PEP 3131..
Type Variable Names
Names of type variables introduced in PEP 484 should normally use CapWords preferring short names: T, AnyStr, Num. It is recommended to add suffixes _co or _contra to the variables used to declare covariant or contravariant behavior correspondingly:

from typing import TypeVar

VT_co = TypeVar('VT_co', covariant=True)
KT_contra = TypeVar('KT_contra', contravariant=True)

Function and Variable Names
Function names should be lowercase, with words separated by underscores as necessary to improve readability.
Variable names follow the same convention as function names.

mixedCase is allowed only in contexts where that's already the prevailing style (e.g. threading.py), to retain backwards compatibility.

Designing for Inheritance

For simple public data attributes, it is best to expose just the attribute name; Python provides an easy path to future enhancement via properties, should you find that a simple data attribute needs to grow functional behavior.

Note 1: Try to keep the functional behavior side-effect free, although side-effects such as caching are generally fine.

Note 2: Avoid using properties for computationally expensive operations; the attribute notation makes the caller believe that access is (relatively) cheap.

Programming Recommendations

- Use is not operator rather than not ... is. While both expressions are functionally identical, the former is more readable and preferred:
# Correct:
if foo is not None:

# Wrong:
if not foo is None:

- When implementing ordering operations with rich comparisons, it is best to implement all six operations (__eq__, __ne__, __lt__, __le__, __gt__, __ge__) rather than relying on other code to only exercise a particular comparison. To minimize the effort involved, the functools.total_ordering() decorator provides a tool to generate missing comparison methods. PEP 207 indicates that reflexivity rules are assumed by Python. Thus, the interpreter may swap y > x with x < y, y >= x with x <= y, and may swap the arguments of x == y and x != y.
- Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier:
# Correct:
def f(x): return 2*x

# Wrong:
f = lambda x: 2*x
The first form means that the name of the resulting function object is specifically ‘f’ instead of the generic ‘<lambda>’. This is more useful for tracebacks and string representations in general. The use of the assignment statement eliminates the sole benefit a lambda expression can offer over an explicit def statement (i.e. that it can be embedded inside a larger expression)
- Derive exceptions from Exception rather than BaseException. Direct inheritance from BaseException is reserved for exceptions where catching them is almost always the wrong thing to do.
- Use exception chaining appropriately. raise X from Y should be used to indicate explicit replacement without losing the original traceback.
When deliberately replacing an inner exception (using
raise X from None), ensure that relevant details are transferred to the new exception (such as preserving the attribute name when converting KeyError to AttributeError, or embedding the text of the original exception in the new exception message).
- When catching exceptions, mention specific exceptions whenever possible instead of using a bare except: clause:

try:
    import platform_specific_module
except ImportError:
    platform_specific_module = None

A bare except: clause will catch SystemExit and KeyboardInterrupt exceptions, making it harder to interrupt a program with Control-C, and can disguise other problems. If you want to catch all exceptions that signal program errors, use except Exception: (bare except is equivalent to except BaseException:).

A good rule of thumb is to limit use of bare ‘except’ clauses to two cases:
- If the exception handler will be printing out or logging the traceback; at least the user will be aware that an error has occurred.
- If the code needs to do some cleanup work, but then lets the exception propagate upwards with raise. try...finally can be a better way to handle this case.
- When catching operating system errors, prefer the explicit exception hierarchy introduced in Python 3.3 over introspection of errno values.
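For example, rather than catching the broad OSError and inspecting its errno attribute, catch the specific subclass directly (the path below is a dummy that is guaranteed not to exist):

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "missing.txt")  # a file that does not exist

# Old style: catch the broad error and introspect errno.
try:
    open(path)
except OSError as exc:
    if exc.errno == errno.ENOENT:
        old_style = "missing"

# Preferred since Python 3.3: let the exception hierarchy discriminate for you.
try:
    open(path)
except FileNotFoundError:
    new_style = "missing"

print(old_style == new_style)  # -> True
```

Both branches detect the same condition, but the FileNotFoundError form states the intent directly and cannot accidentally swallow unrelated OS errors.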
- Additionally, for all try/except clauses, limit the try clause to the absolute minimum amount of code necessary. Again, this avoids masking bugs:
# Correct: try: value = collection[key] except KeyError: return key_not_found(key) else: return handle_value(value)
# Wrong: try: # Too broad! return handle_value(collection[key]) except KeyError: # Will also catch KeyError raised by handle_value() return key_not_found(key)
- When a resource is local to a particular section of code, use a with statement to ensure it is cleaned up promptly and reliably after use. A try/finally statement is also acceptable.
- Context managers should be invoked through separate functions or methods whenever they do something other than acquire and release resources:
# Correct:
with conn.begin_transaction():
    do_stuff_in_transaction(conn)

# Wrong:
with conn:
    do_stuff_in_transaction(conn)

The latter example doesn't provide any information to indicate that the __enter__ and __exit__ methods are doing something other than closing the connection after a transaction. Being explicit is important in this case.

- Be consistent in return statements. Either all return statements in a function should return an expression, or none of them should. If any return statement returns an expression, any return statements where no value is returned should explicitly state this as return None, and an explicit return statement should be present at the end of the function (if reachable):

# Correct:
def foo(x):
    if x >= 0:
        return math.sqrt(x)
    else:
        return None

# Wrong:
def foo(x):
    if x >= 0:
        return math.sqrt(x)
- Use ''.startswith() and ''.endswith() instead of string slicing to check for prefixes or suffixes. startswith() and endswith() are cleaner and less error prone:
# Correct:
if foo.startswith('bar'):

# Wrong:
if foo[:3] == 'bar':
- Object type comparisons should always use isinstance() instead of comparing types directly:
# Correct:
if isinstance(obj, int):

# Wrong:
if type(obj) is type(1):
- For sequences, (strings, lists, tuples), use the fact that empty sequences are false:
# Correct:
if not seq:
if seq:

# Wrong:
if len(seq):
if not len(seq):

- Don't compare boolean values to True or False using ==:
# Correct:
if greeting:

# Wrong:
if greeting == True:
Worse:
# Wrong:
if greeting is True:
- Use of the flow control statements return/break/continue within the finally suite of a try...finally, where the flow control statement would jump outside the finally suite, is discouraged. This is because such statements will implicitly cancel any active exception that is propagating through the finally suite:
# Wrong:
def foo():
    try:
        1 / 0
    finally:
        return 42
Function Annotations
With the acceptance of PEP 484, the style rules for function annotations have changed.
- Function annotations should use PEP 484 syntax (there are some formatting recommendations for annotations in an earlier section of this document).
- The experimentation with annotation styles that was recommended previously in this PEP is no longer encouraged. However, outside the stdlib, experiments within the rules of PEP 484 are now encouraged: for example, marking up a large third party library or application with PEP 484 style type annotations and observing whether their presence increases code understandability.
- Like linters, type checkers are optional, separate tools. Python interpreters by default should not issue any messages due to type checking and should not alter their behavior based on annotations.
Variable Annotations
PEP 526 introduced variable annotations. The style recommendations for them are similar to those on function annotations described above:
- Annotations for module level variables, class and instance variables, and local variables should have a single space after the colon, and no space before the colon.
- If an assignment has a right hand side, then the equality sign should have exactly one space on both sides:

# Correct:
code: int

class Point:
    coords: Tuple[int, int]
    label: str = '<unknown>'

# Wrong:
code:int  # No space after colon
code : int  # Space before colon

class Test:
    result: int=0  # No spaces around equality sign
- Although the PEP 526 is accepted for Python 3.6, the variable annotation syntax is the preferred syntax for stub files on all versions of Python (see PEP 484 for details).
Footnotes

[1] Hanging indentation is a type-setting style where all the lines in a paragraph are indented except the first line. In the context of Python, the term is used to describe a style where the opening parenthesis of a parenthesized statement is the last non-whitespace character of the line, with subsequent lines being indented until the closing parenthesis.
This document has been placed in the public domain.
Comments

Comments that contradict the code are worse than no comments. Always make a priority of keeping the comments up-to-date when the code changes!

Comments should be complete sentences. The first word should be capitalized, unless it is an identifier that begins with a lower case letter (never alter the case of identifiers!).

Block comments generally consist of one or more paragraphs built out of complete sentences, with each sentence ending in a period.

You should use two spaces after a sentence-ending period in multi-sentence comments, except after the final sentence.

Ensure that your comments are clear and easily understandable to other speakers of the language you are writing in.

Documentation Strings

Write docstrings for all public modules, functions, classes, and methods. Docstrings are not necessary for non-public methods, but you should have a comment that describes what the method does. This comment should appear after the def line.

Note that most importantly, the """ that ends a multiline docstring should be on a line by itself:

"""Return a foobang

Optional plotz says to frobnicate the bizbaz first.
"""

For one liner docstrings, please keep the closing """ on the same line:

"""Return an ex-parrot."""

Source:

Last modified: 2022-05-11 17:45:05 GMT
Last week, I talked about setting up a dynamic personal web page on Google AppEngine. I mentioned that it would be possible to grow and expand your page beyond the simplistic informational page I have shown you. More specifically, I mentioned running blogs and forums out of it. I already described how to set up a AppEngine based forum so there is no need to reiterate that here. Today I wanted to talk about running your own blog on top of the service.
No, I don’t expect you to write your own blog software, just like you didn’t have to write the forum software. Believe it or not, blogging engines are being ported to, or designed for, this platform. One such engine is Bloog, written and maintained by Bill Katz. As far as I can tell, it is one of the first blog packages that runs on AppEngine, but there will be more to come – I’m sure of it.
Bloog may not be the most sophisticated or most configurable blogging platform – it’s not WordPress. But it is fully functional and it has everything you may need for day-to-day blogging. It supports reader comments and tagging, has a dynamic contact form and dynamic archives by year, automatically generates an Atom feed, and the template is pre-configured to let you include vertical ad banners in the sidebar. No bells and whistles, mind you, but solid functionality.
Bloog is on GitHub, and you are probably best off pulling the source code from there. If you are a bit old-fashioned like me and haven’t caught the git bug yet, GitHub has a wonderful feature that will let you download all of the source code from the repository with a single mouse click, as either a zip file or a tarball. Unfortunately, if you do that, you will get an incomplete version of the code.
I haven’t really used git, so maybe someone with more experience can verify whether what I’m saying is correct. From what I’ve seen, if you are using git, you can set up something akin to symlinks that point to other projects on GitHub. When you check the code out, git will notice the dependency, follow the link to download the latest and greatest code for it from its own repository, and put it in the correct directory in your project. This is very neat, but unfortunately it breaks down when you use the “download code as a tarball” feature. The “foreign” code is not fetched, so you end up with empty directories where it was supposed to be.
Bloog depends on Firepython, which should be in the utils/external/firepython/ directory. But it’s not there, and you will probably spend a few minutes scratching your head trying to figure it out. Fortunately I stumbled upon this discussion to help me out.
Essentially, what you have to do is grab a Firepython tarball and extract it into utils/external/firepython/ in your Bloog directory. After you do that, you are ready to go. Here is how my Bloog turned out:
The administration UI is vestigial, so to change things such as the blog title, author and the sidebar links you need to manually edit config.py in the root directory of your project. It’s actually very straightforward – just a long associative list. There is nothing to it, and even someone with no knowledge of Python should be able to do it.
The sidebar banner (I changed it to that liquid drip thing just to show you it’s possible) is defined in a static HTML file located in \views\default\bloog\ads.html. Just delete whatever was there and insert your own personalized vertical banner or ad block.
Once you do that, you can just deploy it to Google AppEngine and enjoy your own little blog. That’s really all there is to it. They could make it easier to install, but really the Firepython thing is my fault. If I used git, it would probably never happen.
There you go – yet another awesome thing you can do with Google Apps. You can blog on it, run discussion forums, create personal pages – the possibilities are endless. When it first came out, a lot of people dismissed it as a mere toy, but it’s not. It is a glimpse of what cloud computing can do for you these days! It is a glimpse of how a lot of hosting might be done in the future.
Of course, standard disclaimers apply – you are hosting your shit on Google’s cloud for free, which means you don’t really own it. And since it’s Google, it will probably harvest your blog for data which it will use maliciously in its quest to achieve sentience. So you have to factor that into the equation when you are deciding whether or not you want to use this service.
Arguably, in this day and age there are dozens of free blogging platforms out there. Most of them will let you use your own domain name too. So running a blog on AppEngine is probably not the most efficient way to go about blogging. But it is kind of awesome – and if you can hack Python code and are not afraid to get your hands dirty, you get much more control over how your blog looks and what it does than with one of these free services.
I wouldn’t probably recommend running your “mission critical” blog on this platform. But for AppEngine enthusiasts like me, Bloog is yet another fun app to mess around with.
Your posts on Google App Engine inspired me to check it out. But I have to learn python now! :|
Do you use a nice IDE for your python hacks? Or are you using Notepad++ to do every single thing that is just basically text (aka. my current way :P)?
@Mart: Well, I’ve been using Vim for most of the Python stuff lately. So yeah – text editor type thing.
If you do want a lightweight IDE Komodo Edit is pretty decent – it does have good Python support and an OK Vim key bindings (it’s missing a lot of advanced Vim features though).
Pingback: Terminally Incoherent » Blog Archive » What is your favorite code editor?
I recently forked an existing google app engine / python / django blog and added new features and fixed bugs. It’s pretty nice and has lots of social media plugins ready to go. There is still much to do and I have high aspirations for it. Take a look:
App Engine Blog
Hi, I encounter a problem when use the bloog. Do you have any ideas ?
Here’s the error message:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py" , line 507, in __call__
handler.get(*groups)
File "D:\GAE\bloog\handlers\bloog\blog.py", line 371, in get
filter('article_type =', 'blog entry').order('-published'))
File "D:\GAE\bloog\view.py", line 246, in render_query
self.render(handler, render_params)
File "D:\GAE\bloog\view.py", line 224, in render
output = self.render_or_get_cache(handler, template_info, params)
File "D:\GAE\bloog\view.py", line 211, in render_or_get_cache
output = self.full_render(handler, template_info, template_params)
File "D:\GAE\bloog\view.py", line 172, in full_render
tags = Tag.list()
File "D:\GAE\bloog\models\__init__.py", line 181, in list
list_repr = '[' + ','.join([obj._to_repr() for obj in objs]) + ']'
File "D:\GAE\bloog\models\__init__.py", line 152, in _to_repr
self._to_entity))
File "D:\GAE\bloog\models\__init__.py", line 58, in to_dict
init_dict_func(values)
File "C:\Program Files\Google\google_appengine\google\appengine\ext\db\__init__.py", line 765, in _to_entity
entity.set_unindexed_properties(self._unindexed_properties)
AttributeError: 'dict' object has no attribute 'set_unindexed_properties'
For those just getting started with the app engine in Windows, the tutorial Google has up has a few flaws. I’ve documented a couple of them and posted some workarounds here.
For those who are looking for a blogging engine running on Google App Engine, you can check out Hoydaa Blog. It is still in beta, but I think it is worth considering. BTW, here is my blog using it.
I would like to know how to integrate blog with my web site.
I am looking for a programmer to develop a dating site running on Google Apps Engine. Interested, please contact me at chinesesociopolitics at …………. gmail com
@ GAE developer wanted:
Uh, oh! Let me guess:
* You have a great idea for a website, but no programming skills so you are looking for a sucker who will do *ALL OF THE WORK* while you sit back and “supervise”
* You can’t offer a salary so you expect the programmer to work for free until some unspecified time in the future when the site becomes profitable
* The site is totally going to be an overnight hit that will blow match, harmony, okcupid and a million other dating sites out of the water.
Honestly, here is what I think:
– dating site market is over saturated so your chances of success are slim
– no competent programmer will ever work for you for free on this project
– if you find an idiot who will, good luck – I guarantee his code will be shit
Now suggestions:
– learn how to program and do it yourself
Or at least start it. If you make it an interesting project, people will want to join in. If you sit on your ass and say “fetch me a programmer type person, I shall give him my brilliant idea to develop” people will just laugh at you. Do you know how often we get this kind of offers?
At least twice a week. No seriously. Every single time I meet someone and tell them what I do they either:
– ask me to fix their laptop for free
– make me a business proposition kinda like yours
So, do it yourself. If you can’t do it yourself, you have to spend money up front. Start a company, hire programmers, offer them salaries and benefits and you have a chance. “I have a great idea for a website” is a joke.
Also, last time I checked this was a blog and not classified ads or rent-a-coder website. What made you think that it is ok to post a “developer wanted” ad here?
Do you know what we call unsolicited ads like this? We call them spam. You sir are a filthy, unwashed, spammer.
So, please kindly go fuck yourself, die in a fire and go to hell. In that precise order.
hi, I’m doing everything like you say (I extracted the second archive into the firepython folder, I tried with direct extraction, with extraction into a sub folder, but still…), but I get the following message:
Traceback (most recent call last):
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 3245, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 3186, in _Dispatch
base_env_dict=env_dict)
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 531, in Dispatch
base_env_dict=base_env_dict)
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 2410, in Dispatch
self._module_dict)
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 2320, in ExecuteCGI
reset_modules = exec_script(handler_path, cgi_path, hook)
File “C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py” , line 2216, in ExecuteOrImportScript
exec module_code in script_module.__dict__
File “D:\google\DocSavage-bloog-346e5fb\main.py”, line 36, in
from firepython.middleware import FirePythonWSGI
ImportError: No module named firepython.middleware
can some one help me please | http://www.terminally-incoherent.com/blog/2009/03/05/running-a-blog-on-google-appengine/ | CC-MAIN-2015-48 | refinedweb | 1,958 | 65.12 |
I've read the thread When do Ruby instance variables get set? but I'm in two minds about when to use class instance variables.
Class variables are shared by all objects of a class, while instance variables belong to one object. That doesn't seem to leave much room to use class instance variables if we have class variables...
Could you explain to me difference between these two and when to use them?
code example:
class S
  @@k = 23
  @s = 15

  def self.s
    @s
  end

  def self.k
    @@k
  end
end

p S.s #15
p S.k #23
Instance variable on a class:
class Parent
  @things = []

  def self.things
    @things
  end

  def things
    self.class.things
  end
end

class Child < Parent
  @things = []
end

Parent.things << :car
Child.things  << :doll

mom = Parent.new
dad = Parent.new

p Parent.things #=> [:car]
p Child.things  #=> [:doll]
p mom.things    #=> [:car]
p dad.things    #=> [:car]
Class variable:
class Parent
  @@things = []

  def self.things
    @@things
  end

  def things
    @@things
  end
end

class Child < Parent
end

Parent.things << :car
Child.things  << :doll

p Parent.things #=> [:car,:doll]
p Child.things  #=> [:car,:doll]

mom = Parent.new
dad = Parent.new
son1 = Child.new
son2 = Child.new
daughter = Child.new

[ mom, dad, son1, son2, daughter ].each{ |person| p person.things }
#=> [:car, :doll]
#=> [:car, :doll]
#=> [:car, :doll]
#=> [:car, :doll]
#=> [:car, :doll]
With an instance variable on a class (not on an instance of that class) you can store something common to that class without having sub-classes automatically also get them (and vice-versa). With class variables, you have the convenience of not having to write
self.class from an instance object, and (when desirable) you also get automatic sharing throughout the class hierarchy. | https://codedump.io/share/NxTqJVosFG1M/1/ruby-class-instance-variable-vs-class-variable | CC-MAIN-2017-17 | refinedweb | 281 | 80.99 |
Hi, I am learning to work in Java. I have gone through the basic concepts, understood them, and experimented with small programs successfully. The point where I have been stuck is garbage collection, the finalize() method, and the System.gc() method. I have searched these on Google but all the explanations are in high-tech language which is difficult for me to understand.

My question is: is it necessary to use the finalize() method with the System.gc() method? What is the use of the finalize() method in Java? My code snippets are as follows.
public class Student {
    private String name;
    private int rollNo;
    private static int countStudents = 0;

    // Standard setters
    public void setName(String name) {
        this.name = name;
    }

    // Masking of the class-level variable rollNo
    public void setRollNo(int rollNo) {
        if (rollNo > 0) {
            this.rollNo = rollNo;
        } else {
            this.rollNo = 100;
        }
    }

    // Getter for the static countStudents variable
    public static int getCountStudents() {
        return countStudents;
    }

    // Default constructor
    public Student() {
        name = "not set";
        rollNo = 100;
        countStudents += 1;
    }

    // Parameterized constructor (used by the Test class below)
    public Student(String name, int rollNo) {
        setName(name);
        setRollNo(rollNo);
        countStudents += 1;
    }

    // Method to display values on screen
    public void print() {
        System.out.println("Student Name:" + name);
        System.out.println("Roll No:" + rollNo);
    }

    // Overriding the finalize method of the Object class
    public void finalize() {
        countStudents -= 1;
    }
} // end of class
The implementation of the above class is as follows.
public class Test {
    public static void main(String[] args) {
        int numObjects;

        // Print the current number of objects, i.e. 0
        numObjects = Student.getCountStudents();
        System.out.println("Students Objects" + numObjects);

        // Create the first student object & print its values
        Student s1 = new Student("ali", 15);
        System.out.println("Student:" + s1.toString());

        // Lose the object reference
        s1 = null;

        // Request that the JVM run the garbage collector,
        // but there is no guarantee that it will run
        System.gc();

        // Print the current number of objects, i.e. unpredictable
        numObjects = Student.getCountStudents();
        System.out.println("Students Objects" + numObjects);
    } // end of main
} // end of class
Thank You! for your reply/suggestion. | https://www.daniweb.com/programming/software-development/threads/379505/use-of-finalize-method-java-newbie | CC-MAIN-2018-05 | refinedweb | 296 | 50.33 |
The QStyleOptionQ3ListViewItem class is used to describe an item drawn in a Q3ListView. More...
#include <QStyleOptionQ3ListViewItem>
Inherits QStyleOption.
The QStyleOptionQ3ListViewItem class is used to describe an item drawn in a Q3ListView.
This class is used for drawing the compatibility Q3ListView's items. It is not recommended for new classes.
QStyleOptionQ3ListViewItem contains all the information that QStyle functions need to draw the Q3ListView items.

See also QStyleOption, Q3ListView, and Q3ListViewItem.
This enum describes the features a list view item can have.
The Q3ListViewItemFeatures type is a typedef for QFlags<Q3ListViewItemFeature>. It stores an OR combination of Q3ListViewItemFeature values.
See also features, Q3ListViewItem::isVisible(), Q3ListViewItem::multiLinesEnabled(), and Q3ListViewItem::isExpandable().

Constructs a QStyleOptionQ3ListViewItem, initializing the member variables to their default values.
Constructs a copy of the other style option.
This variable holds the number of children the item has.
This variable holds the features for this item.
This variable is a bitwise OR of the features of the item. The default value is None.
See also Q3ListViewItemFeature.
This variable holds the height of the item.
This doesn't include the height of the item's children. The default height is 0.
See also Q3ListViewItem::height().
This variable holds the Y-coordinate for the item.
The default value is 0.
See also Q3ListViewItem::itemPos().
This variable holds the total height of the item, including its children.
The default total height is 0.
See also Q3ListViewItem::totalHeight(). | http://doc.qt.nokia.com/4.6-snapshot/qstyleoptionq3listviewitem.html | crawl-003 | refinedweb | 226 | 71.92 |
There are a couple of options:
- Add your treenodes in code and specify NavigateUrl property.
- Add nodes dynamically using the TreeNodePopulate event handler
- the example is using a datasource but you can adapt it to your needs.
I can load the root level fine. But how do I show the [+] sign if there are no nodes yet? I can click the Root node and go to the URL, but I have no way of expanding it because the sub nodes aren't there yet.
I must be missing something.
Thanks
Ron
What I would suggest is that you have a daily timer job running (or hourly) that creates the hierarchy in XML and go from there.
You can use the following to get the subinformation of your sites:
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;

namespace ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SPSite site = new SPSite(""))
            {
                using (SPWeb web = site.OpenWeb("test"))
                {
                    string format = "{0, -30} {1, -5} {2}";
                    Console.WriteLine(format, "Title", "Id", "Url");
                    foreach (SPNavigationNode node in web.Navigation.GlobalNodes)
                    {
                        Console.WriteLine(format, node.Title, node.Id, node.Url);
                    }
                }
            }
            Console.Write("\nPress ENTER to continue....");
            Console.ReadLine();
        }
    }
}
We thought of that but we need the list to be security trimmed.
SharePoint 2010 removed the My Links list and we are trying to replace it (without its limitations).
Our Thought process is:
Get a list of all site collections and check the logged in user's security.
When the [+] is clicked next to a site collection, go get the next level (check security)
Repeat for every level of sub sites
This way we are only getting one level at a time.
We've successfully done this for all sites at all levels, but the load time is 12+seconds and is unacceptable.
We're trying to reduce the load time by only getting one level at a time.
Thanks
Ron
PS: Once we figure this out, I'll post the code for whoever would like it. :-)
Can you post your code/markup? It seems that the populate-on-demand nodes may not be set up correctly.
Thanks,
Note: Another alternative that I used before, if the treeview gets very slow because of the number of nodes, is a third-party control called Obout Tree.
Also we are looking into KoenVosters' suggestion to run a timer job to export an XML file of all sites. We believe that it may be faster (I emphasize may) to read the XML instead of hitting the SP server multiple times.
In addition, we have created a new project on CodePlex called SharePoint 2010 My Links Web Part and will be releasing the source code once I've had it cleaned up and commented properly.
Thanks again for all your help.
Now, If I could only convince Microsoft that they should add a Sort function to the SPSite collections... (we had to manually sort the sites in alpha order :-( ) | https://www.experts-exchange.com/questions/27992755/Expandable-list-of-URLs.html | CC-MAIN-2018-26 | refinedweb | 501 | 72.36 |
A user of your app emails you and tells you that they would love it if the greeting screen were in Spanish.
def greeting(): print("Hello!")
You think to yourself, “No problem! I’ll just use a dictionary!”
dictionary = {
    "Hello!": "Hola!",
}

def greeting():
    print(dictionary.get("Hello!"))
Now, after running greeting(), you get “Hola!”. Perfect! You’re done localizing it into Spanish.
Afterwards, Mr. Foobar requests that you localize it into French as well.
Except, you can’t, because dictionaries are one to one. You can’t add a new key value pair of “Hello!” and “Bonjour!”.
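You can see the limitation directly in the interpreter — assigning a second "translation" to the same key silently replaces the first:

```python
translations = {"Hello!": "Hola!"}

# Trying to "add" a French entry under the same key just overwrites the Spanish one:
translations["Hello!"] = "Bonjour!"

print(translations)  # {'Hello!': 'Bonjour!'} -- the Spanish entry is gone
```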
What you could do is add tons of dictionaries, but then that runs into another problem:
It’s difficult to edit, and your translators are lost and confused because they aren’t programmers.
You could do it with a set of parallel arrays, and have something like:
english = ["Hello!"] spanish = ["Hola!"] french = ["Bonjour!"]
But what would happen if you didn’t have one word, but a thousand words, in 10 different languages. What then? Ten parallel arrays with a thousand words in each array? (ignoring the fact that a word in English might be two words in Spanish) And what if the localization isn’t fully complete? Would you just insert blank strings into the array to make the array indexes line up?
There are simply too many issues with parallel arrays, and the amount of vertical space you would need to have a set parallel arrays that large would be bad enough to summon Cthulhu from the depths of R’lyeh.
Most importantly, no sane translator is going to waste time counting the indexes to make sure they’re adding the word into the right index.
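To make the fragility concrete, here is roughly what a parallel-array lookup would look like (a sketch; none of these names come from the post):

```python
english = ["Hello!", "Goodbye!"]
spanish = ["Hola!", "Adios!"]
french  = ["Bonjour!", "Au revoir!"]

def translate(phrase, target):
    # Works only while every list stays exactly index-aligned with `english`;
    # one missing, reordered, or blank entry silently breaks every lookup after it.
    return target[english.index(phrase)]

print(translate("Goodbye!", french))  # Au revoir!
```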
We need a solution that is easy to understand for translators, easy to maintain, and doesn’t require writing lots of code.
So what’s the solution proposed by Django?
Django’s Solution
Django uses ugettext to do i18n, or internationalization. Internationalization is the act of making all the strings in your code translatable into a different language.
Commonly, you will see ugettext imported like this:
from django.utils.translation import ugettext as _
For reasons that may forever be unknown to me, ugettext is always aliased to an underscore by common convention.
Conventions aside, let’s create a dummy view in Django to show how ugettext works.
from django.shortcuts import render
from django.http import HttpResponse
from django.utils.translation import ugettext as _

def testView(request):
    output = _('Hello!')
    return HttpResponse(output)
To use ugettext, you simply need to call ugettext(‘STRING_HERE’), or _(“STRING_HERE”) since an underscore is aliased to ugettext.
Next, you will need to create a folder called “locale” in the base directory of your project. Then, edit your settings.py and add:
LOCALE_PATHS = (os.path.join(BASE_DIR, "locale"),)  # Gives you BASE_DIR/locale
This tells Django that our locale path is the "locale" folder.
Now comes the fun part. Run django-admin makemessages -l es. This will create a .po file called “django.po” in BASE_DIR/locale/es/LC_MESSAGES, where “es” is español, or Spanish.
The .po extension stands for Portable Object, and it is used to hold all of the phrases in the original language and the translated language.
If you open “django.po”, you will see the following:
#, python-format
msgid "Hello!"
msgstr ""
Now, edit the msgstr so that it reads as follows:
#, python-format
msgid "Hello!"
msgstr "Hola!"
We are adding a new translation for the Spanish version of “Hello!”.
Now, we need to compile our .po file to a .mo file, which Django can use to process our translations.
Run django-admin compilemessages -l es
Once the .mo file is created, Django will be able to search for the input word or phrase, and output the translated word or phrase. After this step is done, the localization for Spanish is complete!
But Why Isn’t My Text Translated?
If you ran the code, you may have noticed that the text didn’t translate. And you’re correct.
Why should it?
Localization is intended to serve different languages to different users. Django differentiates between Spanish-speaking and English-speaking users by your browser's locale. Django will only translate the page if your locale is "es". If you live in the United States, your locale is most likely "en" or "en-us".
If you want to test your code, simply change your locale to “es”, and the translation will work appropriately.
Conclusion
Django’s solution is essentially a gigantic dictionary split into multiple files, with each language being represented by a .po file. A huge benefit of this is that the code is partially decoupled from the translation. The .po file is generated from the code, but translators are unlikely to crash the website by editing the .po file, as they only have to insert their translation into the msgstr part.
As a result, the programmers are happy, as the code is separate from the translations, and the translators are happy, as the translations are separate from the code. It’s easy for both parties to access, meaning translators don’t have to fidget with the code, and programmers don’t have to fidget with long bundles of translated phrases.
Django’s solution is a perfect win-win for everyone. | https://henrydangprg.com/category/django/ | CC-MAIN-2022-27 | refinedweb | 883 | 67.04 |
Explanation
The problem of not having a clue in an hour must not be the main problem
I remember when I was in school, I learned the main contradiction and the secondary contradiction. The secondary contradiction may gradually become the main contradiction. I feel it's a secondary contradiction to remember something before. Now I feel that I forget it while I remember it.
In the past, it was more about the realizability of various functions and algorithms. Now there are too many thoughts to grasp. In addition, I have always wanted to assist in various things through AI, so I want to start doing some practice.
AI is a tool to help people save labor and time
content
Let's start with a simple prototype using local storage. The simple function of this prototype is to record something and then provide query.
import pickle

# Store results
def to_pickle(data, file_name, path='./'):
    output = open(path + file_name + '.pkl', 'wb')
    pickle.dump(data, output)
    output.close()
    print('data saved to pickle: ', path + file_name + '.pkl')

# Load results
def from_pickle(file_name, path='./'):
    output = open(path + file_name + '.pkl', 'rb')
    data = pickle.load(output)
    output.close()
    return data

from datetime import datetime
import time

str_format = '%Y-%m-%d %H:%M:%S'

class AndyAgent:
    def __init__(self, name, fpath='./'):
        self.name = name
        self.fpath = fpath
        try:
            self.Data = from_pickle(name, fpath)
        except:
            print('New agent, no historical data was read')
            # Data is a dictionary, ready to dock with mongo
            self.Data = {}

    # Record a raw entry
    def rec(self, sentence):
        new_data = {}
        new_data['content'] = sentence
        new_data['ts'] = time.time()
        # Use a microsecond timestamp as the key
        self.Data[int(new_data['ts'] * 1e6)] = new_data
        # Then persist
        to_pickle(self.Data, self.name, self.fpath)

    # Query
    def query(self):
        self.question = input('Please enter keyword:')
        self.answer()

    # Answer
    def answer(self):
        self.answer_list = []
        for k in self.Data.keys():
            rec = self.Data[k]
            content = rec['content']
            t_struct = time.localtime(rec['ts'])
            ts = time.strftime(str_format, t_struct)
            rec_tuple = content, ts
            self.answer_list.append(rec_tuple)
        print('found %s records' % len(self.answer_list))
        print(self.answer_list)

    # Entity recognition: find the time, place, people and company
    def analysis(self):
        pass

    # Record a relationship
    def rec_a_link(self, from_node, to_node, link_type, attr_dict):
        pass

    # Infer a relationship
    def refer_a_link(self, content):
        pass

    # Notification: SMS or email
    def inform_msg(self, to_list, template_id, content):
        pass

---

# aoa = Agent of Andy
aoa = AndyAgent('andy')

aoa.Data
{1635001088509037: {'content': 'I’m going to the airport tomorrow', 'ts': 1635001088.5090373},
 1635001118614797: {'content': 'I’m going to the airport tomorrow', 'ts': 1635001118.6147974}}

aoa.query()
Please enter keyword: Airport
found 2 records
[('I'm going to the airport tomorrow', '2021-10-23 22:58:08'), ('I'm going to the airport tomorrow', '2021-10-23 22:58:38')]
Some thoughts on Design
I hope the Agent can realize these functions:
- 1 record problems (data) and analyze them
- 2 translate and replace manually executed code
- 3 answer questions
Data organization:
- Organize (nodes) and relationships as a graph
Application of machine learning:
- 1 use natural language to analyze characters.
- 2 create algorithms for knowledge discovery and knowledge inference
- 3 strengthen learning methods and improve their own services
Architecture support:
- 1. You can send text messages and emails
- 2. Use the scheduler to run persistently, and give reminders according to time
- 3. Persistence of Mongo library. Data in dictionary format can be easily mapped to the database.
- 4 neo4j for relational storage and query. It can be stored first in the form of a convertible table, and then it can be simply mapped. | https://programmer.help/blogs/modeling-gossip-series-75-intelligent-agent.html | CC-MAIN-2021-49 | refinedweb | 580 | 51.24 |
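Since Data is a dict of record-dicts keyed by a microsecond timestamp, the "convertible table" idea is a one-liner: flatten each record into a row that maps directly onto a Mongo document or a node-import table (the field name _id is my assumption, chosen to match Mongo's convention):

```python
data = {
    1635001088509037: {"content": "I'm going to the airport tomorrow",
                       "ts": 1635001088.509},
    1635001118614797: {"content": "I'm going to the airport tomorrow",
                       "ts": 1635001118.614},
}

# One row per record, sorted by timestamp key, ready for insert_many() / CSV export
rows = [{"_id": key, **rec} for key, rec in sorted(data.items())]

print(len(rows), rows[0]["_id"])  # 2 1635001088509037
```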
The last two releases of CUDA have added support for the powerful new features of C++. In the post The Power of C++11 in CUDA 7 I discussed the importance of C++11 for parallel programming on GPUs, and in the post New Features in CUDA 7.5 I introduced a new experimental feature in the NVCC CUDA C++ compiler: support for GPU Lambda expressions. Lambda expressions, introduced in C++11, provide concise syntax for anonymous functions (and closures) that can be defined in line with their use, can be passed as arguments, and can capture variables from surrounding scopes. GPU Lambdas bring that power and convenience to writing GPU functions, letting you launch parallel work on the GPU almost as easily as writing a for loop.
In this post, I want to show you how modern C++ features combine to enable a higher-level, more portable approach to parallel programming for GPUs. To do so, I’ll show you Hemi 2, the second release of a simple open-source C++ library that I developed to explore approaches to portable parallel C++ programming. I have written before about Hemi on Parallel Forall, but Hemi 2 is easier to use, more portable, and more powerful.
Introducing Hemi 2
Hemi simplifies writing portable CUDA C/C++ code. With Hemi,
- you can write parallel kernels like you write for loops—in line in your CPU code—and run them on your GPU;
- you can launch C++ Lambda functions as GPU kernels;
- you can easily write code that compiles and runs either on the CPU or GPU;
- kernel launch configuration is automatic: block and grid dimensions are an optimization detail, not a requirement.

Here, for example, is SAXPY written with Hemi:

void saxpy(int n, float a, const float *x, float *y)
{
  hemi::parallel_for(0, n, [=] HEMI_LAMBDA (int i) {
    y[i] = a * x[i] + y[i];
  });
}
Hemi is BSD-licensed, open-source software, available on GitHub.
GPU Lambdas and Parallel-For Programming
As I discussed in the post New Features in CUDA 7.5, GPU Lambdas let you define C++11 Lambda functions with a
__device__ annotation which you can pass to and call from kernels running on the device. Hemi 2 leverages this feature to provide the
hemi::parallel_for() function. When compiled for the GPU,
parallel_for() launches a parallel kernel which executes the provided GPU lambda function as the body of a parallel loop. When compiled for the CPU, the lambda is executed as the body of a sequential CPU loop. This makes portable parallel functions nearly as easy to write as for loops, as in the following code.
parallel_for(0, 100, [] HEMI_LAMBDA (int i) {
  printf("%d\n", i);
});
GPU Lambdas can also be launched directly on the GPU using
hemi::launch():
launch([=] HEMI_LAMBDA () {
  printf("Hello World from Lambda in thread %d of %d\n",
         hemi::globalThreadIndex(), hemi::globalThreadCount());
});
Portable Parallel Execution
You can use
hemi::launch() to launch lambdas or function objects (functors) in a portable way. To define a functor that you can launch on the GPU, its class must define an
operator() declared with the
HEMI_DEV_CALLABLE_MEMBER annotation macro (see
hemi/hemi.h). To make this easy, Hemi 2 provides macro
HEMI_KERNEL_FUNCTION(). The simple example
hello.cpp demonstrates its use:
// define a kernel functor
HEMI_KERNEL_FUNCTION(hello) {
  printf("Hello World from thread %d of %d\n",
         hemi::globalThreadIndex(),
         hemi::globalThreadCount());
}

int main(void) {
  hello hi;                   // instantiate the functor
  hemi::launch(hi);           // launch on the GPU
  hemi::deviceSynchronize();  // make sure printf flushes before exit
  hi();                       // call on CPU
  return 0;
}
As you can see,
HEMI_KERNEL_FUNCTION() actually defines a function object which must be instantiated. Once instantiated, it can either be launched on the GPU or called from the CPU, so this is a way to define parallel functions with the capability of running on the CPU if there is no GPU present.
In fact, you can even compile the above code with a different compiler that knows nothing about CUDA. In this case, it will simply run sequentially on the CPU when passed to
hemi::launch().
You can also define portable CUDA kernel functions using
HEMI_LAUNCHABLE, which defines the function using CUDA
__global__ when compiled using
nvcc, or as a normal host function otherwise. Launch these functions in a portable way using hemi::cudaLaunch(), passing the kernel function followed by its arguments.
Unlike
HEMI_KERNEL_FUNCTION(), which can be either launched on the device or called on the host, calls to
hemi::cudaLaunch() always target the device when compiled with
nvcc targetting a GPU architecture. Note that you can also use
hemi::cudaLaunch() on a traditionally defined CUDA
__global__ kernel function (which is equivalent to
HEMI_LAUNCHABLE). You may want to use
hemi::cudaLaunch instead of CUDA’s
<<< >>> triple-angle-bracket launch syntax because of its ability to automatically configure launches.
Automatic Execution Configuration
In both of the examples in the previous section, the execution configuration (the number of thread blocks and size of each block) is automatically decided by Hemi based on the GPU it is running on and the resources used by the kernel. In general, when compiled for the GPU,
hemi::launch(),
hemi::cudaLaunch() and
hemi::parallel_for() will choose a grid configuration that occupies all multiprocessors (SMs) on the GPU. This makes it almost trivial to launch parallel work on the GPU! With Hemi, execution configuration is an optimization, rather than a requirement.
Grid-Stride Loops
A common design pattern in writing scalable, portable parallel CUDA kernels is to use grid-stride loops. Grid-stride loops let you decouple the size of your CUDA grid from the data size it is processing, resulting in greater modularity between your host and device code. This has reusability, portability, and debugging benefits.
Hemi 2 includes a grid-stride range helper, hemi::grid_stride_range(), for use with C++11 range-based for loops.
Note that
hemi::grid_stride_range() can be compiled and used with range-based for loops on either the device or host. On the host, it uses a stride of 1.
Portable Classes
The HEMI_DEV_CALLABLE_MEMBER and HEMI_DEV_CALLABLE_INLINE_MEMBER macros can be used to create classes that are reusable between host and device code, by decorating any member function that will be used by both device and host code. An example is the portable 4D vector type used in the “nbody_vec4” example included with Hemi; such types are aligned consistently by the device and the host compilers, too. For details on alignment, see the NVIDIA CUDA C Programming Guide, Section 5.3.
Portable Device Code
The code used in Hemi portable functions (those defined with the macros discussed previously) must be portable, or the functions won’t compile for multiple architectures. In situations where you must use GPU-specific (or CPU-specific) code, you can use the HEMI_DEV_CODE macro to define separate code for host and device.
Portable Iteration
For most situations where you need to portably iterate over a range in your device or kernel functions, I recommend using the
hemi::grid_stride_range() with a range-based for loop, as previously discussed. Hemi also provides portable helper functions for situations where you need to customize iteration or array indexing based on thread or block index.
For kernel functions with simple independent element-wise parallelism,
hemi/device_api.h includes functions to enable iterating over elements sequentially in host code or in parallel in device code.
globalThreadIndex() returns the offset of the current thread within the 1D grid, or 0 for host code. In device code, it resolves to blockDim.x * blockIdx.x + threadIdx.x.
globalThreadCount() returns the size of the 1D grid in threads, or 1 in host code. In device code, it resolves to gridDim.x * blockDim.x.
Here’s a SAXPY implementation using the above functions.
HEMI_LAUNCHABLE void saxpy(int n, float a, float *x, float *y)
{
  using namespace hemi;
  for (auto i = globalThreadIndex(); i < n; i += globalThreadCount()) {
    y[i] = a * x[i] + y[i];
  }
}
Hemi provides a complete set of portable element accessors in hemi/device_api.h, including localThreadIndex(), globalBlockCount(), etc. hemi/device_api.h also provides the synchronize() function, which maps to the __syncthreads() barrier operation when compiled for the device, and is (currently) a no-op when compiled for the host.
Mix and Match
Hemi is released under a permissive open source license (BSD) to encourage these kinds of flexible use.
Requirements
Hemi 2 requires a host compiler with support for C++11 or later. Hemi builds on a number of C++11 features, including lambda expressions, variadic templates, and range-based for loops.
For CUDA device execution, Hemi 2 requires a recent CUDA toolkit; the portable GPU lambda features described above require CUDA 7.5.
Parallel-For GPU Programming in Other Frameworks
Hemi is not alone in taking advantage of modern C++ to enable easier, more portable parallel programming. In fact, the following sophisticated frameworks address many of the same challenges as Hemi, and much more.
Thrust, the parallel algorithms template library included with the NVIDIA CUDA Toolkit, is also compatible with CUDA 7.5 GPU lambdas. You can combine GPU lambdas with Thrust algorithms like
thrust::for_each() and
thrust::transform() in the same way that you can combine STL algorithms with normal C++ lambdas. Here is an example of SAXPY implemented with Thrust and a portable GPU lambda.
void saxpy(float *x, float *y, float a, int N)
{
  using namespace thrust;
  auto r = counting_iterator<int>(0);
  for_each(device, r, r + N, [=] HEMI_LAMBDA (int i) {
    y[i] = a * x[i] + y[i];
  });
}
Kokkos
, developed at Sandia National Laboratory, “implements a programming model in C++ for writing performance portable applications targeting all major HPC platforms”. Kokkos provides abstractions for both parallel execution of code and, importantly, data management, with support for nodes with multi-level memory hierarchies. Kokkos has backends for CUDA, OpenMP, and Pthreads execution. GPU lambdas are a key enabler for the Kokkos CUDA implementation. Here is an example of a parallel loop in Kokkos:
Kokkos::parallel_for(N, KOKKOS_LAMBDA (int i) { y[i] = a * x[i] + y[i]; });
RAJA
is a software abstraction developed at Lawrence Livermore National Laboratory that “systematically encapsulates platform-specific code to enable applications to be portable across diverse hardware architectures without major source code disruption.” Here is an example of a CUDA-executable parallel loop in RAJA:
RAJA::forall<cuda_exec>(0, N, [=] __device__ (int i) { y[i] = a * x[i] + y[i]; });
Co-design discussions arising from NVIDIA’s work in the FastForward program, funded by the U.S. Department of Energy, helped motivate the design and deployment of the lambda support included in CUDA C++.
Try CUDA 7.5 and Portable Parallel Programming Today!
To get started with modern, portable C++ on GPUs, download the latest CUDA Toolkit today. To get started writing portable code with Hemi, download Hemi release 2.0 from Github.
To learn all about the new features in CUDA 7.5, sign up for the webinar “CUDA Toolkit 7.5 Features Webinar” and put it on your calendar for Tuesday, September 22.
Agenda
See also: IRC log
Chair: today we will be mostly discussing issues 8 & 16
glen: bunch of issues on 'refps' bunch of stuff
which must be echoed back as 1st class soap headers. discussion on
architectural goodness of various proposals.
... 1) surrounds wrapper header
... 2) distinguish refps from other headers
... option 2) reveals the crux of the discussion. important for security to distinguish between headers put in as an echo and those put in on purpose
... discussion has been long and winding. i brought up four buckets simplicity, security, composability and clarity
... are things opaque? i think your require to given there may be a clash with other intended, understood headers. esp for ws-security.
... refps may include headers from other specs (ws-RM, etc) already in use which will cause problems and introduces security concerns.
... opposing argument is all you need to do is know where your EPRs come from and trust them implicitly.
Chair: and your AI to push this discussion on?
glen: would like more time
Chair: given our timeline i'm going to strike
it from the record
... what about the two proposals: wrapped and attribute tagging?
glen: with the status quo i'll have to write code that checks and scans refps and checks consistency and side effects of headers put in by refps. a bag will simplify this.
tom: +1 to glen. i see getting rid of side effects as a virtue, whilst others want the side-effects. that's the problem.
glen: an observer won't know which headers were echoed and which were put in intentionally as a part of another contract
steve: wants to understand the problems with the annotation marker
glen: not everyone will understand the tag,
e.g. existing software
... doesn't solve the problem given not everyone will understand the attribute
marc: the fundamental problem is that it's still a RM header even though it's flagged
marsh: 90% case is for the side-effect and that's growing
glen: wants to understand this processing model
gudge: it's start with the soap message and figure what the EPR looks like
anish: +1 to glen. i want to look at this with issue 18.
... refps are meant to be abstract right?
glen: they're targeted at a particular role/node at a particular time
anish: cookie model. wrapper - a well defined soap header with a well defined processing model would answer this requirement.
tom: going back to mustUnderstand. marking an attribute - isn't the scope for the whole header? what's the problem with wrapping - is it directing individual props to different roles?
marsh: yes. we'd have to reinvent soap processing model within a wrapper element
glen: a lot of this be done by prior agreement
gudge: has to be done dynamically as there may be more than one of you
greg: gudge brought out my point. if you're worried about confliction - don't put stuff in your EPR that's likely to conflict.
glen: WSDLs can be dynamically generated. i could mint one for each use. i could use a policy statement. there's a good set of use-cases to tweak my policy on the fly and continue to describe the contract.
tony: wrapper could be optional to allow refps to be hidden from an intermediary - give the minter this level of control.
daveo: how does that address the security concern?
marsh: sounds interesting. a conservative consumer could choose to only process headers in the bag.
glen: could just be ignored by some vendors and be useless
marsh: see little difference between the optional bag and the marker
anish: status quo allows you to do this already
marsh: though this is a standardisation of this option
glen: don't like this
marc: me neither
paco: (to glen) difference between describing in WSDL and Policy?
marc: it's the difference between signing a blank piece of paper and signing a letter
glen: are you going to write software that checks and validates an EPR
marsh: outside of the scope of the WG, but it doesn't seem unreasonable
gudge: we might use a blacklist approach..
... we have stuff already which works this way, and IBM has resource framework.
greg: lots of use-cases (tracing/ logging) for the unwrapped approach
daveo: i've yet to hear an acknowledgement that refps as soap headers are a security problem
marsh: trust is out of scope
paco: our guys have looked into this and we're ok
jeffm: i'm still unclear what the trade-off is here. your argument seems to be to stick with the status quo
gudge: we don't beleive there is a security issue
<Marsh> actually, gudge says, we don't believe the wrapped approach is any more secure.
bob: other mechanisms exist to secure channels. but you can't secure something that's already been hacked
<jeffm> I also said that saying trust/security is out of scope is what leads to the design of insecure/easily hackable systems - if we believe that one method is more secure we should choose it, especially since the argument against seems to be mostly this is the "status quo" from the draft spec we wrote
discussion of side effects of blindly echoing refps on intermediaries
gudge: this is a legal issue. keep copies of EPRs i've been sent. flip it around, what happens if i mark headers which aren't refps?
bob: only security you've got is to secure the message hasn't been tampered with. you can't protect against an insane minter. trust is out of scope.
daveo: good protocol specs will help mitigate against malicious parties in a court of law. i don't think we should say we can't do anything about this.
bob: protection against tampering is sufficient
tony: i'm not concerned about the endpoint. i'm concerned about the impact on intermediaries. these could do "bad things" TM. e.g. denial of service attacks. putting them in a wrapper targets them end to end.
marc: not just intermediaries, you can craft a message to go to other unexpected places.
daveo: it's don box's side-effects that interests me most as a counter argument
Chair: glad that the discussion isn't just black and white, that both sides agree it's a trade-off. have people changed sides?
marsh, gudge: we haven't heard anything new, yet today
Chair: about to go for a 20 minute break. whole
schedule is brought forward 30 mins
... what about the 'put a wrapper in To' proposal
proposals:
discussion of the proposals:
0) Status Quo
1) To as EPR
2) Wrapped
3) Attribute
4) Attribute + Required Header
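(For context, the competing serializations look roughly like the sketch below; the element and attribute names are illustrative, not taken from the minutes.)

```xml
<!-- Option 2, "Wrapped": echoed reference properties travel inside
     one distinguishing wrapper header -->
<S:Header>
  <wsa:ReferenceProperties>
    <ex:CustomerKey>12345</ex:CustomerKey>
  </wsa:ReferenceProperties>
</S:Header>

<!-- Option 3, "Attribute": each echoed property is a normal first-class
     header, tagged with a marker attribute -->
<S:Header>
  <ex:CustomerKey wsa:12345</ex:CustomerKey>
</S:Header>
```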
Chair: would the required header always be required to be there?
gudge: sounds like a 2nd order question after we've decided which way we're going.
<mlpeel> +1 to gudge's remark
tony: how does wrapping refps in To impact the optionality of To
<stevewinkler> There are actually 5 options, we're starting at 0 with the status quo
hugo: doesn't like options 0 or 3
<GlenD> hugo: It's important to know the difference between things you are "really" saying and things you are being told to say
<Zakim> hugo, you wanted to talk about options 0) and 3)
straw poll
cannot live with status quo (option 0): 8
cannot live with (option 1): 4
cannot live with (option 2): 5
cannot live with (option 3): 5
cannot live with (option 4): 3 + several undecided
marc: looks like we need another proposal.
paul: or run someone over
Chair: we have to progress.
gudge: we may just need to vote and move on.
discussion between glen and marsh about proposal 4 .. requiredness of attributes
marc: seems non-deterministic, too much optionality.
glen: me too
marc: you have to put in one of these headers for every node a refp is targeted against. sounds gross.
Chair: straw poll again.
cannot live with (option 4): lots and lots
<mlpeel> -1 to option 4
Chair: goodbye to proposal #4
glen and anish sound out proposal #2. 'To' could contain a copy of the XML, or contain modified values.
anish: don't have to worry about refps being targeted at different nodes
Chair: unless you have multiple Tos .. other issue.
anish: could annotate refps inside the To
<Marsh> Tom: Could keep some in an EPR inside To, and extract others into a refP's wrapper so they could be targetted somewhere else.
daveo: could have multiple Tos, one targeted for each role rather than message
straw poll
cannot live with (option 1): 3
two companies
glen and tom discuss multiple Tos/ bags
Chair: (for myself) don't like 2, brings in other issues regarding processing model and cardinality
marc: making this decision will simplify other issues
gudge: +1 to marc
jeffm: process is make a proposal and then vote on it. finding preferences - make an STV vote!
Chair: math is frightening
staw poll, one vote only, one vote per company.
Options 0, 1, 2 and 3
<hugo> Mark Peel says 3-1-2-0
<mlpeel> hugo had it correct: Novell orders it 3, 1, 2, 0
Chair: ... calculating ... whirl ... click .. click ..
glen: 0 would close i016 with no action
Chair: close i016 as a duplicate and put text under i008
RESOLUTION: close issue 16 with no action and put text under issue 8
markn: option 1 would close issue 8
jeffm: result is a tie between 1 and 3
tom: difficulty is a difference in requirements. i voted for 2 as i'd like to target multiple intermediaries.
markn: formal vote for option 1 or 3
microsoft 3
<hugo> W3C: 1
<hugo> Mark: 3
<mlpeel> Novell: option 3
<Gudge> mike, we can't hear you
<hugo> Nokia: abstains
HP: abstain
<MSEder> zakim. mute me
IBM: 3
SAP: 3
Hitachi: 3
Fujitsu: 1
Oracle: 1
BEA: 3
Sun: 1
BT: 3
W3C: 1
Novell: 3
Nokia: abstains
option 3 - 7
option 1 - 4
2 abstained
RESOLUTION: close issue 8 with proposal #3
gudge: will send mail about security considerations
marc: i'll help
discussion of what namespaces can be added to a header by refps
schema restricts ##other but there are other namespaces: XOP, MTOM, etc.
markn: time constrained discussion of this issue. 45 mins, lunch then 45 mins afterwards.
daveo: i'm now with paco after discussion of state in an EPR interaction. there are two types of EPR: stateful (v) stateless. in all cases they're just a data echoing mechanism.
... we don't need to tell the client about the identity of what is being identified. in all cases we tend to wrap data in an element. in any of the specs where a client needs to compare identity, there is a wrapper. we don't need to say anything in addressing about EPR identifiers.
... state is a separate discussion. EPRs are about echoing data, not identification.
... propose remove text about identification in addressing. EPRs are not always identifiers.
anish: what's the difference between refprops and refparams. why 2 different mechanisms?
<MSEder> 2
paco: you can separate this discussion. there are use-cases without identifiers and use-cases with EPRs as an identitifier and use-cases with state.
marc: was with the argument until he heard paco. unhappy about maintaining distinction between params and props.
paco: address + the state is like URI and a cookie.
paco: how does state effect an interaction: 1000's of ways. policy just raises one area of concern.
marc: more than just state. you're talking about a different thing. it's hiding behind words.
jeffm: we're playing semantic games here. how can you talk about state without identity? state implies an entity. it's got to be the state of something.
<marc> marc: discussion is not just about state but also about diff EPRS accepting diff messages and having diff policies
<MSEder> lost audio
daveo: what led me to thinking EPRs are an identifier + state, was the lack of a comparison of EPRs in other specs.
... the specs we have which use EPRs, RM, pub-sub, etc layer comparison on top
tony: you don't cache EPRs!
jeffm: oh yes you do!
markn: warns that a spec you can't abuse is a very high bar
hugo: in the direction we're going, there isn't much difference between props and params. if we need two bags, what is the difference? one of them is still likely to be identifying something
<marc> changing state doesn't mean that a service suddenly accepts a new set of messages. it may affect the result you get for particular messages (e.g. a 'getmybankbalance' message may return a fault unless i previously sent an 'authenticateme' message). its still the same service with the same identity
tom: raises formal identifiers (v) URIs .. specs like WS-RF and RM want to use refps to distinguish what you're talking about
<marc> paco seems to suggest that refps change the service
paco: i can't prevent you using refps as identifiers. you can always hang yourself
daveo: i'm ok with collapsing refprops and params. under what situation would you get state in a refprop as opposed to a refparam and do something differently? we need to understand this to go forward.
<hugo> +1 to why we need both
<marc> +1 to not needing both
greg: benefits in optimisation when policy can be attached.
greg: (answering markn) 20,000 blade servers could have their own EPR.
marc: then they're an identifier
anish: if you're talking about two different 'things' then they're identifiers
paco: if everything the same apart from the policy, then are we really talking about a different thing?
anish: if two EPRs can point to the same thing, then we're not talking about identifiers
markn: what's the use-case for refprops?
glen: difference of opinion as to "what is a Web service?" is the problem here
... what we need to do is pick the 80/20 cases. EPR should point you at the same thing a WSDL points you at.
paco: world doesn't turn around WSDL
chicken and egg: which came first, the WSDL or the Web service?
paco: don't use WSDL as a straitjacket.
glen: it's dangerous to change services mid-stream
paco: in very long running services, things such as policy may change.
tom: it's good that EPRs aren't identifiers - they're references. i think we should collapse refps. we don't want them to be identifiers
markn: lunch is looming .. distinction between refprops and params
markn: our charter only requires us to consider (not resolve) this issue.
markn: lunch!
<stevewinkler> Taking over as scribe.
Constrained discussion until 2 about issue i001
Gudge: describes management system as impetus for ref props.
Mark: what is the value of refProps?
Gudge: single entry point into a system, and then use the refprops to go from there.
this stems from two different architectural views of the world: message (v) resource oriented. one isn't better than the other for all cases.
Gudge describes chart from his previous action item
Paco: params and properties are used for state.
marc: that's not what the spec says.
paco: but that's how it's used.
daveo: that was then, this is now. we'll remove identifier terms and add in the part about state.
gudge: The question is, why are there two buckets?
gudge: it seems that we'll use this answer to inform the decision of i001, what is an identifier.
glen: my preference is to simplify addressing, allow the other use to be layered on top.
marc: so you would say that the management app would define a param.
glen: I would define an EPR extension.
gudge's diagram from Redmond:
paco: so what you're saying is that an EPR applies to a specific service, if the service changes you need a new EPR. Is the reverse true?
... if an EPR changes, do you have to assume that the metadata has changed.
... there's a difference between the description of the interaction and the interaction itself.
... you're assuming the client is aware of how the policy changes.
back and forth between paco and glen...
paco: we're trying to get away from defining this. We're never going to do it, so let's get away from this.
glen: yes, so what's the point of having refps
indicate different metadata?
... the interesting thing about an epr is that some piece of code doesn't need to change to talk to it.
paco: yes, but if you see that they're
different, then it MAY have changed.
... refparams are data, not metadata.
glen: why are we talking about the connection between an EPR and metadata at all?
paco: the rationale for having 2 buckets, is
that the relationship to the metadata is different.
... a client gets an EPR that is different in params, you don't have to refresh your (policy?) side, but if props are different you do have to do the refresh (not cache).
mark: why are we using props to do this, instead of just params with a string that defines the metadata context?
glen: there is an unbounded way to figure out that information, why are we talking about it. It shouldn't be in the spec...
Mark: 3 options: 0 status quo, 1 remove refprops (i.e. collapse), 2 reword
reword means clarify
<Zakim> hugo, you wanted to give his view about i001 closure
daveo: the web is not aligned with the web architecture.
<hugo> Hugo: I think that option 1 would close i001, and that options 0 and 2 would required us to justify our choice with something like the comparison that we started with, and consider a QName to URI mapping
marc: the fact that webarch was written retrospectively, gives it weight, and indicates that they know how to build the types of systems we want to build.
jeff: there's a big difference between observing how others have done their implementations and codifying it in a foundational spec.
<Zakim> hugo, you wanted to disagree
marc: I see the removal of refprops as a necessary precursor to rewording the spec wrt to identification.
straw poll:
option 0 gets 0 votes and has been erased.
option 1 gets 10 votes
option 2 gets 6 votes
1 abstention
vote for adoption of proposal 1:
2 abstentions
yes: 7
No: 4
Issue 1 closed with the removal of refprops, and removal of all references to the use of EPRs as identifiers.
RESOLUTION: Issue 1 closed with the removal of refprops and removal of all references to the use of EPRs as identifiers.
For the record, issue 1 votes: HP=Y, MSFT=N, IBM=N, SAP=ABS, HIT=Y, FUJ=Y, ORCL=Y, BEA=Y, SUN=Y, BT=Y, W3C=ABS, NOK=N, NOV=N
Anish describes the motivation for the issue.
RESOLUTION: change 'will be used instead of' to 'can be used' in section 2 of the WSDL binding. Editors to make the change now.
<marc> section:
<Chair> WRT WSDL binding, section 2 2nd paragraph.
<Gudge> Here is what the amended text from Section 2 of the WSDL binding now says:
<Gudge> To support these scenarios, we define a lightweight and extensible mechanism to
<Gudge> dynamically identify and describe service endpoints and instances. Endpoint references
<Gudge> logically extend the WSDL description model (e.g., portTypes, bindings, etc.), but
<Gudge> do not replace it. Endpoint references can be used in the following cases:
<anish> the issue is at this email:
1st subissue resolved.
anish is willing to drop the 2nd subissue.
anish describes subissues 3 and 4.
<scribe> ACTION: Anish to propose text for resolving i020 subissue 4.
<scribe> ACTION: Anish to start discussion on i020 subissue 4, what are we talking about when we say endpoint reference.
<scribe> ACTION: Anish to restate description of issue i020.
<anish> ACTION 1=Anish to propose text for resolving i020 subissue 3 (what is the relationshiop between wsdl endpoint and wsa:address if a service Qname/port is present in an EPR)
pauld describes the motivation for the issue.
jonathon: we already have all the mechanisms in place to do this now, we don't need to provide extensibility for extensibility.
glen: +1
... if you want to do other things you introduce new headers.
dorchard: if we allow multiple To's a subsequent version of the spec could provide some meaning, but if we don't provide it now, then we don't enable future versions to be compatible using those QNames.
jonathon: dave wants to evolve the namespace and paul wants to evolve the functionality.
glen: I don't see a compatible change worth defining.
<pauld> going from request-response to request-respond-to-one-of-a-list
marc: an example - v1 says pick one, v2 says pick one but do so in this ordering.
<Gudge> I guess I don't understand why we wouldn't just leverage the way SOAP works to do v2
<Gudge> We define new QNames for new headers in v2
<Gudge> if the sender doesn't mark them mu='true', then v1 receivers can ignore them
<Gudge> why does our spec need to say anything?
Marsh: can we just say 'ignore stuff not in our namespace'?
pauld: discussion has been hijacked, issue is about cardinality, not versioning.
Chair: straw poll on where everyone sits.
option 1: status quo
<Gudge> 1. Status Quo
option 2: constrain in the soap binding, open in core and wsdl
<Gudge> 2. Constrain in SOAP binding, open in core/wsdl
option 3: open everywhere
<Gudge> 4. constrain in WSDL, open in core/soap
option 4: constrain in wsdl binding, open in core and soap.
option 1: 13 votes
option 2: 0 votes
option 3: skipped
option 4: 5
Chair: would anyone object to closing issue with no action (accept the status quo)?
marc: it seems like we're limiting ourselves.
glen: I'd rather have a clear first spec, and then later on we can amend it for new scenarios in version 2, or allow other specs to add new headers.
marc: I'm surprised that people are comfortable with the wsdl binding that allows extensions (e.g. wsa:from optional) and doesn't say how it works, but they're not comfortable with this.
RESOLUTION: issue i009 closed with the status quo.
meeting adjourned. | https://www.w3.org/2002/ws/addr/5/01/18-ws-addr-minutes.html | CC-MAIN-2016-44 | refinedweb | 3,754 | 71.95 |
22.1.2018 Getting rid of endless „../../..“ in ES6/TypeScript imports of Ionic projects
You know these ugly ES6/TypeScript import statements with endless sequences of „../../..“ like in this example:
import { Topic } from '../../../services/domain/topic';
import { Rating } from '../../../services/domain/rating';
import { SmartAudio } from '../../../providers/smart-audio/smart-audio';
import { ENGLISH_GERMAN_DEFAULTS } from '../../../data/english-german';
[...]
And even worse, once you move your code to another directory, the number of „..“ in these relative paths does not match the target anymore? I wanted to get rid of these in my current project, looked for a solution and found some good hints on the web: [1], [2] and [3]. Thanks to you guys! In this article I’d like to show how I applied these to my TypesScript 2.4 / Angular 5 / Ionic 3 project:
webpack.config.js
Ionic v3 brings its own webpack.config.js in node_modules/@ionic/app-scripts/config/. So, you cannot just create your own; you have to patch the existing one. I copied this file to my project root (you could also create a subdirectory for such configurations) and amended it as shown here - see the [HERE] tags:
function getProdLoaders() {
  [...]
}

// [HERE] a helper function for alias definitions
function asPath(srcSubDir) {
  return path.join(__dirname, "src", srcSubDir);
}

var devConfig = {
  [...]
  resolve: {
    extensions: ['.ts', '.js', '.json'],
    // [HERE] ./src is for getting rid of endless ../../.. in imports
    // see also tsconfig.json
    modules: [path.resolve('./src'), path.resolve('node_modules')],
    // [HERE] some aliases for imports,
    // tsconfig.json also needs to be maintained
    alias: {
      "$data": asPath('services')
    }
  },
  [...]
And the same for prodConfig!
To be able to easily re-apply these changes once Ionic changed their webpack.config.js, you can create a diff file using this command (Linux):
diff -Naur node_modules/@ionic/app-scripts/config/webpack.config.js \ webpack.config.js >webpack.config.js-patch
And re-apply it with:
cp node_modules/@ionic/app-scripts/config/webpack.config.js webpack.config.js
patch <webpack.config.js-patch
package.json
To make the Ionic build system use your patched
webpack.config.js, you need to amend your
package.json file as follows:
"config": { "ionic_source_map": "source-map", "ionic_copy": "./config/copy.config.js", "ionic_sass": "./config/sass.config.js", // [HERE] make Ionic use our patched webpack.config.js "ionic_bundler": "webpack", "ionic_webpack": "webpack.config.js" },
tsconfig.json
For TypeScript source code you also need to define these in your tsconfig.json file:
"compilerOptions": { [...] // getting rid of endless ../../... in imports "baseUrl": "src", // and some neat aliases for imports "paths": { // webpack.config.js also needs to be maintained "~data/*": [ "services/*" ] } },
the result
Now you can write these imports as follows:
import { Topic } from 'services/domain/topic';
import { Rating } from 'services/domain/rating';
import { SmartAudio } from 'providers/smart-audio/smart-audio';
import { ENGLISH_GERMAN_DEFAULTS } from '$data/english-german';
[...]
As you can see, there are actually two unrelated changes:
- automatic resolution of './src' for absolute imports
- definition of an alias ‚$data‘, where the ‚$‘ is just to avoid name conflicts – you can define as many aliases as you like
You are welcome to contact me on Twitter or send your reply to this article via Twitter.
References:
[1]
[2]
[3] | https://michael.hoennig.de/2018/01/22/getting-rid-of-endless-in-es6-typescript-imports-of-ionic-projects/ | CC-MAIN-2019-22 | refinedweb | 516 | 52.56 |
1. Which of these is a bundle of information passed between machines?
a) Mime
b) cache
c) Datagrams
d) DatagramSocket
View Answer
Explanation: The Datagrams are the bundle of information passed between machines.
2. Which of these class is necessary to implement datagrams?
a) DatagramPacket
b) DatagramSocket
c) All of the mentioned
d) None of the mentioned
View Answer
Explanation: None.
3. Which of these method of DatagramPacket is used to find the port number?
a) port()
b) getPort()
c) findPort()
d) recievePort()
View Answer
Explanation: None.
4. Which of these method of DatagramPacket is used to obtain the byte array of data contained in a datagram?
a) getData()
b) getBytes()
c) getArray()
d) recieveBytes()
View Answer
Explanation: None.
5. Which of these method of DatagramPacket is used to find the length of byte array?
a) getnumber()
b) length()
c) Length()
d) getLength()
View Answer
Explanation: getLength() returns the length of the valid data contained in the byte array that would be returned from the getData() method. This typically is not equal to the length of the whole byte array.
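This distinction can be exercised with a small self-contained program (the class name DatagramDemo is illustrative) that sends a datagram to itself over the loopback interface:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class DatagramDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // OS picks a free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[64]; // deliberately larger than the payload
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.receive(in);      // blocks until the datagram arrives

            System.out.println(in.getLength()); // valid-data length: 5, not 64
            System.out.println(new String(in.getData(), 0, in.getLength(),
                    StandardCharsets.UTF_8));   // hello
            System.out.println(in.getPort() == sender.getLocalPort()); // true
        }
    }
}
```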
6. Which of these class must be used to send datagram packets over a connection?
a) InetAddress
b) DatagramPacket
c) DatagramSocket
d) All of the mentioned
View Answer
Explanation: By using 5 classes we can send and receive data between client and server; these are InetAddress, Socket, ServerSocket, DatagramSocket, and DatagramPacket.
7. What is the output of this program?
import java.net.*;
class networking {
public static void main(String[] args) throws Exception {
URL obj = new URL("");
URLConnection obj1 = obj.openConnection();
System.out.print(obj1.getContentType());
}
}
Note: The page at the host URL is written in HTML and simple text.
a) html
b) text
c) html/text
d) text/html
Answer: d
Explanation: None.
Output:
$ javac networking.java
$ java networking
text/html
8. Which of these methods of the DatagramPacket class is used to find the destination address?
a) findAddress()
b) getAddress()
c) Address()
d) whois()
Answer: b
Explanation: None.
9. Which of these is the return type of the getAddress() method of the DatagramPacket class?
a) DatagramPacket
b) DatagramSocket
c) InetAddress
d) ServerSocket
Answer: c
Explanation: None.
10. What is the output of this program?
import java.net.*;
class networking {
public static void main(String[] args) throws MalformedURLException {
URL obj = new URL("");
System.out.print(obj.toExternalForm());
}
}
a) sanfoundry
b) sanfoundry.com
c)
d)
Explanation: toExternalForm() is used to obtain the full URL of a URL object.
Output:
$ javac networking.java
$ java networking
Sanfoundry Global Education & Learning Series – Java Programming Language.
To practice all features of Java programming language, here is complete set on 1000+ Multiple Choice Questions and Answers on Java. | http://www.sanfoundry.com/java-mcqs-networking-datagrams/ | CC-MAIN-2017-09 | refinedweb | 436 | 57.37 |
INTERNATIONAL COLLEGIATE PROGRAMMING CONTEST
(ICPC)
HOW TO GET STARTED
Last modified: September 15, 2007
I assume that you are:
- passionate (and probably crazy) about programming
- willing to spend times (and quite a bit of times) to further your programming skills
- proficient in some programming languages, in particular C/C++ or Java
- comfortable about data structures, algorithms, and object-oriented programming
To know more about ICPC, read the ICPC mother site (@) or Wiki "ICPC". To summarize, ICPC is the programming contest for the university students (just like the IOI - International Olympiad in Informatics - is the programming contest for high school students). The ICPC contest rules are:
- Each team consists of THREE students
- Each team is given only ONE computer
- Each team is given 5 hours to solve 8-10 problems (in C, C++, Java and possibly Pascal)
- The team who solves the most number of questions in the shortest time is the winner
- There are two stages for the contest: Regionals and the Grand Final. Winners of each regional contest proceed to the Grand Final
PREPARATION...
Step 0.1 - Read, Read, Read: on programming, data structures, algorithms, and object-oriented programming.
Step 0.2 - Pick your Language: Pick a programming language that you are comfortable with, either C++ or Java or both (but not C nor Pascal, as they lack advanced libraries).
Step 0.3 - Gather Programming Resources: Gather programming books and materials, especially online references and resources.
Step 0.4 - Setup your Programming Workbench: If you can afford a laptop, get one (so that you can program at the Starbucks and in the train).
Depending on the host, the contest could be run on Linux (most likely) or Windows or any other exotic machines.
For Java programmers
Use JDK 1.5 and above, which greatly simplifies the IO processing. The Java IDE of choice is certainly eclipse - an open-source IDE supported by IBM (the official sponsor of the contest), and it runs on both Linux and Windows. For newcomers, read "How to install Eclipse", "writing your first Java program in Eclipse", and "debugging Java program in Eclipse".
For C/C++ programmers
It is harder to decide because you have a few options:
- In Windows, you could practice on Microsoft Visual C++ 2005 (an Express version can be freely downloaded from Microsoft). For newcomers on VC++, read "Write a C++ program in Visual C++", and "Debugging C++ program in Visual C++".
- In both Linux and Windows, you could use the open-source Eclipse C/C++ Development Tool (CDT) IDE combined with Cygwin's GCC/G++ compiler. For newcomers, read "How to install Cygwin", "How to install Eclipse CDT", "Writing first C++ program in Eclipse CDT", and "Debugging C++ program in Eclipse CDT".
- [TODO] Linux's IDE.
Important for All Programmers
- You should be familiar with the use of graphical debugger to improve your programming efficiency and productivity.
- You should be familiar with the libraries, such as Java's API and C++ STL.
- [TODO] more
Step 0.5 - Online Judges and Training Sites: There are many "online practice sites" called online judge, that archive hundreds (or even thousands) of past contest problems. You could try the problems at your own time and own target, and submit your solutions online. You program will be automatically compiled and run with a carefully-designed set of test inputs. The status of the run, such as "accepted", "wrong answer","compile error", "presentation error", "time limit exceeded", "memory limited exceed", "output limit exceed" will then be shown to you. In the case of compilation error, some of the sites may also show you the compilation error messages.
These are the sites that I frequently used (google or wiki "icpc", "online judge" to get the full list).
- Peking University Online Judge (PKU): This site support many languages, including Java (JDK 1.5), GNU's GCC/G++ (for C/C++) and Visual C/C++ version 6.
- Universidad de Valladolid Online Judge (UVA): This is the most reputable site, with a good forum (equipped with search). The support for C++ is excellent, however, the support for Java is mediocre (no JDK 1.5).
- USA Computing Olympiad (USACO) Training Program: This is the training site for IOI (International Olympiad in Informatics for high school students) instead of ICPC. However, it provides a very systematic training on the algorithms frequently encountered in contests, e.g., shortest path, greedy, dynamic programming, heuristic search, minimum spanning tree, and etc. It supports C, C++ and JDK 1.5.
LET'S GET REAL & STARTED...
Step 1 - Try PKU Online Judge
- Register with PKU online judge @.
- Read the FAQ to understand the submission rules - IMPORTANT!
- Read the FAQ AGAIN.
- The programming rules for ICPC are:
Java Language
- Input comes from System.in and output goes to System.out (no File IO allowed).
- The source file must contain a class called Main with the entry-method main: public static void main(String[] args) { ... }.

C/C++ Language

- Input comes from std::cin and output goes to std::cout (no File IO allowed).
- The source file must contain the entry-function int main() { ... }.
- Try the first problem "1000 (A+B)" with the solution provided in FAQ. The purpose of this problem is to let you understand the above programming rules. In your submission, make sure you specify the language used ("Java", "G++" for GNU C++ compiler, or "C++" for VC6 compiler).
For Java programmers
You should program at JDK 1.5 or above. Use Scanner with in.nextInt(), in.nextDouble(), and in.next() for inputting int, double, and String, and the C-like System.out.printf("formatString", args...) for output.
JDK 1.5 Program Template for ICPC Online Judge
import java.util.Scanner;

public class Main {  // save as Main.java
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int i = in.nextInt();        // read int
        double d = in.nextDouble();  // read double
        String s = in.next();        // read String
        // There is no nextChar(); use next() and charAt()
        char c = s.charAt(2);
        // Read whole line (or rest of the line past '\n')
        String line = in.nextLine();
        System.out.printf("%4d, %6.2f, %s, %c\n", i, d, s, c);
        // Use %f for double (not %lf)
        // Don't forget to print the '\n'
    }
}
- Try another few (extremely) simple problems (you can look at the percentages to estimate the difficulty of the problems)
- Read the programming rules AGAIN before trying these problems.
- Your input and output must strictly follow the description given. You need not and CANNOT print input prompting messages such as "Please enter a number" (because nobody is sitting at the online judging system to read these prompts).
- Do "1004 (Financial Management)" - Simply averaging 12 numbers
Hints: To test this program, you have two options: (a) key in the 12 numbers slowly, or (b) save the 12 numbers in a file, say "in.txt", start a command shell ("cmd"), and use the re-direction operator "<" to re-direct the input from the file "in.txt" to stdin as follows:
For Java Programmers
Assume that the source file is called Main.java:

> javac Main.java
> java Main < in.txt
For C++ programmers
Assume that the source file is called test.cpp:

> g++ -o test.exe test.cpp
> test < in.txt
- "1003 (Hangover)" - Compute the sum of a harmonic series and compare...
- "1005 (I think I need a house boat)" - Compute the area of a semi-circle and compare...
Step 2 - Try UVA Online Judge: This site is meant for C/C++ programmers. Java programmers can forget about this site (as there is no support for JDK 1.5). For C programs, you cannot use
- Register with UVA online judge @.
- Read the HOWTOs, especially on how to submit solution.
- Try the first problem Volume 1 Problem 100 (3n+1) with the solution provided in HOWTOs.
The above steps are meant for you to familiarize with the contest process. Here come the "serious" training...
Step 3 - Try USACO Training Problem
- Register with USACO Training Program @.
- Read "Section 1.0 Text Introduction" and "Section 1.1 Submitting Solutions".
As mentioned, this site is meant for IOI instead of ICPC, the submission requirements are different from the online judges.
You are required to read from an input file named "xxxx.in" and write to an output file named "xxxx.out", where "xxxx" is the name of the problem.
Try the "First Challenge" (or "
test" problem). If you encounter problem in File IO under VC++ or Eclipse, read "VC++ File Input/Output" or "Eclipse File Input/Output", respectively.
For Java programmers
The sample Java solution given is based on JDK 1.2, which is rather clumsy in IO processing. I provide the sample solution in JDK 1.5 as follows:
JDK 1.5 Program Template for USACO
/*
ID: yourID
LANG: JAVA
TASK: test
*/
import java.util.Scanner;
import java.util.Formatter;
import java.io.File;
import java.io.IOException;

public class test {  // saved as test.java
    public static void main(String[] args) throws IOException {
        Scanner in = new Scanner(new File("test.in"));        // file input
        Formatter out = new Formatter(new File("test.out"));  // file output
        int a = in.nextInt();
        int b = in.nextInt();
        out.format("%d\n", a + b);  // format() has the same syntax as printf()
        out.close();                // flush the output and close the output file
        System.exit(0);             // likely needed in USACO
    }
}
- Continue and try to complete the training problem. As mentioned, this site provides systematic training for the frequently used algorithms encountered in contests (IOI as well as ICPC).
TRAINING GUIDE
My ex-student-training-manager Mr. Nguyen Trung Nghia suggests the following approach for ICPC training.
You need to pick up basic knowledge in data structures (e.g., vector, linked list, queue, stack).
You need to know many algorithms (USACO teaches some of the basics which you MUST know).
- All kind of sorting algorithms
- How to handle bit-operations (See "tips, tricks and tweak")
- Complexity of well-known algorithms
- Graph (BFS, DFS,...)
- Maths (Number Theory)
- Geometry (Convex Hull, Interval tree,...)
- Greedy algorithm
- Dynamic programming
- others
For the Beginners:
- Do "ad-hoc" problems and simple problems which related to Maths (e.g., gcd, fibonaci, prime numbers,...). Focus on how to produce prime number with the fastest speed (See "tips, tricks and tweaks"), manipulate strings, operations on big numbers (not allow to use bignumber library).
- Simulation problems (describe some rules and give the input, you have to follow these rules to produce output). This helps you to learn how to read the questions properly and code without bugs.
- Try not to give them any sample code, let them code it by themselves :-)
- Do not use C++ STL (or Java API library) at this stage.
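For the prime-number drill mentioned above, the classic Sieve of Eratosthenes is the usual starting point. This is a plain sketch, not a contest-tuned version:

```java
import java.util.ArrayList;
import java.util.List;

public class Sieve {
    // Returns all primes <= n using the Sieve of Eratosthenes
    static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++) {
            if (!composite[i]) {
                primes.add(i);
                // Mark multiples of i, starting at i*i (smaller ones already marked)
                for (long j = (long) i * i; j <= n; j += i) {
                    composite[(int) j] = true;
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```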
For the Intermediates and Advanced:
- C++ STL (or Java API libraries) should be taught
- Graphs (BFS, DFS, flood-fill algorithm, shortest path (Dijkstra, Floyd), trees, network flow)
- Dynamic programming, dictionary algorithms
- Geometry (need to know at least convex-hull, detect a point is inside a polygon, calculate the area of a polygon, etc.)
- Graphs, dynamic, ad-hoc, simulation, maths, geometry + dynamics, graphs + geometry, maths + dynamic, and so on
If you are a C++ guy, you MUST know C++ STL:
#include <string>
#include <vector>
#include <algorithm>
#include <queue>
#include <list>
#include <stack>
#include <map>
#include <set>
If you are a Java guy, learn the Java API, especially the Collection classes and BigInteger.
C++ Program Template for UVA
#include <cstdio>
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <queue>
#include <list>
#include <stack>
#include <map>
#include <set>
using namespace std;

#define DEBUG
#define REP(i,a) for(i=0;i<a;i++)
#define FOR(i,a,b) for(i=a;i<b;i++)
#define VE vector<int>
#define SZ size()
#define PB push_back

int main() {
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
#endif
#ifdef DEBUG
    // to write some values for debugging purposes, e.g.,
    int i = 5;
    printf("%d", i);
#endif
    return 0;
}
Java References & Resources
- JDK API Documentation
C++ References & Resources
- C/C++ Reference @
- C++ Tutorial for C users @
- C++ Reference @
- C++ Annotations @ | http://www3.ntu.edu.sg/home/ehchua/programming/icpc/icpc_getting_started.html | CC-MAIN-2017-43 | refinedweb | 1,980 | 65.12 |
Articles Index
It's fairly common these days to visit a web site, such as one that displays a list of restaurants
in a given city, and then ask the site to display a map that highlights the locations of the
restaurants. Figure 1 shows an example of this type of site.
Although you might assume that both the list and map come from resources within
the site, the two types of information often originate from different sites. More specifically,
the services that generate the two types of content are in different sites. This merging of services
and content from multiple web sites in an integrated, coherent way is called a mashup.
Most mashups do more than simply integrate services and content. Sites that do mashups typically add value.
They benefit users in a way that's different and better than the individual services they leverage.
For example, Delexa.org brings together data from
the social bookmarking site del.icio.us and the web site
traffic tracker Alexa. Using the topic tags that it
extracts from del.icio.us, Delexa.org allows users to search for top traffic sites by topic, something that
you can do in only a limited way on the Alexa site. For example, a user can ask to see the most
frequently visited gardening sites or the most frequently visited karate sites.
Mashups are appearing on the web at an extremely fast rate. Three new mashups typically appear on the web each day.
You can see some of the newest ones on the
ProgrammableWeb site. The bulk of the mashups
on the web involve the use of maps. Many of these sites use mapping services such as those provided by
Yahoo Maps and
Google Maps. For example,
mibazaar.com uses the Google Maps
service in a mashup that displays the type of list and map shown in Figure 1.
However, mashup sites are not limited to mapping. As the Delexa.org site illustrates, mashups can involve other types
of information -- in this case, topics identified in bookmarked pages and web traffic data. In addition,
a growing number of mashups involve multimedia content from sites such as
Flickr and YouTube,
or include services from shopping sites such as Amazon.com.
If you examine these mashups, you'll notice a variety of approaches. This article is the first in a series that
examines some of the most common approaches, or styles, for doing mashups. The articles in the series compare
and contrast these styles and discuss some of the major design considerations related to each. This first article
examines a style called server-side mashups.
Before delving into mashup styles, it's important to understand that despite the definition of the term mashup
given in this article, no single definition encompasses all mashups.
This series of articles uses a rather loose definition of mashup. If a web site uses data or functionality
from another web site and combines it in an application, it's a mashup. The application can access the data or
functionality in various ways. It can use formal Representational State Transfer (REST)-based APIs provided by
the other site. Or it can do some informal screen scraping, in which it extracts data from the displayed output
of a program on another site. Or it can access an RSS feed or use a widget provided by another site.
However, if the application simply links to another site, for instance, through an HTML href link, it is
not a mashup. In the simplest sense, if your web application is using other web sites, it's a mashup.
As you read the articles in this series, keep in mind that there is some fuzziness in what constitutes a mashup
and probably some degree of subjectivity too.
The two primary mashup styles are
server-side mashups and client-side mashups.
As you might expect, server-side mashups integrate services and content on the server.
The server acts as a proxy between a web application on the client, typically a browser, and the other web site
that takes part in the mashup. In a server-side mashup, all the requests from the client go to the server, which acts as
a proxy to make calls to the other web site. So, in a server-side mashup, the work is pushed from the web application client
to the server.
In general, a server-side mashup works as illustrated in Figure 2.
Client-side mashups integrate services and content on the client.
They mash up directly with the other web site's data or functionality. For example, in a client-side mashup, the client might make
requests directly to the other web site.
In general, a client-side mashup works as illustrated in Figure 3.
However, some mashups don't correspond to either of these styles.
Many see these other mashups as expanding the boundary of what constitutes a mashup.
You can find a good demonstration of these styles in the
Java PetStore 2.0 demo, a reference
web application that is part of the Java BluePrints Program.
For simplicity, this article refers to the application as Pet Store. Articles in this series will use the mashups in Pet Store
to illustrate mashup techniques. Pet Store shows how you can use the
Java EE 5 platform with Web 2.0 technologies.
The application also demonstrates how you can use Java EE 5 technologies such as
JavaServer Faces technology (often referred to as JSF) and the Java Persistence API in mashups.
For an overview of Pet Store, see Introducing the Java Pet
Store 2.0 Application.
You can learn more about Pet Store by
downloading it, examining the source code, and deploying it on a Java EE 5-compliant application server.
You can also run a
live version of Pet Store.
Pet Store is a web application for selling and buying pets.
If you're a seller, you can add information about your pet such as a description, price, and photo, to the application's
pet catalog. You can also add information about yourself such as your address.
If you're a prospective buyer, you can select one or more available pets
from the catalog. You can search for a specific pet or type of pet. You can also get a map that shows
where pets for purchase are located. And you can pay for pets online. Pet Store performs various mashups in support of these operations.
For example, it mashes up with the Google Maps service and the Yahoo Maps
Geocoding service to display a map that shows the pets' locations, as shown in Figure 4.
The news bar is another mashup.
Each page in Pet Store contains a news bar that displays headlines from an RSS news feed
of the Java Blueprints project. Pet Store also includes a news detail page,
which displays all the news stories from the Java Blueprints project. For these news features, Pet Store uses data fetched from
an RSS feed at another web site, processes and manipulates the fetched data, and then displays the data on its web pages.
Figure 5 shows the news bar above a pet's photo in a Pet Store page.
In this article, you'll learn more about server-side mashups. The article will focus on Pet Store's mashup with the Yahoo Maps
Geocoding service because it's a good example of how to design and build a server-side mashup. Pet Store's mashup
with the RSS feed is also a server-side mashup, but that mashup has some unique aspects that will be covered in a subsequent
article. Pet Store's mashup with the Google Maps service is a client-side mashup. Client-side mashups as well as other mashup
styles will be covered in subsequent articles.
In a server-side mashup, the service or content integration takes place in the server. This is in contrast to
a client-side mashup, where the service or content integration takes place in the client, typically a web browser.
A server-side mashup is also called a proxy-style mashup because a component in the server acts as a proxy to the service.
In general, a proxy is a component that acts as an intermediary between two parties. In a server-side mashup,
a component in the server on your web site acts as a proxy between a page in the browser and another web site.
As illustrated in Figure 2, the browser client makes HttpRequests to your web site. Your
web site then does the work of mashing up with the other web site.
Perhaps the biggest challenge in doing a mashup is contending with the basic security protection that the browser security sandbox provides. The browser security sandbox is responsible for keeping personal information secure. Many mashups use Ajax functionality. An XMLHttpRequest is a JavaScript object that is used to exchange data asynchronously between a client and server in an Ajax transaction. To protect against possible maliciousness, most browsers allow JavaScript code that contains an XMLHttpRequest to communicate only with the site domain, that is, the computer system that hosts the web site, from which the page was loaded. The site from which the page is loaded is usually called the server of origin. For example, if the page containing the XMLHttpRequest is loaded from ServerofOriginSite.com, the XMLHttpRequest can only connect to ServerofOriginSite.com. It won't allow the XMLHttpRequest to connect to another site. If the mashup requires a service in a site that is not the server of origin, such as MashupSite.com, there's no way to access it through an XMLHttpRequest. Although the server-of-origin policy adds security, that security makes the creation of mashups more difficult.
In a proxy-style mashup, a server-side proxy -- and not JavaScript code in the client -- accesses the service. Because of that, a server-side mashup is not subject to the browser security sandbox and can connect to the other site -- in this case, MashupSite.com -- to access the service. Note that a Java EE application running in a server can access any other web site.
There are other good reasons for using the proxy style in a mashup as well. For example, as discussed later in this article, the proxy approach lets you keep service application IDs and tokens on the server rather than exposing them in client code.
The Pet Store mashup with the Yahoo Maps
Geocoding service is an example of a mashup that uses the proxy style. Recall that Pet Store uses the
Yahoo Maps Geocoding service together with the Google Maps service to display a map of pet locations. The Google Maps service
actually produces the map. However, to map a location, the Google Maps service requires the location to be specified in terms of
its longitude and latitude. So to display a map as shown in Figure 4,
Pet Store needs a way to convert each pet address. The Yahoo Maps Geocoding service does precisely that: It converts each
address to a longitude and latitude.
Rather than requesting the Yahoo Maps Geocoding service each time a map is requested, Pet Store calls the service once: when
a seller adds a pet to the application's catalog of pets. As part of the information it requires for a new pet entry, Pet Store
asks the seller for an address. When the seller submits the content for the pet, Pet Store calls the Yahoo Maps Geocoding service
to get the latitude and longitude corresponding to the address. Pet Store then stores the longitude and latitude in a database
along with the other information that the seller submits. This frees the application from having to request the Geocoding service
each time a map is needed. Pet Store provides the longitude and latitude to the Google Maps service simply by accessing the database.
The Yahoo Maps Geocoding service is a REST-based web service that is available for use
by other web sites through a public API. To access the service through the API, you construct a URL with the required
parameters and go to that URL. Code Sample 1 shows an example. For formatting purposes, the URL is
displayed on multiple lines. In actual code, you should specify the URL on one line.
?appid=com.sun.blueprints.ui.geocoder
&location=4140%20Network%20Circle,%20Santa%20Clara,%20CA,%2095054
The URL is the request URL path
for the Yahoo Maps Geocoding service. The URL is appended with one or more pairs of request
parameters and values. The appid value identifies the application, and the location value is the address.
Notice that spaces in the address are encoded.
In response, the service returns an XML document that contains the longitude and latitude of the address. For example,
the service request in the previous example returns the XML document shown in Code Sample 2.
<ResultSet>
<Result precision="address">
<Latitude>37.395908</Latitude>
<Longitude>-121.952735</Longitude>
<Address>4140 NETWORK CIR</Address>
<City>SANTA CLARA</City>
<State>CA</State>
<Zip>95054-1778</Zip>
<Country>US</Country>
</Result>
</ResultSet>
If there is an error, the service responds with an HTTP error code and an XML error response message.
Let's look at the code in the proxy class that Pet Store uses in its mashup with the Yahoo Maps Geocoding service.
The proxy class is named GeoCoder. A class such as this would usually be called
from a servlet or JavaServer Faces component that processes the browser client's Ajax call, as shown in
Figure 2.
Code Sample 3 shows a snippet of code from the GeoCoder class. You can look at the
complete code for the class here.
//The URL of the geocoding service we will be using
private static final String SERVICE_URL =
"";
//The default application identifier required by the geocoding service.
//This may be overridden by setting the applicationId property
static final String APPLICATION_ID =
"com.sun.javaee.blueprints.components.ui.geocoder";
//Now the method that does the work
public GeoPoint[] geoCode(String location) {
...
// Perform the actual service call and parse the response XML document,
// then format and return the results
Document document = null;
StringBuilder sb = new StringBuilder(SERVICE_URL);
sb.append("?appid=");
sb.append(applicationId);
sb.append("&location=");
sb.append(location);
try {
document = parseResponse(sb.toString());
return convertResults(document);
} catch (IllegalArgumentException e) {
...
Notice that the geoCode method in GeoCoder makes the call to the geocoding service.
The method takes a String location parameter that represents an address. The location can be formatted
in various address formats such as "city, state" or "city, state, zip".
Both the application identifier (application ID) and the location parameters are also encoded, as shown in
Code Sample 6.
Notice too that the geoCode method does the following:

- Defines the service URL and the default application ID as constants (SERVICE_URL and APPLICATION_ID).
- Builds the request URL with a StringBuilder, appending the appid and location request parameters.
- Opens a stream to the URL through java.net.URL and parses the XML response with a DocumentBuilder.
The geoCode method uses two private methods as helpers. One helper method parses the response from the
geocoding service. The other helper method converts the results after the results are parsed. Parsing and converting
results are common activities when accessing any service. Let's look at the code for the helper methods.
Code Sample 4 shows a snippet of the code for the method that parses the response, parseResponse.
You can view the complete code for the method by examining the code for the
GeoCoder class.
private Document parseResponse(String url)
throws IOException, MalformedURLException,
ParserConfigurationException, SAXException {
DocumentBuilder db =
DocumentBuilderFactory.newInstance().newDocumentBuilder();
InputStream stream = null;
try {
// make call to the mashup website and receive XML document result
stream = new URL(url).openStream();
return db.parse(stream);
} finally {
... clean up and close stream
}
}
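The same DocumentBuilder machinery accepts any InputStream, so the parsing step can be exercised against an in-memory response. The XML below is a trimmed version of Code Sample 2, and this sketch makes no network call:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<ResultSet><Result precision=\"address\">"
                   + "<Latitude>37.395908</Latitude>"
                   + "<Longitude>-121.952735</Longitude>"
                   + "</Result></ResultSet>";
        DocumentBuilder db =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // Parse the in-memory XML instead of a live service response
        Document doc = db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(doc.getDocumentElement().getTagName());  // ResultSet
        System.out.println(
            doc.getElementsByTagName("Latitude").item(0).getTextContent());  // 37.395908
    }
}
```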
The parseResponse method takes a String url parameter that represents the URL of
the resource to be parsed. It then parses the XML content at the specified URL into an XML Document object.
The Document object can be further processed to extract the necessary content,
as shown in Code Sample 5.
The code sample shows a snippet of code for the method that converts the response, convertResults.
You can view the complete code for the method by examining the code for the
GeoCoder class.
private GeoPoint[] convertResults(Document document) {
List<GeoPoint> results = new ArrayList<GeoPoint>();
GeoPoint point = null;
// Acquire and validate the top level "ResultSet" element
Element root = document.getDocumentElement();
if (!"ResultSet".equals(root.getTagName())) {
throw new IllegalArgumentException(root.getTagName());
}
// Iterate over the child "Result" components, creating a new
// GeoPoint instance for each of them
NodeList outerList = root.getChildNodes();
for (int i = 0; i < outerList.getLength(); i++) {
// Validate the outer "Result" element
Node outer = outerList.item(i);
if (!"Result".equals(outer.getNodeName())) {
throw new IllegalArgumentException(outer.getNodeName());
}
// Create a new GeoPoint for this element
point = new GeoPoint();
// Iterate over the inner elements to set properties
...
}
// Return the accumulated point information
return (GeoPoint[]) results.toArray(new GeoPoint[results.size()]);
}
The convertResults method takes a document parameter, which is the parsed Document
object that represents the results from the Yahoo Geocoding service. The method converts the parsed XML results into
an array of GeoPoint objects. The GeoPoint class is yet another helper class in Pet Store.
You can view the code for the GeoPoint class here.
The class represents the location address and the longitude and latitude coordinates for the address.
If there are no results from the Yahoo Geocoding service and no exception is thrown, the convertResults method
returns a zero-length array.
There are various things to consider when you do a server-side mashup. Some of the security-related issues you need to address are described next.
The Yahoo Maps Geocoding service has a number of specific security requirements. To access the service, you must first
set up an account with Yahoo. You must also obtain from Yahoo an application ID, which is a string that uniquely
identifies your application. Note that Yahoo limits the number of times that an account can use the geocoding service. It
does this by tracking usage for each account ID. That's why you must specify an appid parameter in the call to
the service. This identification requirement is fairly common. Many public REST-based web services require a key,
token, or something similar to identify the calling application. For example, the Google Map API
requires application keys, in which the value is specific to a deployed instance. One benefit of using the server as a proxy
to a service is that you do not need to expose the application ID or token of a mashup service in your client code. Instead,
the application ID or token is only used on the server and not passed to each client's browser.
The Yahoo Maps Geocoding service is a REST-based service. To call it, you specify the appropriate URL along with
any parameter-value pairs. If you examine the GeoCoder class, you'll
also notice that it encodes the service call's URL. Encoding the URL to a generally accepted format makes the request
as transportable as possible. This is shown in Code Sample 6. The caller encodes, prior to calling
the URL, any escape characters in parameters. The caller also decodes any escape characters in fields that are returned
from the URL.
// URL encode the application ID
String applicationId = getApplicationId();
try {
applicationId = URLEncoder.encode(applicationId, "ISO-8859-1");
} catch (UnsupportedEncodingException e) {
if (logger.isLoggable(Level.WARNING)) {
logger.log(Level.WARNING, "geoCoder.encodeApplicationId", e);
}
throw new IllegalArgumentException(e.getMessage());
}
// URL encode the specified location
try {
location = URLEncoder.encode(location, "ISO-8859-1");
} catch (UnsupportedEncodingException e) {
if (logger.isLoggable(Level.WARNING)) {
logger.log(Level.WARNING, "geoCoder.encodeLocation", e);
}
throw new IllegalArgumentException(e.getMessage());
}
In addition to encoding the parameters and assembling the URL, the code in the server needs to make a request to
the service at that URL. As Code Sample 4 shows, Pet Store uses the
java.net.URL class to open a connection to this URL and get an InputStream for reading from that connection.
However, there are other ways to call a service. Java EE and Java SE technologies provide a variety of
ways to make a call to a service. For example, you could use the Java API for XML-Based Services (JAX-WS),
Java sockets, or many other technologies available to Java EE 5 developers.
The mashup with the Yahoo Maps Geocoding service uses a URL and represents a relatively simple interaction.
But if a service requires a more complex interaction -- for example, it requires a digital certificate -- you could use
a method more suited to handling that type of request.
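Code Sample 4 itself is not reproduced in this excerpt. As a rough, self-contained sketch of the pattern it describes — open a stream for the service URL and read the whole response — the following substitutes a canned byte stream for the live connection; the class name and the tiny XML snippet are illustrative, not Pet Store's actual code:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ServiceCallSketch {

    // Read an entire InputStream into a String. The same loop works on the
    // stream returned by new java.net.URL(serviceUrl).openStream().
    static String readAll(InputStream in) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // A canned response stands in for the live service so the sketch is
        // self-contained; the element names mirror the ResultSet structure.
        InputStream in = new ByteArrayInputStream(
                "<ResultSet><Result/></ResultSet>".getBytes("UTF-8"));
        System.out.println(readAll(in).trim());
    }
}
```

In the real application the stream would come from the opened URL connection rather than a byte array, but the read loop is the same.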
When you do a server-side mashup, you need to determine the format of the response that the service returns.
A service can return a response in many different formats including XML, JSON, HTML, plain text,
RSS/ATOM, and GData. In the Pet Store mashup with the Yahoo Maps Geocoding service, the client
doesn't display the data returned by the service. It simply uses the data as input to the Google Maps service.
The Yahoo service returns an XML document, as you can see in Code Sample 2.
Pet Store needs to convert the XML document to a Java object. As Code Sample 5 shows, the results
are parsed into an array of Java objects. The array is stored in a database for subsequent use when a user wants to see a map
of pets for sale in a particular area. Because many public APIs provide the response in XML, the server-side code
must often convert the response into another data type.
Other applications might need to convert the data returned by a service to another format such as JSON. Data in formats such as
JSON can be easier for the browser to handle. By using the server as a proxy to a service, the server
does the work of parsing an XML document and possibly converting it to a format such as JSON.
In that case, your client JavaScript code can be simpler because it doesn't have to parse and convert the XML.
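As an illustration of that conversion step — not the actual Pet Store code — here is a minimal sketch that parses a tiny ResultSet document with the JDK's DOM parser and pulls the coordinate values out of each Result element; the sample latitude and longitude are made up:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConvertSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<ResultSet>"
                + "<Result><Latitude>37.77</Latitude><Longitude>-122.41</Longitude></Result>"
                + "</ResultSet>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // Pull each Result element out of the parsed tree, the way
        // convertResults walks the document to build GeoPoint objects.
        NodeList results = doc.getElementsByTagName("Result");
        for (int i = 0; i < results.getLength(); i++) {
            Element result = (Element) results.item(i);
            String lat = result.getElementsByTagName("Latitude").item(0).getTextContent();
            String lon = result.getElementsByTagName("Longitude").item(0).getTextContent();
            System.out.println(lat + "," + lon);
        }
    }
}
```

From values like these, the server can populate Java objects, or emit JSON for the browser, without the client ever seeing the raw XML.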
Will the response from the service be used by the client, or will the data be stored on the server?
In Pet Store's mashup with the Yahoo Maps Geocoding service, the data provided by the service is not sent back to the client.
Instead, it is stored in a database on the server for subsequent use by the Google Maps component to pinpoint pets for sale
on the map. If Pet Store did not store the results, it would have to make a request to the Geocoding service
for each client request for a map and for each point shown on the map. This is a good reason for doing the mashup with the
geocoding service as a server-side mashup rather than a client-side mashup.
Map data can change. For example, street addresses and street names change. So you might need a mechanism for updating
data that might go out of date. It's relatively easy to write code on the server to reaccess the mashup service if you need
to update the data cached from a service.
The RSS feed in Pet Store is another example of data that has been cached from a service for reuse.
After data is retrieved from the live RSS feed on the Java Blueprints project site, the server side of the Pet Store application
parses the RSS XML document and converts it to JSON. It then caches the data. The cached data in JSON format is used to fulfill
Ajax requests from all Pet Store clients for display in the news bar.
You must determine whether the server side of the application will need to modify the response from the service before the
client can use it or whether the application can directly pass it to the client. As mentioned in Caching
of Results, the RSS news feed data used by Pet Store is further processed before it's sent to the client.
The GeoCoder proxy class handles exceptions that could occur when accessing the Yahoo Maps Geocoding service.
In general, it's good practice to handle exceptions that could occur when accessing a service. Additionally, it's good practice
to validate the input data to a service. For example, GeoCoder validates the address
input string before sending it to the geocoding service. Doing this can help uncover error conditions before
the call to another web site and it could help you respond more quickly to the user. For instance, you could prompt
the user to correct the entry.
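A pre-flight check of that kind might look like the following sketch; the rule shown (non-null, non-blank, bounded length) is an assumption for illustration, not GeoCoder's actual validation logic:

```java
public class ValidateSketch {
    // Hypothetical plausibility check run before calling the remote service.
    static boolean isPlausibleLocation(String location) {
        return location != null
                && !location.trim().isEmpty()
                && location.length() <= 200;
    }

    public static void main(String[] args) {
        System.out.println(isPlausibleLocation("10 Main St"));
        System.out.println(isPlausibleLocation("   "));
    }
}
```

Rejecting obviously bad input this way avoids a round trip to the remote site and lets the application prompt the user immediately.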
Also, web sites and public APIs used in mashups have very different mechanisms for responding to
exception conditions. So your error-handling code can get convoluted. By using the server as a proxy to
a service, you can shield the client JavaScript code from many of these details and keep it more loosely coupled.
In this way, you can lessen the impact on your client code if the mashup service changes its API.
You can design a mashup in various ways. In one approach, called a server-side mashup, also known as a proxy-style mashup,
you integrate services and content on the server. There are a number of good reasons for using the server-side mashup approach,
not the least of which is that the approach overcomes browser security restrictions that
constrain XMLHttpRequests and the ability to access web sites across domains.
The next article in this series focuses on client-side mashups, in which you integrate services and content in the client.
You'll be able to compare and contrast the client-side and server-side mashup styles and identify which, if either, of these
approaches better suits your needs.
Ed Ort
is a staff member of java.sun.com. He has written extensively about relational database
technology, programming languages, and web services.
Video: solder paste reflow!
@helenleigh Okay so watching solder paste heat melt under a microscope is so insanely satisfying. I can’t believe I’ve never seen this before! Blissfully geeking out with @pdp7 and @talldarknweirdo in the @conservify office after #supercon
435 6:20 PM - Nov 6, 2018
Bring your Open Hardware Summit badge to Hackaday Supercon in Pasadena this weekend!
Drew Fustini will have the badge programming jig with updated firmware featuring the MicroPython WebREPL, an accelerometer demo, and a Magic 8-Ball app by Steve Pomeroy
Drew Fustini will also have USB-to-serial adapter boards for the badge to share!
The 2018 Open Hardware Summit badge runs MicroPython firmware, which allows for an interactive programming experience known as the REPL.
The board has been shared on OSH Park:
Want to use the KX122-1037 Accelerometer (datasheet) on the 2018 Open Hardware Summit badge?
Make sure that R12 and R13 are populated.
R12 and R13 are 2.2K Ohm resistors for the I2C bus. This is needed for the accelerometer to work. We mistakenly had DNP (do not place) on the BoM (Bill of Materials) for R12 and R13.
Awesome people at Artisan’s Asylum makerspace helped to solder these resistors on the badges right before Open Hardware Summit!
It is possible that some badges were not reworked. Please email drew@oshpark.com if they are missing from your badge.
This photo shows what it will look like when R12 and R13 are missing:
Download the Python file named accelerometer.py from the ohs18apps repository on GitHub:
Start the FTP server and connect to the SSID listed on the badge:
Open your FTP client application and connect to 192.168.4.1:
After the transfer completes, power cycle the badge by removing the batteries and reinserting.
Press the left application button (with the paintbrush and pencil icons) to enter the menu. accelerometer.py should then be listed under Available Apps menu. Press the down cursor until accelerometer.py is selected and then press the application button again.
The KX122-1037 Accelerometer datasheet describes the 3 different axes:
Here are examples of the X, Y and Z axes of the accelerometer for reference:
Thanks to @Steve Pomeroy for creating this MicroPython demo app for the Open Hardware Summit badge:
import gxgde0213b1
import font16
import font12
import urandom
import time

phrases = [."]
epd.clear_frame(fb)
epd.set_rotate(gxgde0213b1.ROTATE_270)
epd.clear_frame(fb)
epd.display_string_at(fb, 0, 60, urandom.choice(phrases), font16, gxgde0213b1.COLORED)
epd.display_frame(fb)
time.sleep(2)
This Python file can be transferred to the Open Hardware Summit badge using the FTP server built into the MicroPython firmware:
photo gallery:
Plan9 on raspberry pi 3
May 31, 2018 20:55 · 587 words · 3 minute read
Plan9 is a research operating system from the same group who created UNIX at Bell Labs Computing Sciences Research Center (CSRC). It emerged in the 80s, and its early development coincided with continuing development of the later versions of Research UNIX. Let's install it on the raspberry pi 3!
1. Installing Plan9
To install the plan9 os you need a micro sdcard with at least 2GB (I use a 4GB one).
You probably need to log in as root in order to run the following commands
Check the name of your sd card:
fdisk -l
For me it’s
/dev/sdc.
Format the device to FAT32:
mkfs.vfat -n 'PLAN9' -I /dev/sdc
Download the plan9 image and write to your sdcard or save it somewhere:
Remember to change your device!
wget -O - | \
gunzip -c | dd bs=4M of=/dev/sdc status=progress
Now you are ready to boot Plan9 on raspberry pi!
2. First steps
Since the plan9 image you just downloaded is a fully bootable one, it does not require an installation step.
When the pi boots up, you get into the Plan9 graphical interface, rio(1). Rio is a minimalist window manager, it’s just a bunch of rectangles that you draw in the screen 😄.
To use Plan9 you definitely need a three button mouse. Hold the right button, hover
New and release. Again hold the right button, drag a rectangle big enough and release. Now you have a shell, rc(1). First of all check your internet connection:
I’m using ethernet, probably wifi only work on plan9 forks like 9front.
ndb/dns # start the dns resolver ip/ipconfig # configure interfaces and get ip address with dhcp ip/ping google.com # check internet connection
In rc
Ctrl + cnot work, try
Delto stop the ping.
As you can see, some of the commands are similar with the UNIX ones, but Plan9 is an entirely different operating system in several ways. You can also note the concept of namespaces, all ip related commands are in
ip/, disk in
disk/ and so on.
If you receive an error about dns resolution, try to reboot and run DNSSERVER=8.8.8.8 before the commands above.
Now let’s try to use a web browser. You are thinking about Chrome? Firefox? Nop. Plan9 has it’s own web browser, abaco(1)
webfs # start a filesystem that handle urls abaco ''
It’s pretty ugly I know, no css, no js, but it works!
The network settings are not persisted if you reboot, so just like your
.profile in bash, add the commands to
$home/lib/profile.
3. Installing Git
Unfortunately git is not ported to Plan9, but David du Colombier wrote a rc script to mimic the original git commands:
hget -o $home/bin/rc/git chmod +x $home/bin/rc/git
The rc scripts are stored in the
$home/bin/rc directory and automatically appended to /bin, so you can run them in the global namespace.
4. Installing Golang
hget -o go1.10.tbz
tar xvf go1.10.tbz && mv go-* go
# Go configs
mkdir -p gopath/bin
GOPATH=$home/gopath
GOBIN=$GOPATH/bin
GOROOT=$home/go
# append the go binaries to the end of the /bin entries
bind -a $GOROOT/bin /bin
bind -a $GOBIN /bin
Don’t forget to add the binds and envs to
$home/lib/profile too
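Putting the two profile notes together, a $home/lib/profile that persists both the network setup and the Go environment might look like this sketch (rc syntax; it simply collects the commands already shown in this post, so adjust paths if your layout differs):

```rc
# start the dns resolver and configure the network at login
ndb/dns
ip/ipconfig

# Go environment
GOPATH=$home/gopath
GOBIN=$GOPATH/bin
GOROOT=$home/go
bind -a $GOROOT/bin /bin
bind -a $GOBIN /bin
```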
Let’s check:
go env
I’m in love with Plan9, it’s such a cool operating system to learn. If you are interested too check the following resources:
- 9front
- 9p.io
- 9legacy.org
- Book: Introduction to Operating System Abstractions with Plan9
- Book: Notes on the Plan 9 Kernel Source
Expect more posts in the future. See you again next time 😄. | https://mauri870.github.io/blog/posts/plan9-on-raspberry-pi-3/ | CC-MAIN-2019-35 | refinedweb | 635 | 71.14 |
Keith Beattie <KSBeattie at lbl.gov> wrote: > How do I pass the namespaces into minidom.parseString(), or > Domlette.NonvalidatingReader.parseString(), such that they'll be happy > with the 'unbound prefix'? I know of no convenient way of doing this with either minidom or domlette. Probably the quickest solution is to hack the input content so it's surrounded with an element declaring all the known namespaces, then ignore the root element of the result. Alternatively, the DOM Level 3 method parseWithContext would let you insert directly into the relevant part of the document (with namespaces declared above). pxdom supports this method and the domConfig parameter 'canonical-form', so that might be a possibility too. -- Andrew Clover mailto:and at doxdesk.com | https://mail.python.org/pipermail/xml-sig/2003-December/010044.html | CC-MAIN-2017-22 | refinedweb | 121 | 50.43 |
Taxes and Tax Preparation
Levies imposed by the government on individuals or organizations such as income taxes, sales taxes, property taxes, and capital gains taxes
If your wife gives you her old car, do you have to pay any tax when you get your plates?
You shouldn't need plates if your wife gave you the car. I'm a little confused by your question. If you don't get a good answer, resubmit it with more details. probably not, but you'll find out when you go to register it.
Popularity: 134
If you are buying a car from a private seller do you pay them sales tax?
It depends on the state laws for the state the transaction takes place in. There will be fees required at the DMV office: registration and licensing. There may also be a requirement in some states for a smog certification, which will have to be conducted.
Popularity: 139
Can the dealer add their doc fee deputy fee and dealer inventory tax to the sales price in Texas?
Here is another tip. This one is from a peer-reviewed article. Pass Through of Tax--tax inventory fee While VIT is actually one component of the dealer's ad valorem tax, state law allows dealers to pass on to retail buyers the unit tax attributable to that sale. The charge on a retail contract for t…
Can tax liens be dismissed under Chapter 7 or Chapter 13?
Even if you discharge a tax debt in a bankruptcy (which can be done in limited circumstances), the lien associated with that debt is not released by bankruptcy proceedings. The result is that you may come out of bankruptcy with no tax liability, but there may still be a lien on your property. That …
Popularity: 114
What is the tax structure for a US citizen in Canada who owns a resort?
this is probably a complex question that no self respecting lawyer would dare touch on this site. if you are really concerned, get legal represtation. qualified legal representation.=====The first question is whether the resort is owned by the individual personally, or whether it is owned by a corpo…
Popularity: 132
Can you claim back taxes on a bankruptcy?
Claiming Back Taxes in Bankruptcy Only if they are more than three years old and meet specific requirements More specifically, yes, taxes may be discharged in bankruptcy and are in a surprisingly low claim priority position....I believe 7th. Generally, income taxes for periods more than 2 or 3 year…
Popularity: 97
You want to buy a car from a private party how much taxes and fees will have to pay?
Answer Depends on the state you are in. Pick up your phone book and call the state motor vehicle dept office in your town, or in some states the county clerk does it, or here in NH it's the town clerk. In GA, you don't pay sales tax if a private party; only dealer sales are subject to sal…
Popularity: 109
Is there any way to avoid paying sales tax on a lower priced new vehicle when the old higher priced vehicle was surrendered during bankruptcy?
Answer No. Bankruptcy has no impact on your duty to pay sales tax on purchases made after you file for bankruptcy.
Popularity: 94
If you file a bankruptcy 7 in August what happens with your tax refund that you are entitled to the next year and thereafter?
Answer.
Popularity: 102
Do you pay taxes on a settlement from your last employer?
Paying Taxes on a Settlement IRS regulations can be very complicated; these types of cases usually need to be addressed on an individual basis. In general if the monies replace taxable income then the entire amount is subject to taxation. If it is a personal injury award taxes could be owed on a portio…
Popularity: 102
What is the meaning of the letters in a UK national insurance number?
The letters are used to route traffic and result in the location of the policies that are needed by policy holders and by their carriers who offer them insurance.
Popularity: 166
Popularity: 108
Do you need to report information on a 1099-C to the IRS?
Answer A 1099-C is given by someone who has discharged or forgiven a debt to you. It is reported to the IRS by them on that form. (So your reporting it is telling them something they already know). As this constitutes taxable income to you, you must account for it on your return (an…
Popularity: 117
Are bartenders fees taxable?
Answer While I am not exactly certain what you mean by "bartenders fees", presumably either - 1) Wage paid to a bartender where tips are expected on top - (hence sometimes allowed to be below minimum wage - like waitresses) - 2) Tips/gratuities 3) A flat fee paid to a bartender for serv…
Popularity: 141
What is the current federal inheritance tax rate?
For 2011, the federal estate tax exemption will be $5 million and the estate tax rate for estates valued over this amount will be 35%. The estate tax has also become unified with federal gift and generation-skipping transfer taxes such that in 2011 the lifetime gift tax exemption and generation-skip…
Popularity: 103
What is the reason the city collect's sales tax?
For city maintenance, Streets, Buildings, Police pay, Electric, water, gas, city parks and misc. properties, Lets Audit them and see how much of the funds get wasted, I bet if you wasted as much as any Government (I.E. City, County, State, Federal) Your competition would put you out of buise…
Popularity: 138
How much income do you have to earn before you file income tax?
You must file taxes if you earn the following amounts of income:
Self-employed, any age: $400
Children and teens classified as a dependent: $5,700
Single, under 65: $9,350
Single, over 65: $10,750
Married, filing jointly, both spouses under 65: $18,700
Married, filing jointly, one spouse over 65: $…
Popularity: 120
How does deed in lieu work?
The deed in lieu is pretty straightforward. In short, it means that the mortgage creditor will accept the deed of the house in lieu of payment when the debt owner is no longer able to pay upon the debt. When this happens, the home owner surrenders the property and moves out saving the mortgage credi…
Popularity: 156
How many US citizens cheat on their taxes?
An estimated 30 percent to 40 percent of taxpayers cheat on their returns. A recent report by the Commerce Department found ... a 37 percent increase in unreported income from 2000. In a separate report, the Internal Revenue Service looked at both unreported income and improper deductions and con…
Popularity: 97
Can you use a starter check to pay taxes?
Yes
Popularity: 142
Can you write off a firearm on taxes?
If it was a legitimate business related expense you can.
Popularity: 99
Mclean va. tax assessors office phone number?
The phone number for the Tax Assessor for Fairfax County is (703)222-8234. The below website "Free Public Records Directory" has links for Fairfax County in case you have other questions.
Popularity: 112
Where do you mail form 941 in NY?
Popularity: 121
Is a live in girlfriend a tax deduction?
This is a qualified Yes (with some exceptions). They are an exemption as a dependent, but not someone for whom you can claim dependent care expenses unless they are disabled. You can claim educational expenses you paid for this person. The new tax laws allow it if they meet the residency requirement (lives t…
Popularity: 163
How much federal taxes do you owe if you made 250000 filing jointly with dependents?
No two people pay the same tax, even with the same "earnings".And, considering all the major businesses, with hundreds if not thousands of stores, and other independents, and yet more services, - and all the software that is sold every year...all to help people determine how much tax is due...does i…
Popularity: 102
Can you file bankruptcy while still owing back taxes?
Absolutely. They will be handled as part of the process.
Popularity: 93
How much tax do you pay or owe?
FEDERAL TAXES ARE THE SAME THROUGHOUT THE COUNTRY...YOUR STATE MAKES NO DIFFERENCE. (However, State tax laws are specific to each area, and generally follow the Federal guidelines, but not always). Whether you have to file a tax return (or pay tax) depends, in part, on your filing status, deductions, …
Popularity: 102
What if you never paid any income taxes?
If you never owed any, that's fine. Of course, like most that way, you're basically poor enough and unsuccessful enough that it makes little difference anyway. Except of course that you missed out on the many benefits for low income people that they get by filing. If you have somehow managed to make a…
Popularity: 132
How long does a tax audit take?
Depends on what developes and how quickly you agree to suggested resolution
Popularity: 133
Can you deduct credit card interest?
No
Popularity: 101
Do you claim a timeshare as an asset?
Yes suggested revision In some ways, it is an asset. It can provide the owner a great location to relax and have fu during vacation week. However, it has a corresponding liability. The owner must pay the increasing maintenance fees. ans addressing above In all ways the property, like your house or…
Popularity: 148
How much is taxed on a one million dollar income?
roughly 400,000 bucks
Popularity: 111
If you buy something for 5 how much is tax?
the tax depense on what you buy.
Popularity: 102
How people evade tax?
There are only two ways to EVADE taxes: 1) Under-report income. 2) Take deductions you are not entitled to. Tax AVOIDANCE, on the other hand, involves legally structuring your finances to minimize the amount of taxes owed.
Popularity: 157
What happens if you file a tax return without sending payment?
After the due date passes, penalties and interest will be added to the unpaid taxes. For federal income taxes, the interest rate is currently 4% annually, but the interest rate is adjusted every quarter. The penalty is 0.5% per month for each whole or partial month. Even if you can't pay your taxe…
Popularity: 108
NJ long term capital gains tax?
Unlike the federal government, NJ does not have a special long term capital gains rate. All capital gains are taxed at the same rates as ordinary income.
Popularity: 150
How do you journalize a sales tax payable?
Since Payabl…
Popularity: 152
Shop without paying sales tax in US?
No, in most states you are required to pay sales or use taxes on your purchases based upon where you live, not on where you buy things. If an item is tax-exempt where you live, and you purchase by phone, mail, online or by traveling out of state to buy it and bring it back, then it is tax-exempt in …
Popularity: 93
Do I need to pay tax for online business?
Absolutely...the method you use makes absolutely no difference to the requirement to pay and, in most cases, even collect tax. Phone orders, internet orders and in person orders are basically all the same. The not taxing internet you may be thinking of, which has now been basically gotten around by …
Popularity: 105
What are the implications of not paying taxes?
Penalties, fines, not receiving benefits (like social security, retirement, health & disability, etc) and of course, criminal record which may include jail time.
Popularity: 124
Whether unrecognized tax benefits is a tax portion?
An unrecognized tax benefit is the difference between the tax benefit reflected on the income tax return and the amount of the benefit recorded on the financial statements. Example: taxpayer deducts $100 on its return but believes that a $60 deduction will be the most likely outcome in a negotiated …
Popularity: 153
Which tax pays for the United States Army?
Federal taxes...several different ones may contribute.
Popularity: 135
Does net amount received include tax or not?
Yes
Popularity: 159
When a person dies how do you file their taxes?
When you are the personal representative of the taxpayer estate, etc. you are the administrator, executor or the person in charge of the taxpayer property. You will use one of the 1040 tax forms for the final return of the taxpayer.After entering the deceased name, and date of death at the top of th…
Popularity: 126
What is a schedule M on a 1040A tax return?
This schedule was for 2009 only and it was to figure the 'Making Work Pay and Government Retiree Credits.' Everyone who is not someone else's dependent (most adults not being taken care of by mom or dad) and who works gets the 'Making Work Pay' credit and will want to fill out this schedule. It's u…
Popularity: 133
Owe federal tax of $10,000; what do I owe with reductions?
There was a mistake on my last years return. The IRS says we owe $3600+. We are living on my husband's disability social security and have a son in college. I am still unemployed after being laid off over three years ago. Is there any way to get this reduced? ans You're obviously confused (saying you o…
Popularity: 152
Do elderly pay taxes on lottery winnings?
Yes. The full amount of your (Lottery) gambling winnings for the year must be reported on line 21 other income of the IRS Form 1040 page 1.The amount will be added to all of your other gross worldwide income and taxed at your marginal tax rate.
Popularity: 165
Industrial injury compensation is this taxable?
Amounts you receive as workers' compensation for an occupational sickness or injury are fully exempt from tax if they are paid under a workers' compensation act or a statute in the nature of a workers' compensation act. The exemption also applies to your survivors. The exemption, however, does not a…
Popularity: 114
Can you include utilities bill for your income tax?
Sure if you have a business then you can use the utility bills as your deductions
Popularity: 152
Is income tax calculated before or after expenses?
Your income tax liability will be determined on your net profit. Go to the IRS gov web site and at the top choose BUSINESSES. Click on the below Related Link.
Popularity: 139
What line in 1040 a is total federal tax paid?
Page 2 line 71 of the 1040 income tax return TOTAL PAYMENTS 71 $$$$
Popularity: 150
Can you file your taxes in any state?
It doesn't matter from where you file (mail) your taxes. However, you don't get to pick what state you want to file a tax return for. In general, you are required to file a state return for the state in which your primary residence is. If you have income that derives from a source in another state (…
Popularity: 98
What are non-statutory deductions?
It's a voluntary deduction from the pay of an employee, like: 1. subscription to a trade union 2. contributions to a pension scheme 3. deductions under holiday pay schemes etc. a.r.
Popularity: 105
How much tax is taken out for the Texas lottery?
If you have any kind of gambling winnings the payer may have to withhold income tax at a flat 28% rate. When you complete your federal income tax return correctly and your lottery winnings are large enough, your marginal tax rate maximum amount at this time for the 2009 and 2010 tax year was and is…
Popularity: 96
How can you expedite the tax return process?
Efile online. Taxact.com-federal is free.
Popularity: 104
What percent of medical bills are tax deductible?…
Popularity: 118
What does not tax exempt mean?
NOT tax exempt would mean that SOME TAXES MAY BE OWED ON THE TYPE OF INCOME that has been received for this purpose. And IF this is a NONPROFIT CHARITABLEorganization a type of UNRELATED BUSINESS INCOME that will have to be reported on the income tax return and some taxes paid on the UBI that was re…
Popularity: 121
What happens if you forget to file your state income taxes?
If you do NOT get it filed to your state it is possible that you will be receiving a bill from the state tax department at some time in the future with the amount if any of past due taxes plus the penalties and interest that will be due when you do receive the bill it could be 2 or 3 years or less tha…
Popularity: 160
What are the salary income tax rates for Egypt for 2010?
Found 32 results when searching for SALARY TAX RATES at egypt.gov.eg/english/laws/ and egypt.gov.eg/General/tipdetails.aspx?ID=652. New Service: 2009 Income tax return. Now you can file your 2009 taxes (Individuals) online on the government portal.
Popularity: 160
Do retirees have to file an income tax return?
Whether you have to file a tax return (and the really important but different thing is PAY A TAX) depends, in part, on your filing status, deductions, amount & type of income. There are no such things as "start and stop" ages, not having to pay because of retirement or on social security or work…
Popularity: 142
What percent do retired people pay in taxes?
Retired people don't pay taxes, they get paid instead by the government. ans The above is entirely wrong. I suspect he may mean that "retired" people get social security - which is not at all always true. one can retire at any age, and frequently one retires because they have made or accumulated…
Popularity: 134
The claims of creditors against the assets are?
Liabilities
Popularity: 93
Does a taxpayer have taxable income from debt that was canceled in a decedent's will?
The answer is No (with no qualifiers). That's what I answered for a tax test and it was correct. ans I'm not sure what the above relies on other than not being marked wrong. Cancellation of debt is income. If it is cancelled by the estate of a decedent it would be taxable just as if he was alive, or m…
Popularity: 138
Are tax providers providing a 2010 holiday loan?
Yes, Jackson Hewitt and H&R Block are providing holiday loans in 2010. H&R Block has committed to providing the loans in early December.Jackson Hewitt has committed to providing the loans in mid-November.
Popularity: 140
Can you charge taxes on haircut?
Generally there are no taxes on a service. ans There was a time that the above was a good generalization. No longer! While it depends on the State, over the last decade many if not most have started some form of taxes on services, and personal services (as compared to professional services) are generally…
Popularity: 114
What is a pull tax?
I believe you mean a "poll" tax, which was a tax on voting, basically...
Popularity: 227
Does the car dealer have to collect state sales tax when selling a car in the state of Arkansas?
yes.
Popularity: 119
When did earned income tax credit start?
It was enacted in 1975, but has seen many revisions over the years.
Popularity: 125
Is the service charge taxable in Singapore?
Yes. In Singapore, the service charge, if imposed by a business, is taxable under GST according to the IRAS.
Popularity: 97
When will Missouri tax forms be ready?
According to Turbo Tax, the forms are scheduled to be ready on January 15. But as they are not yet and today is the 16th, hopefully soon.
Popularity: 127
Does Oregon have a state income tax?
Yes...and it is high
Popularity: 146
Do you charge sales tax on used clothing?
Not both taxes just one
Popularity: 155
Can you file taxes if you've never had a job?
yes you can!
Popularity: 105
Can you put yourself as a dependent in your taxes?
In effect, that is what you are doing when you take the exemption for yourself when you pay taxes. You are "dependent" upon yourself. You are expected by the IRS to take your personal exemption.
Popularity: 139
Is workmanship comp taxable in California?
In California, generally benefits under Workers' Compensation such as temporary disability benefits are exempt from federal, state or local income tax. Also you don't have to pay Social Security, taxes, union dues or retirement fund contributions when on Workers' Comp.
Popularity: 134
Popularity: 151
When can you file long form tax returns?
I heard it was February 14th.
Popularity: 145
Do you have to pay capital gains tax in PA?
I think so...
Popularity: 130
Is an LLC required to file tax return if no income?
In certain states, all corporation are required to file a tax return regardless of income. This is also to pay their annual dues or fees to the state.
Popularity: 134
Popularity: 154
How can taxes be shifted backward?
You can either income average over multiple years ( which is best utilized if you have large swings in income from consecutive years.) or you can apply Net Operating Losses forward or backward with the Form 1065 to reduce your taxes in a certain year. You can only do this if you had a net operating …
Popularity: 148
How much is the state tax in Texas?
8.25%
Popularity: 135
What is the last date of income tax return submission for 2011-2012?
Last date for filing Income Tax Return for AY 2012-13 is 31st July 2012. Lord knows what you're trying to ask (if it is a US tax Q). You can/must file forever; there is never a time that you don't. You can pay estimated amounts without filing and have no financial detriments. If you underpaid, yo…
Popularity: 115
What is the purpose of a federal tax form 1040?
The purpose to track how much money one has made during a year and how much has been taken away due to taxes and social security. Filling out this form helps one receive a refund check.
Popularity: 99
What age do you pay FICA Tax?
In most cases, there is no minimum age for payroll tax withholding. Parents do not have to withhold payroll taxes on payments for services of a child under the age of 18 who works in a trade or business if the trade or business is a sole proprietorship or a partnership in which each partner is a par…
Popularity: 95
Why do millionaires pay less tax?
They have enough money for good lawyers. ans Oh what a stupid question! Yes, there are stupid questions! They don't pay less tax...in fact they pay much, much more tax. They just may pay less of a percent of what they earn and have. Don't you think it's kind of dumb and really unfair they have…
Popularity: 97
Several Cash Flow and NPV Problems
This content is from BrainMass.com.
1. (Payback and discounted payback period calculations) The Bar-None Manufacturing Co. manufactures fence panels used in cattle feed lots throughout the Midwest. Bar-None's management is considering three investment projects for next year but doesn't want to make any investment that requires more than three years to recover the firm's initial investment. The cash flows for the three projects (Project A, Project B, and Project C) are as follows:
Year Project A Project B Project C
0 $(1000) $(10,000) $(5,000)
1 600 5,000 1,000
2 300 3,000 1,000
3 200 3,000 2,000
4 100 3,000 2,000
5 500 3,000 2,000
a. Given Bar-None's three-year payback period, which of the projects will qualify for acceptance?
b. Rank the three projects using their payback periods. Which project looks the best using this criterion? Do you agree with this ranking? Why or why not?
c. If Bar-None uses a 10 percent discount rate to analyze projects, what is the discounted payback period for each of the three projects? If the firm still maintains its three-year payback policy for the discounted payback, which projects should the firm undertake?
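The payback arithmetic for the table above can be sketched in a few lines of Python (an illustrative helper, not part of the original assignment materials; passing a nonzero rate gives the discounted payback of part c):

```python
def payback_period(outlay, inflows, rate=0.0):
    """Years needed to recover `outlay`; discounts inflows when rate > 0."""
    recovered = 0.0
    for year, cf in enumerate(inflows, start=1):
        cf = cf / (1 + rate) ** year      # discounted payback when rate > 0
        if recovered + cf >= outlay:
            # fraction of the final year needed to finish recovering the outlay
            return year - 1 + (outlay - recovered) / cf
        recovered += cf
    return None                           # outlay is never recovered

a = payback_period(1000, [600, 300, 200, 100, 500])        # 2.5 years
b = payback_period(10000, [5000, 3000, 3000, 3000, 3000])  # about 2.67 years
c = payback_period(5000, [1000, 1000, 2000, 2000, 2000])   # 3.5 years
```

On an undiscounted basis only Projects A and B come in under the three-year cutoff; at a 10 percent discount rate, none of the three recovers its outlay within three years.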
2. (Net present value calculation) Big Steve's, makers of swizzle sticks, is considering the purchase of a new plastic stamping machine. This investment requires an initial outlay of $100,000 and will generate net cash inflows of $18,000 per year for 10 years.
a. What is the project's NPV using a discount rate of 10 percent? Should the project be accepted? Why or Why not?
b. What is the project's NPV using a discount rate of 15 percent? Should the project be accepted? Why or why not?
c. What is this project's internal rate of return? Should the project be accepted? Why or why not?
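All three parts can be checked numerically; a minimal sketch (illustrative, not the official BrainMass solution):

```python
def npv(rate, outlay, inflows):
    # Present value of the inflows less the time-0 outlay
    return -outlay + sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, 1))

flows = [18000] * 10
npv10 = npv(0.10, 100000, flows)   # roughly +10,602 -> accept at 10%
npv15 = npv(0.15, 100000, flows)   # roughly -9,662  -> reject at 15%

# IRR by bisection: the rate where NPV crosses zero (NPV falls as rate rises)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid, 100000, flows) > 0:
        lo = mid
    else:
        hi = mid
irr = (lo + hi) / 2                # roughly 12.4%
```

The sign flip between 10 and 15 percent already tells you the internal rate of return sits between the two.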
3. (Incremental earnings from advertising synergies) Bangers, Inc. is a start-up manufacturer of Australian-style frozen veggie pies located in San Antonio, Texas. The company is five years old and recently installed the manufacturing capacity to quadruple its unit sales. To jump start the demand for its products, the company founders have hired a local advertising firm to create a series of ads for its new line of meat pies. The ads will cost the firm $400,000 to run for one year. Boomerang's management hopes that the advertising will produce annual sales of $2 million for its meat pies. Moreover, the firm's management expects that sales of its veggie pies will increase by $200,000 next year as a result of the company name recognition derived from the meat pie ad campaign. If Boomerang's operating profits per dollar of new sales revenue are 40 percent and the firm faces a 30 percent tax bracket, what is the incremental operating profit the firm can expect to earn from the ad campaign? Does the decision to place the ad look good from the perspective of the anticipated profits?
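One common reading of the arithmetic (assuming the $400,000 ad cost is expensed against the new profit in the same year) works out as follows:

```python
new_sales = 2_000_000 + 200_000             # meat pies plus veggie-pie spillover
gross_profit = 0.40 * new_sales             # 40 cents of operating profit per sales dollar
pretax_increment = gross_profit - 400_000   # less the one-year ad spend
after_tax = pretax_increment * (1 - 0.30)   # 30 percent tax bracket
# gross_profit = 880,000; pretax_increment = 480,000; after_tax = 336,000
```

Since the incremental profit is comfortably positive, the campaign looks good from the anticipated-profits perspective.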
4. (Calculating project cash flows and NPV) You are considering expanding your product line that currently consists of skateboards to include gas powered skateboards, and you feel you can sell 10,000 of these per year for 10 years (after which time this project is expected to shut down, with solar-powered skateboards taking over). The gas skateboards would sell for $100 each with variable costs of $40 for each one produced, and annual fixed costs associated with production would be $160,000. In addition, there would be a $1,000,000 initial expenditure associated with the purchase of new production equipment. It is assumed that this initial expenditure will be depreciated using the simplified straight-line method down to zero over 10 years. The project will also require a one-time initial investment of $50,000 in net working capital associated with inventory, and this working capital investment will be recovered when the project is shut down. Finally, assume that the firm's marginal tax rate is 34 percent.
a. What is the initial cash outlay associated with this project?
b. What are the annual net cash flows associated with this project for Year 1 through 9?
c. What is the terminal cash flow in Year 10 (that is, what is the free cash flow in Year 10 plus any additional cash flows associated with termination of the project)?
d. What is the project's NPV given a 10 percent required rate of return?
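A sketch of one conventional reading of this problem (assuming end-of-year cash flows, straight-line depreciation to zero, and recovery of the $50,000 working capital in Year 10):

```python
price, var_cost, units = 100, 40, 10_000
fixed, equipment, nwc, tax = 160_000, 1_000_000, 50_000, 0.34
dep = equipment / 10                      # simplified straight line to zero

initial_outlay = equipment + nwc          # part a: 1,050,000 at time 0
ebit = units * (price - var_cost) - fixed - dep
ocf = ebit * (1 - tax) + dep              # part b: 324,400 per year, Years 1-9
terminal = ocf + nwc                      # part c: Year 10 adds back working capital

r = 0.10
npv = -initial_outlay
for t in range(1, 10):
    npv += ocf / (1 + r) ** t
npv += terminal / (1 + r) ** 10           # part d: roughly +962,575 -> accept
```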
5. (Calculating cash flows-comprehensive problem) The C Corporation, a firm in the 34 percent marginal tax bracket with a 15 percent required rate of return or discount rate, is considering a new project. This project involves the introduction of a new product. This project is expected to last five years and then, because this is somewhat of a fad product, it will be terminated. Given the following information, determine the net cash flows associated with the project, the project's net present value, the profitability index, and the internal rate of return. Apply the appropriate decision criteria.
Cost of new plant and equipment: $198,000,000
Shipping and installation cost: $ 2,000,000
Unit Sales:
Year Units Sold
1 1,000,000
2 1,800,000
3 1,800,000
4 1,200,000
5 700,000
Sales price per unit: $800/unit in Years 1-4, $600/unit in Year 5
Variable cost per unit: $400/unit
Annual fixed costs: $10,000,000
Working capital requirements: There will be an initial working capital requirement of $2,000,000 just to get production started. For each year, the total investment in net working capital will equal 10 percent of the dollar value of sales for that year. Thus, the investment in working capital will increase during Years 1 through 3, then decrease in Year 4. Finally, all working capital is liquidated at the termination of the project at the end of Year 5.
The depreciation method: Use the simplified straight-line method over five years. It is assumed that the plant and equipment will have no salvage value after five years.
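The yearly operating cash flows implied by these figures can be sketched as follows (working-capital timing conventions vary between textbooks, so only the operating side is shown here):

```python
units = [1_000_000, 1_800_000, 1_800_000, 1_200_000, 700_000]
price = [800, 800, 800, 800, 600]
var_cost, fixed, tax = 400, 10_000_000, 0.34
dep = (198_000_000 + 2_000_000) / 5       # 40M per year, straight line to zero

ocfs = []
for u, p in zip(units, price):
    ebit = u * (p - var_cost) - fixed - dep
    ocfs.append(ebit * (1 - tax) + dep)   # add the non-cash depreciation back

# In $ millions: [271.0, 482.2, 482.2, 323.8, 99.4]
```

These operating flows then get combined with the working-capital changes and the initial outlay before discounting at the firm's 15 percent required return.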
Solution Preview
Please see the answers attached in the Excel file. All answers are on a separate ...
Solution Summary
The solution answers the five problems of varying complexity using Excel on the topics of payback and discounted payback period calculations. All formulas are provided in Excel cells. Step by step instructions are given as well, which are easy to follow.
Finance: Benson Designs' project; Rank Caradine Corp project's proposed risk
P. Determine the net present value (NPV) for the project.
b. Determine the internal rate of return (IRR) for the project.
c. Would you recommend that the firm accept or reject the project? Explain your answer.
P4. Caradine Corp., a media services firm with net earnings of $3,200,000 in the last year, is considering several projects:
Project Initial Investment Details
A $ 35,000 Replace existing office furnishings.
B 500,000 Purchase digital film-editing equipment for use with several existing accounts.
C 450,000 Develop proposal to bid for a $2,000,000 per year 10-year contract with the U.S. Navy, not now an account.
D 685,000 Purchase the exclusive rights to market a quality educational television program in syndication to local markets in the European Union, a part of the firm's existing business activities
The media services business is cyclical and highly competitive. The board of directors has asked you, as chief financial officer, to do the following:
a. Evaluate the risk of each proposed project and rank it low, medium, or high.
b. Comment on why you chose each ranking. | https://brainmass.com/business/finance/several-cash-flow-npv-problems-597396 | CC-MAIN-2018-47 | refinedweb | 1,306 | 62.48 |
BBC micro:bit
Scroll:bit Clock
Introduction
This was the first project I knocked up with the Pimoroni Scroll:bit. I like to make clocks with matrix displays. If I can find a way to display the time nicely, I know that the display is usable for displaying other things, like analog readings. It is the same LED layout and IS31FL3731 driver chip that is found on the Scroll pHAT HD but with an edge connector instead of a Pi GPIO header. There are libraries for MakeCode and MicroPython available from Pimoroni which can be used to do all sorts of cool things with the matrix. You can also write your own code to use the matrix. It costs £13.50. The charlieplexing of the LEDs means that it doesn't draw much power in use but still dazzles. The 119 pixels are in a 17x7 grid which is enough to make readable text, a decent-sized grid game, display a sensor reading, do animations etc. It's a really classy accessory.
In this project, I used the scroll:bit with a 4tronix Inline Breakout, a handy product that gives access to the micro:bit pins whilst giving you another edge to plug into the accessory. I connected a Hobbytronics DS1338 RTC (which works at 3V) to the I2C pins to read the time. I made a minor adjustment to code I'd previously written and, before I knew it, I had a working clock.
Programming
This is the very simple 3x7 font that I've used in the past. You can fit 4 digits and a colon across the scroll:bit using these digits. It's also enough for displaying analog or accelerometer readings.
The font is embedded in the code in the list called ds. The numbers are worked out from the binary patterns formed by the red and white cells in the columns. The pixels are turned on one at a time when the display is updated with no buffering. The display only updates when one of the 4 digits of the time needs to change but checks for this once a second.
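To see what those column values encode, the font table can be previewed off-device with plain Python (an illustrative snippet, not part of the project code; bit j of each column value is row j of the 3x7 glyph, so the on-device output may appear vertically mirrored):

```python
ds = [127, 65, 127, 66, 127, 64, 121, 73, 79, 73, 73, 127, 15, 8, 127,
      79, 73, 121, 127, 72, 120, 1, 1, 127, 127, 73, 127, 15, 9, 127]

def render(digit):
    # Three column values per digit; unpack each into 7 row bits
    cols = ds[digit * 3:digit * 3 + 3]
    return ["".join("#" if (v >> j) & 1 else "." for v in cols)
            for j in range(7)]

zero = render(0)   # ["###", "#.#", "#.#", "#.#", "#.#", "#.#", "###"]
```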
from microbit import *

class Matrix:
    def __init__(self):
        self.wreg(0x0A, 0, 1)
        self.wreg(0x0A, 1, 1)
        self.wreg(0, 0, 1)
        self.wreg(1, 0, 1)
        self.fill(0)
        # enable every LED in all 8 frames
        for f in range(8):
            for i in range(18):
                self.wreg(i, 0xff)
        self.wreg(0x06, 0, 1)

    def wreg(self, r, v, c=0):
        # select the function page (c=1) or the frame page (c=0), then write the register
        if c:
            i2c.write(0x74, b'\xFD\x0B')
        else:
            i2c.write(0x74, b'\xFD\x00')
        i2c.write(0x74, bytes([r, v]))

    def fill(self, value):
        i2c.write(0x74, b'\xFD\x00')
        for i in range(6):
            d = bytearray([0x24 + i * 24]) + bytearray([value] * 24)
            i2c.write(0x74, d, repeat=False)

    def set_xy(self, x, y, b):
        # map logical (x, y) to the scroll:bit's PWM register layout
        y = 6 - y
        if x > 8:
            x = x - 8
            y = 6 - (y + 8)
        else:
            x = 8 - x
        led = x * 16 + y
        self.wreg(0x24 + led, b)

    def set_led(self, led, b):
        self.set_xy(led % 17, led // 17, b)

    def set_dig(self, c, d):
        # draw digit d (0-9) at character position c (0-3) using the 3x7 font
        x = [0, 4, 10, 14][c]
        ds = [127, 65, 127, 66, 127, 64, 121, 73, 79, 73, 73, 127, 15, 8, 127,
              79, 73, 121, 127, 72, 120, 1, 1, 127, 127, 73, 127, 15, 9, 127]
        for i in range(3):
            v = ds[d * 3 + i]
            bts = [(v >> j & 1) for j in range(7)]
            for y in range(7):
                self.set_xy(x + i, y, bts[y] * 128)

def bcd2bin(value):
    return (value or 0) - 6 * ((value or 0) >> 4)

def get_time():
    # read minutes and hours from the DS1338 RTC at I2C address 0x68
    i2c.write(0x68, b'\x00')
    buf = i2c.read(0x68, 7)
    m = bcd2bin(buf[1])
    h = bcd2bin(buf[2])
    return m, h

a = Matrix()
lastm, lasth = get_time()
a.set_dig(0, lasth // 10)
a.set_dig(1, lasth % 10)
a.set_xy(8, 2, 128)   # colon
a.set_xy(8, 5, 128)
a.set_dig(2, lastm // 10)
a.set_dig(3, lastm % 10)
while True:
    mm, hh = get_time()
    if mm != lastm or hh != lasth:
        a.set_dig(0, hh // 10)
        a.set_dig(1, hh % 10)
        a.set_dig(2, mm // 10)
        a.set_dig(3, mm % 10)
        lastm = mm
        lasth = hh
    sleep(1000)
What I like about using the scroll:bit is not having to do any work to connect the display. Removing that connection makes for a really compact project. The inline breakout connector was really helpful here. Soldering the wires for the RTC would make it easier to hide it if this goes into an enclosure as a more permanent project. Adding a buzzer for an alarm and programming some interactions with the buzzer would be a nice way to finish the project.
The DS1338 turned out to be a good-value RTC that worked on the micro:bit with no fuss and seems to keep time pretty well for my purposes. | http://multiwingspan.co.uk/micro.php?page=scrollbit | CC-MAIN-2018-22 | refinedweb | 799 | 81.83 |
The Ransom of Red Chief Group Presentation by: Sahib Chandnani, Vibhav Joopelli, Viren Joopelli, Konan Mirza, Arun Sabapathy Plot Diagram Exposition: Characters: Inciting Incident: Rising Action: Climax: Falling Action: Resolution: Denouement: Character Breakdown Sam and Bill: Sam is the protagonist who tells the story. Sam is partners with Bill who are working together to get money through kidnapping. From what Sam dictates, we can see he is the head of the operation. He says, “I’ll be back some time this afternoon...You must keep the boy amused and quit till I return (Henry 7).” Sam is clever when making the plan for the operation, but they are both greedy. Against Bill’s wishes, he sets a fifteen hundred dollar ransom. They want to press out as much money as they can out of the operation. Sam’s clever scheme ultimately fails, their greed and lack of understanding of the character caused the ridding of the child to fail. Bill: Red Chief: Thematic Question O. Henry: Summary: Major Conflicts: Thematic Question Answer: Homework: Citations: The End
Thanks For Watching Sam Driscoll, Bill Driscoll, Johnny Dorset,
and Ebenezer Dorset.
Protagonist: Sam Driscoll
Antagonist: Johnny Dorset Setting: Summit, Alabama Background Information: Sam and Bill are two young boys trying
to make easy money so they kidnap Johnny. The inciting incident occurs when Bill and Sam kidnap Johnny.
Relation to Major Conflict:
- Without this incident, the major conflict (Red Chief's rambunctious insanity driving the kidnappers to madness) would not be possible, and the plot would be left unable to properly unfold. The havoc Johnny causes while with
them - the scalping of Bill, hitting him with a rock, etc.
Bill attempts to get rid of Johnny.
Sam delivers the ransom note. The climax of this story is when Ebenezer returns the ransom note back asking for his kid and 250 dollars on top of that. The falling action in this story is when Sam and Bill decide to accept the offer, and then return Johnny along with the money. The resolution is that Bill and Sam run away to prevent
Johnny from ever catching them. The denouement is that Sam and Bill's plan backfired and they were unsuccessful. Trying to make easy money Sam and Bill Driscoll kidnap
the son of Ebenezer Dorset, Johnny. After spending quite a while with Johnny, AKA "The Red Chief," they find him to be obnoxious and irritating. Bill, who can no longer tolerate this behavior, tries to get rid of him. Unsuccessful with that plan, they move on to giving the ransom note. This backfires when the counteroffer is for Sam to pay Ebenezer instead. Sam and Bill, who have had it with the "Red Chief," graciously accept the offer and return the boy. They then flee the city out of fear of the boy returning. Courier. "O. Henry - Biography & Works." Literature Collection. Art Branch Inc., n.d. Web. 12 Oct. 2012.
<>
Henry, O.. The ransom of Red Chief and other stories. New York: Gramercy Books, 1996. Print.
"O Henry - Biography and Works. Search Texts, Read Online. Discuss.
." The Literature Network: Online classic literature, poems, and quotes. Essays & Summaries. N.p., n.d. Web. 12 Oct. 2012. <>. Johnny is the child abducted by Bill and Sam. They completely misconceive the child based on his innocent looks. He was a child that would look to be a large sum for the kidnappers, due to his father being Ebenezer Dorset, one of the wealthiest men in the town. However, Johnny, also known as the Red Chief, uses a variety of resources to keep the kidnappers off balance. Johnny has excellent street smarts because of the orphan-like raising he had to endure with a wealthy yet detached father. He is also quite clever which is something foreign to the kidnappers in their profession. In times of desperation, is it justified to commit acts, such as violence, that oppose the norms which dictate modern society? - Man vs. Man: A major conflict present throughout the story occurs between the kidnappers and the Red Chief, in which Red Chief harasses them to the point where they are driven to a new stage of desperation. They are ultimately left with no other option but to pay Ebenezer to have the boy taken off their hands. "We'll take him home, pay the ransom and make our get-away" (page 3).
However, this conflict is made present by the original inciting conflict in which Red Chief is primarily kidnapped for a 2,000 dollar ransom. - Man vs. Man: Another conflict occurs between Ebenezer and the two desperate men. This is evident as they demand to be paid for the boy, but are outsmarted by the boy's cunning father."I hereby make you a counter-proposition, which I am inclined to believe you will accept. You bring Johnny home and pay me two hundred and fifty dollars in cash"- Ebenzer Dorset (page 2). During a difficult or desperate situation, one often ignores his moral virtues and chooses the "logical" or "easy" option, even if it may bring harm to someone else, physically or mentally. Through the story, however, one can conclude that even in these desperate times, one should not resort to violence or harm, for they may end up facing a predicament worse than the original. Your job tonight is to create a 6 word short story with the theme of desperation.
Good Luck!! 1862-1910
mother died at age 3 and father was heavy drinker
first studied writing with aunt
started humorous weekly The Rolling Stone, but was discontinued year later
was convicted of embezzlement, wrote first short stories in prison to raise money to support daughter
was known for surprise twist endings Buzzard feathers- the buzzard feathers signify the manifestation of death in a tangible form. “ … the boy was watching a pot of boiling coffee, with two buzzard tail feathers stuck in his red hair” (1). This leads into the emphasis of the boy representing death or more specifically the Grim Reaper because he wears these feathers upon his head.
Cave- the cave represents hell and therefore reinforces the idea that the Red Chief who resides there for most of the story is somehow related to the idea of death. “There was a fire burning behind the big rock at the entrance of the cave” (1). The fire burning at the entrance of the cave alludes directly to the description of hell that is provided in the Bible.
Red Chief- He represents the Grim Reaper. The buzzard feathers in his hair represent death and the cave that he resides in is described as a sort of hell. These combine with the persona of the Red Chief himself to symbolize the harbinger of death, the Grim Reaper. Symbols Ransom of Red Chief
Group Presentation
by konan mirza on 15 October 2012
GIS package for reading, writing, and converting between CRS formats.
Project description
PyCRSx is a modified clone of the PyCRS package to make it compatible on Python 3.5 and 3.6.
PyCRSx is a pure Python GIS package for reading, writing, and converting between various common coordinate reference system (CRS) string and data source formats.
Below is the description from the original PyCRS package.
Introduction

When I created PyCRS, I hoped that it would make reading, writing, and converting these CRS formats easier for Python users.

Status
Platforms
Python 2 and 3, all systems (Windows, Linux, and Mac).
Dependencies
Pure Python, no dependencies.
Installing it
PyCRSx is installed with pip from the commandline:
pip install pycrsx
It also works to just place the “pycrsx” package folder in an importable location like “PythonXX/Lib/site-packages”.
PyCRSx can also be installed via conda:
conda install -c mullenkamp pycrsx
Example Usage
Begin by importing the pycrsx module:
import pycrsx
Reading
The first point of action when dealing with a data source's crs is that you should be able to parse it correctly. In most situations this will mean reading the ESRI .prj file that accompanies a shapefile or some other file. PyCRS has a convenience function for doing that:
fromcrs = pycrsx.loader.from_file("path/to/shapefilename.prj")
The same function also supports reading the crs from GeoJSON files:
fromcrs = pycrsx.loader.from_file("path/to/geojsonfile.json")
If your crs is not defined in a file there are also functions for that. For instance if you know the url where the crs is defined you can do:
fromcrs = pycrsx.loader.from_url("")
Or if you are provided with the actual string representation of the crs, given by a web service for instance, you can load it using the appropriate function from the parser module or let PyCRS autodetect and load the crs type for you:
fromcrs = pycrsx.parser.from_unknown_text(somecrs_string)
Converting
Once you have read the crs of the original data source, you may want to convert it to some other crs format. A common reason for wanting this for instance, is if you want to reproject the coordinates of your spatial data. In Python this is typically done with the PyProj module which only takes proj4 strings, so you would have to convert your datasource’s crs to proj4:
fromcrs_proj4 = fromcrs.to_proj4()
You can then use PyCRS to define your target projection in the string format of your choice, before converting it to the proj4 format that PyProj expects:
tocrs = pycrsx.parser.from_esri_code(54030) # Robinson projection from esri code
tocrs_proj4 = tocrs.to_proj4()
With the source and target projections defined in the proj4 crs format, you are ready to transform your data coordinates with PyProj, which is not covered here.
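For readers unfamiliar with the proj4 format: it is just a flat string of +key=value tokens, so a toy parser (a hypothetical helper, not part of PyCRSx) fits in a few lines:

```python
def parse_proj4(text):
    """Split a proj4 string into a dict; bare flags map to True."""
    params = {}
    for token in text.split():
        if not token.startswith("+"):
            continue
        key, _, value = token[1:].partition("=")
        params[key] = value if value else True
    return params

robinson = parse_proj4("+proj=robin +lon_0=0 +datum=WGS84 +units=m +no_defs")
# robinson["proj"] == "robin"; robinson["no_defs"] is True
```

This flat key/value shape is what makes proj4 a convenient interchange target for the richer WKT formats.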
Writing
After you transform your data coordinates you may also wish to save the data back to file along with the new crs. With PyCRS you can do this in a variety of crs formats. For instance:
with open("shapefile.prj", "w") as writer:
    writer.write(tocrs.to_esri_wkt())
PyCRS also gives access to each crs element and parameter that make up a crs in the “elements” subpackage, so you could potentially also build a crs from scratch and then save it to a format of your choice. Inspect the parser submodule source code for inspiration on how to go about this.
More Information:
This tutorial only covered some basic examples. For the full list of functions and supported crs formats, check out the API Documentation.
License:
This code is free to share, use, reuse, and modify according to the MIT license, see license.txt
Credits:
- Karim Bahgat
- Micah Cochrain
- Wassname
Changes
0.1.3 (2016-06-25)
- Fixed various bugs
- Pip install fix for Mac and Linux
- Python 3 compatibility
0.1.2 (2015-08-05)
- First official release
Bigtop::ScriptHelp::Style - Factory for scripts' command line and standard in handlers
use Bigtop::ScriptHelp;
use Bigtop::ScriptHelp::Style;

my $style = ...
my $style_helper = Bigtop::ScriptHelp::Style->get_style( $style );

# pass this style as the first parameter to
#   Bigtop::ScriptHelp->get_big_default
#   Bigtop::ScriptHelp->augment_tree
This module factors command line argument and standard in handling out of scripts and ScriptHelp. It is a simple factory. Call
get_style with the name of a style module, to receive a style suitable for passing to
Bigtop::ScriptHelp->get_big_default and
Bigtop::ScriptHelp->augment_tree. All styles live in the Bigtop::ScriptHelp::Style:: namespace. The default style is 'Kickstart'.
Each style must implement
get_db_layout which is the only method called by
Bigtop::ScriptHelp methods. See below for what it receives and returns.
Factory method.
Parameter: style name. This must be the name of a module in the Bigtop::ScriptHelp::Style:: namespace.
Returns: an object of the named style which responds to
get_db_layout.
Trivial constructor used internally to make an object solely to provide dispatching to
get_db_layout. All Styles should subclass from this class, but they are welcome to override or augment
new.
All subclasses should live in the Bigtop::ScriptHelp::Style:: namespace and implement one method:
Parameters:
usually useless
all command line args joined by spaces. Note that flags should have already been consumed by some script.
a hash reference keyed by existing table name, the values are always 1
Returns: a single hash reference with these keys:
the tables hash from the passed in parameters with an extra key for each new table
an array reference of new table names
( Optional )
an array reference of new three way join tables
( Optional )
a hash reference keyed by table name storing an array reference of the table's new foreign keys
Phil Crow, <crow.phil@gmail.com>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.6 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/~philcrow/Bigtop/lib/Bigtop/ScriptHelp/Style.pm | CC-MAIN-2017-26 | refinedweb | 337 | 54.12 |
The definition of XML sparked significant interest in standardized storage and transfer of structured information. While at present, XML enjoys significant interest and investment from industry, it is only one example of a generic data structuring language. XML itself, for example, emerged merely as a simple subset of the Structured General Markup Language (SGML), a mid-1980s ISO standard.
A large number of alternative technologies for representing structured data have been proposed or are emerging. Many of these were proposed to address shortcomings of XML. Some are merely alternative syntaxes for XML's abstract data model ("infoset"). Others propose radically different data models, some of which are both simpler to handle and at the same time more expressive.
Some are designed for general use, while others have specific purposes. Each embodies different features and trade-offs, and, depending on the application, may present a better option than the currently very popular choice XML.
The aim of this brief survey is to collect a comprehensive list of
existing markup languages and similar generic structured file formats,
to review their differences and relative trade-offs, and to promote
discussion about next generation standards. I will be adding
information as I discover more technologies and proposals in this
area. Any comments, suggestions for improvements or recommendations
for other languages, formats, references, etc. are greatly
appreciated. Please email me at
Steven.Murdoch at cl.cam.ac.uk.
YAML is not a generalised markup language, but is instead a human readable serialisation format for scripting languages native data structures. Files may be serialised/deserialised in their entirety or may be processed as a stream. They are not designed for random access.
Collections A YAML file is made out of one or more documents, each of which contains exactly one collection. The collections supported are based on those provided to scripting languages like Perl and Python. These are "Sequence" — an ordered set of elements, and "Mapping" — an unordered association of unique keys to values. Elements of the collections (sequence members, mapping keys and values) may be other collections (so forming a generalised graph structure) or scalars.
Scalars Scalars are defined as a sequence of Unicode characters, however when parsed these may be given an explicit or implicit type from a type library. The type library may be a custom one linked to by the document or one of the standard YAML types, for which a shorthand is provided.
Alias Mechanism In order to serialise the data, the in-memory structure is flattened from a graph to a tree. The structure lost through this operation is replaced by an alias mechanism. When a collection or a scalar is defined it may be assigned an alias name; this alias name may then be used as a member of a collection, indicating that the referred scalar or collection is a member.
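A sketch of what this looks like in YAML syntax, where "&" names an anchor and "*" refers back to it (illustrative only):

```yaml
shared: &common          # anchor named "common" on this sequence
  - item one
  - item two
copy: *common            # alias: refers to the same sequence, not a textual copy
```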
Syntax The YAML syntax is optimised for human readability rather than parsing so the syntax rules are complex, however in practice the parser will hide this complication from an application author. In XML whether whitespace should be passed to the application from a parser is unclear, however in YAML there is a clear definition of which whitespace is for helping indicate to a human reader that there is structure but not part of the data, and significant whitespace that should be passed to the application program. So that documents may be word-wrapped for easy reading, single new-lines in scalars are ignored, in a similar way to the LaTeX paragraph formatting rules.
Also whitespace is used to define the structure, in a similar way to Python and Haskell using indentation to indicate blocks. This makes documents easy to read but exceptions to the parsing rules are needed when significant whitespace needs to be included in a multi-line scalar.
While Unicode text may be directly included as UTF-8, UTF-16 (LE or BE) or UTF-32 (LE or BE), escape codes are defined to allow non-ASCII unicode characters to be represented in an ASCII file.
Elements of a sequence are indicated by a leading "-"; mappings are indicated by the key, a ":", followed by a value. If the key is not a scalar then it must be preceded by a "? ". As mentioned before, scope is indicated by indentation. For example:

List of scalars

- List element 1
- List element 2

Mapping of scalars to scalars

Key 1: Value 1
Key 2: Value 2

Mapping of scalars to sequences

Key A:
 - Element A1
 - Element A2
Key B:
 - Element B1
The representation of sequences and mapping also has a compact syntax which is identical from the data model perspective but may aid readability in some circumstances. Sequences are represented as comma separated lists in square brackets, mappings are represented as key ":" value pairs, separated by commas enclosed by curly brackets. For example the above mapping of scalars to sequences may be represented as:
Key A: [Element A1, Element A2]
Key B: [Element B1]

or as:

{Key A: [Element A1, Element A2], Key B: [Element B1]}
A parser must accept either syntax, but a program producing YAML may use either. It is expected that libraries will output YAML in the way judged best for human readability.
Documents are separated by "---"
Other features The syntax permits comments to be embedded in documents
Structure is shown through indentation (one or more spaces). Sequence items are denoted by a dash, and key value pairs within a map are separated by a colon.
LMNL (pronounced "liminal") is a general-purpose text encoding language with a similar intended application domain to XML, but with a different data model. The key differences are that elements (known as ranges in LMNL) may overlap and that attributes (known as annotations) may have structure. Whereas XML is defined first as a syntax which then implies multiple data models, LMNL is defined by a data model, and one example syntax is given.
Layers A LMNL document is made out of a tree of layers. A document has exactly one base text layer which is a list of Unicode characters. The document also has zero or more range layers. Each range layer has exactly one base layer and its content is a sequence of ranges. A layer L may be the base to zero or more layers. These layers are the overlays of layer L. The overlays of a layer form a set and do not have any ordering.
Ranges Each range belonging to layer L spans over the contents of the base of layer L. If the base is a text layer then the ranges span over characters; if the base is a range layer then the ranges span over ranges. Ranges may have a name, or they may be anonymous. A range belongs to exactly one layer. A range is defined by a start index and a length; ranges may be of length 0, in which case they are points. A range has a list of zero or more annotations. The ranges belonging to a layer are assigned an order in the list of ranges, but in addition there is an implied order based on the start index.
Annotations Annotations are similar to XML attributes, with the notable difference that they may have structure. An annotation belongs to exactly one range or exactly one annotation. Each annotation must have a name, a sequence of annotations and a value. The value is a text layer and so can have layers and ranges as above.
Syntax Layers are declared using the processing instruction "[!layer name="layer_name" base="base_name"]", providing the layer name and the name of the base layer. Two layers are implicitly defined "#base" — representing the base text layer and "#default" representing the ranges that are not explicitly assigned to a layer.
Ranges are started by the "[range_name~layer_name}" tag and terminated by the "{range_name~layer_name]" tag. The range name may be empty, in which case the range is anonymous, and the layer_name may be empty (and the "~" omitted), in which case the range belongs to the "#default" layer. Ranges may overlap, so in cases where it is ambiguous which range is to be closed, both the start and end tag can be assigned an ID to remove the ambiguity. For example "[range_name~layer_name=key_id}".
Annotations use the same start and end tags as ranges and are inserted after the name of the range in the start tag, but before the "}". Alternatively, annotations may be inserted in the end tag, after the name but before the "]". Annotations may also be added to other annotations in a similar manner. Since annotations cannot overlap, the end tag for an annotation may be abbreviated to "{]".
Other features LMNL also permits namespaces and entities in a similar but simplified way to XML and allows comments to be inserted in documents. There may be multiple documents in the above syntax that could produce a given document model, so any LMNL document can be represented as a "reified LMNL" document, in which ranges are defined over the document, ranges, annotations, text, and so on. The resulting data model preserves ordering information which would be lost in the original LMNL data model. Finally the specification also defines two subsets of the data model. In the "Flat" subset there is precisely one text layer and at most one range layer. In the "Tree" subset each layer must have at most one overlay, in one layer ranges must not overlap, no annotation value may have any overlays and no annotation may have any annotations. This subset may be represented as a node tree, and if the top level layer contains one range which spans over the entire document content then this may be represented as a well formed XML document.
DL is a data (as opposed to document) representation format and so does not have an equivalent feature to XML attributes. It is designed to work well with streaming, so a document can have multiple entry points. Types are specified inside the file, rather than being inferred from an external schema. The type system is closely related to Java but is applicable to most modern languages' types.
Values Elements of a DL file are either values or structures. Values are atomic, and may be primitive (string, int, float, boolean), where the type is implied from the syntax, or constructed where the type is explicitly stated (dates, uris, etc...). The parameter to a constructed type is a value, which will be converted into the stated type.
Structures Structures are containers for values or other structures. There are three types — Maps, Vectors and Arrays. A Map is a mapping between a name and a value/structure, the value/structure may be absent. A vector is an ordered collection of values or structures. An array is a vector where each element is of the same type.
Syntax Primitive values are represented in the normal way for programming languages. Constructed types are built from the type name, followed by the value from which they are to be constructed, enclosed in brackets. For example:

uri("")
A map is a name (a sequence of characters, not in quotes), optionally followed by a structure or value.
A vector is enclosed in curly braces. Elements are separated by spaces, except where a name (implying an empty map) is followed by a value; here a semicolon must be inserted so that the name is not associated with the value. An array is a vector where all the elements are of the same type.
ONX is a more compact and easier-to-parse replacement for XML, intended for general use but designed with a view to being used in RPC.
Data Model A ONX stream is constructed from one or more anonymous information blocks. Each information block contains a sequence of nodes, each of which is either a Value Node or a Container Node. A value node has a name (which when converted to lowercase may not begin with "onx") and a sequence of zero or more strings. Strings are a sequence of arbitrary bytes. A container node contains a sequence of zero or more nodes, both value nodes and further container nodes.
Syntax An information block is started with ":onx{" and terminated with "}". Value nodes are started with ":", the node name, followed by "[", and are terminated by "]". The contents of a value node is a sequence of strings, enclosed in quotes. Inside a string, backslashes and quotation marks are escaped with a backslash. Null bytes are written as "\0" and other non-ASCII characters are written using their hexadecimal representation as "\xHH". Arbitrary bytes can be included in the file by preceding them with "\[HHHHHHHH]", where HHHHHHHH is the 32-bit hexadecimal representation of the length of the block in bytes. For the specified number of bytes, null bytes, quotation marks and backslashes are treated literally. The length of such a block is limited to 2^32 - 1 bytes, but more than one may be used in each string. Container nodes are started with ":", the node name, followed by "{", and are terminated by "}". Elements in a container node are separated with whitespace.
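As a sketch of the escaping rules just described, the following Python function (hypothetical; not taken from any ONX implementation) encodes an arbitrary byte string into ONX's quoted form, leaving the length-prefixed "\[HHHHHHHH]" literal-block form aside:

```python
# Encode a byte string as an ONX quoted string.
def onx_string(data: bytes) -> str:
    out = ['"']
    for b in data:
        c = chr(b)
        if c in ('\\', '"'):              # backslashes and quotes are escaped
            out.append('\\' + c)
        elif b == 0:                      # null bytes are written as \0
            out.append('\\0')
        elif b < 0x20 or b > 0x7E:        # other non-ASCII bytes as \xHH
            out.append('\\x%02X' % b)
        else:
            out.append(c)
    out.append('"')
    return ''.join(out)

print(onx_string(b'say "hi"\x00\xff'))   # "say \"hi\"\0\xFF"
```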
Whitespace outside of strings is ignored by the parser. For debugging reasons whitespace can be used to aid human readability, but in production use it will normally be omitted. Also for debugging purposes, the end of a node may be explicitly stated, i.e. "}onx" for an information set, "}name" for a container node and "]name" for a value node. This increases the size of the stream but allows the parser to detect invalid nesting and may make it easier for a human to understand the file.
Plists are designed for the serialisation of small amounts (less than a few hundred kilobytes) of data. They were part of the NeXTSTEP operating system and are now used on Mac OS X.
Data Model A Plist contains exactly one object, which may either be a container or a value. Containers contain further objects. The value types are CFString, representing a Unicode string, CFNumber, representing a numeric value, CFDate, representing a date, CFBoolean, representing a boolean, and CFData, representing a sequence of bytes. The container objects are CFArray, representing a sequence of objects, and CFDictionary, representing a mapping between keys and objects. Since there is no pointer type only trees may be created, not graphs.
Syntax Three syntaxes are available for Plists, all of which are transparent to the programmer since they are supported by libraries built into the operating system. The original syntax used by NeXT was ASCII based; while it supports archiving all types, CFDate and CFNumber values will be unarchived as CFStrings. Strings are enclosed in double quotes and are encoded in Unicode. Binary data is encoded in hexadecimal ASCII and enclosed in angle brackets. Arrays are enclosed in parentheses with elements delimited by commas. Dictionaries are enclosed in curly braces and name=value pairs are separated by semicolons. For single-word alphanumeric values the quotes may be omitted. Whitespace outside of strings, and whitespace inside binary data objects, is ignored.
The newer XML syntax has a root tag of <plist> containing exactly one object. String values are enclosed in a <string> tag, an integer in an <integer> tag, a floating point number in a <real> tag, and a date in a <date> tag; data is base-64 encoded in a <data> tag and booleans are either <true /> or <false />. An array is enclosed by an <array> tag and encloses further object tags. Dictionaries are enclosed by a <dict> tag, containing zero or more key-value pairs, each consisting of a <key> tag enclosing the key, followed by an object tag.
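Python's standard-library plistlib module produces exactly this XML syntax, which makes the tag mapping easy to check:

```python
import plistlib

# A small property list serialised to the XML plist syntax described above.
data = {"name": "example", "count": 3, "enabled": True, "items": ["a", "b"]}
xml = plistlib.dumps(data)        # returns the XML document as bytes
print(xml.decode())
```

The output shows <key>/<integer>/<true/>/<array>/<string> tags nested under a single <dict> inside the root <plist>.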
A binary representation also exists and may be used via the operating system libraries.
GODDAG is a data model designed for the representation of (possibly overlapping) ranges over text. The structure is defined using graph-theoretic terms, but essentially it is similar to a parse tree of SGML/XML, except that elements may have multiple parent nodes.
No syntax is defined for the model, however GODDAG is used as the basis of TexMecs.
This specification language is based on the standard mathematical concepts of sets, lists, relations, and functions. It has a clearly separated data model and an ASCII-based representation. Acceptable values for a document are defined by a type declaration system; these types may be included in the document to provide validity checking, or may be automatically inferred.
Scalar Types Documents are made out of collections, which contain further collections or scalars. The scalars defined are: "Void", which contains only the one value void; "Bool", which contains true and false; "Num", the set of whole numbers; "NatNum", the set of positive whole numbers including zero; "PosNum", the set of positive whole numbers excluding zero; "RatNum", the set of rational numbers; and "RealNum", the set of real numbers.
The parameterized type "NumRange(f,t)" is the set of all real numbers from f to t, including both f and t. "Option(t)" includes all the values of type t and void. "Enum(id1,...,idn)" is the type containing the listed ids.
Records The record collection type is defined as "Record(id1:t1,...,idn:tn)", meaning that in each record value, the value of type tx can be accessed through the name idx.
Sets and lists A set is a collection of zero or more elements. Each element in the collection is of the same type, specified on declaration. Duplicates are not permitted. In a set defined by "Set(t)" the order of elements is not preserved, in a set defined by "OrderedSet(t)" the order of elements is preserved. In a list defined by "List(t)", order is preserved and duplicates are permitted.
Functions and Maps Functions are defined by "Function(td,tr)", where td is the type of the domain and tr is the type of the range. As in the normal mathematical sense, the function must provide a mapping from every element of the domain. A map is defined similarly as "Map(td,tr)"; however, there is no restriction that all members of the domain are represented (i.e. it is a partial function). There are also corresponding "OrderedFunc(td,tr)" and "OrderedMap(td,tr)" types, in which the elements of the domain are ordered. For Function and OrderedFunc the type maps every element of the domain to exactly one element of the range; for Map and OrderedMap it maps each element of a subset of the domain to exactly one element of the range.
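The total/partial distinction can be sketched with ordinary dictionaries; the helper names below are illustrative, not part of the specification language:

```python
# "Function(td,tr)": every element of the domain must be mapped.
def is_function(mapping, domain):
    return set(mapping) == set(domain)

# "Map(td,tr)": only a subset of the domain need be mapped (a partial function).
def is_map(mapping, domain):
    return set(mapping) <= set(domain)

domain = {"a", "b", "c"}
total = {"a": 1, "b": 2, "c": 3}
partial = {"a": 1}

print(is_function(total, domain), is_map(partial, domain))   # True True
print(is_function(partial, domain))                          # False
```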
Syntactic shorthand A relation is defined by "Relation(id1:t1,...,idn:tn)" , and is a set of records, where in each record the value of type tx can be accessed through the name idx. Also Functions and Maps can be defined as mapping records to records using the "Function(idd1:td1,...,iddn:tdn -> idr1:tr1,...,idrn:trn)" syntax.
Sets as types Types are essentially the set of valid values, so it is logical that values that are sets may be used as types themselves. This permits an assertion that one scalar value of a record is a member of a set which is also a member of the record. This may be done simply by stating the name of the set as the type of the scalar. If the set in question is at a higher nesting level than the scalar to be defined, then the name of the set is prepended with as many "^" characters as necessary to break out of the nested levels.
tXML is a more concise alternative syntax to XML designed to be easier to read and write. Situations where tXML would be useful include configuration files, where mixed content is rare.
Syntax Tags are opened with "{" and closed with "}". Before the "{", the name of the tag is specified, followed by whitespace-separated name=value pairs for attributes. Attribute values only have to be enclosed in quotes if they include spaces or other special characters. Empty tags may be terminated with a semicolon. To differentiate tags from text, character data is enclosed in quotes and followed by a semicolon. Tags which contain only character data may omit the curly braces.
CDATA is started with "<[" and closed with "]>". Entity references are expanded within strings, as with XML. Processing instructions are normal tags prepended with "$PI", but shortcuts exist for the XML prologue and DOCTYPE declarations. Comments are as in Java: single-line comments start with "//" and multi-line comments start with "/*" and end with "*/".
BM (Better Markup) is a variation on the XML syntax. It is designed to be more compact and easier for a human to write.
$Date: 2005-07-28 14:02:55 +0100 (Thu, 28 Jul 2005) $.
Introduction
The Python language has many advantages when it comes to scripting. The power of Python can be felt once you start working with it and trying new things. It has modules that can be used to create scripts to automate tasks, work with files and folders, process images, control the keyboard and mouse, scrape the web, parse text with regular expressions, and more.
For those of you who are familiar with Kali Linux, many of its scripts are written in Python. There are many freeware tools on the market which can get the job done, so why script it with Python? The answer is simple: the authors of those tools work from a superset of requirements; they want to cover every scenario and add customisations to the tools. This ends up making the tools complicated and cumbersome. Moreover, we do not always have the option of using a tool, and that is where scripting comes in handy: we can script tasks to fit our own needs and requirements.
For security professionals, Python can be used for but not limited to:
- Penetration testing
- Information gathering
- Scripting tools
- Automating stuff
- Forensics
I will be discussing a few examples where Python can be used, along with code and comments. I have tried to comment the code heavily so that it becomes easy to understand and digest. The approach I have taken is to break each requirement into small steps and work out a flow of control for the code.
NOTE: I assume that the reader has basic knowledge of Python (syntax, data types, flow control, loops, functions, sockets, etc.), since the article will not discuss the basics and will drift away from the conventional "hello world" approach.
Some prerequisites before baking the code
Python can be installed from
NOTE: In case you have not installed Python, I would recommend Python3, although Python3 is backward compatible with Python2 this might require some troubleshooting at times. Moreover, you can install both Python 2 and 3 at the same time in a system.
Module name: os
This module provides a way of using operating system dependent functionality with Python. It can be beneficial when working with files (opening, reading, writing), windows paths (both absolute and relative), folders (creating folders, finding size and folder contents), check path validity, running OS commands like clearing the cmd screen, etc.
Example 1: What’s your name?
# Code
import os
os.name
# Output for Windows
'nt'
Example 2: Joining the paths
# Code
import os
os.path.join('harpreet', 'py_scripts')
# Output
'harpreet\\py_scripts'
Example 3: Where you are working right now?
# Code
import os
os.getcwd()
# Output
'C:\\Users\\harpreetsingh\\Desktop'
Example 4: Are you there or not?
# Checks the presence of a path/directory
# Code
import os
os.path.exists('C:\\Users\\harpreetsingh\\Desktop')
# Output
True
Example 5: Creating folders/directories
# Code
import os
os.makedirs('C:\\resources\\infosec\\harpreetsingh')
Example 6: Running the window commands in Python
# Clearing the cmd screen from inside Python
# Code
import os
os.system('cls')
# Getting the IP address
os.system('ipconfig')
Example 7: Opening a file
# Steps involved:
- Open
- read /write
- Close the file
# Step 1: Opening a file:
# Code
import os
myfile = open('C:\\Users\\harpreetsingh\\Desktop\\test.txt')
# Step 2(a): Reading contents of a file
import os
myfile = open('C:\\Users\\harpreetsingh\\Desktop\\test.txt')
myfile.read()
# Step 2(b): Writing contents to a file
# Code
import os
myfile = open('C:\\Users\\harpreetsingh\\Desktop\\test.txt', 'w')
myfile.write('I am written by Python in test file')
myfile.close()
# Step 3: Closing the file
# Code
import os
myfile = open('C:\\Users\\harpreetsingh\\Desktop\\test.txt')
myfile.close()
For further reading about os module:
Open cmd
Python
>>>import os
>>>help(os)
# Output
Module name: Webbrowser
This module is used to open links in the browser. We will be using the open function of this module in the examples below. open takes an optional new parameter: new=1 requests the URL in a new browser window and new=2 in a new tab (the equivalent flags when running python -m webbrowser from the command line are '-n' and '-t').
In no time it will fire up the default browser and open the link.
Example 1: Opening a URL in a web browser
# Code
Import webbrowser
webbrowser.open(‘’)
For further reading about web browser module
Open cmd
Python
>>>import webbrowser
>>>help(webbrowser)
# Output
Module name: Sys
With this module, command-line arguments can be passed to Python scripts:
sys.argv[0] -> the name of the script
sys.argv[1] -> the first argument passed by the user
Example 1: Printing the name of the script and user input
# Code
import sys
print('\n')
print("The name of the script is : %s" % sys.argv[0])
print('\n')
print("The input to the script is : %s" % sys.argv[1])
# Output
For further reading about sys module
Open cmd
Python
>>>import sys
>>>help(sys)
# Output
Module name: Urllib2
Urllib2 is used to fetch internet resources. We will be using it to fetch the response code for URLs; it can also be used to download files, parse and encode URLs, and more. (Note that in Python 3 this functionality is split across urllib.request, urllib.error and urllib.parse, which is what the scripts below use.)
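The parsing and encoding side of the library can be tried offline. A quick sketch using Python 3's urllib.parse (the URLs here are just examples):

```python
from urllib.parse import urlparse, urlencode, urljoin

# Split a URL into its components.
u = urlparse("http://sectools.org/tag/sniffers/?page=2")
print(u.scheme)    # http
print(u.netloc)    # sectools.org
print(u.path)      # /tag/sniffers/
print(u.query)     # page=2

# Encode a query string and join relative paths onto a base URL.
print(urlencode({"q": "port scanner", "page": 1}))      # q=port+scanner&page=1
print(urljoin("http://sectools.org/tag/", "sniffers"))  # http://sectools.org/tag/sniffers
```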
For further reading about urllib2 module
Open cmd
Python
>>>import urllib2
>>>help(urllib2)
# Output
Module Name: Socket
This module is used when we need to mix Python with networking. It can be used to create socket connections (TCP/UDP), binding the sockets to the host and port, closing the connection, promiscuous mode configurations and much more.
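As a minimal offline sketch of the socket calls used throughout this article, the snippet below creates a TCP socket and binds it to the loopback interface, letting the OS pick a free port:

```python
import socket

# Create a TCP socket and bind it to localhost; port 0 asks the OS
# to assign any free port, which getsockname() then reveals.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen(1)
host, port = s.getsockname()
print(host, port)
s.close()
```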
For further reading about socket module
Open cmd
Python
>>>import socket
>>>help(socket)
# Output
Ctypes:
It is a means of using C (low-level language) code within Python scripts. It will be used to decode the IP header in one of the below examples.
For further reading about ctypes module
Open cmd
Python
>>>import ctypes
>>>help(ctypes)
# Output
IP packet architecture
- Version: IP version used – 4 for IPv4 and 6 for IPv6.
- IHL – IP Header Length: No of 32-bit words forming the header
- Identification: a 16-bit number which is used to identify a packet uniquely.
- Flags: Used to control fragment permissions for that packet.
- TTL (Time to live): No of hops for which the packet may be routed. This number will get decremented by one each time the packet is routed through hops. This is used to avoid routing loops.
- Protocol: This field helps us to identify the type of packet
- 1 ICMP
- 6 TCP
- 17 UDP
- Header Checksum: Used for error detection which might have been introduced during transit.
- Source address: Source address from where the packet has originated
- Destination address: Address for which the packet is destined for.
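The layout above can be exercised offline before capturing live traffic: this sketch hand-builds a 20-byte IPv4 header with the struct module and decodes it again (all field values are made up for illustration):

```python
import struct
import socket

# Pack a 20-byte IPv4 header in network (big-endian) byte order.
header = struct.pack("!BBHHHBBH4s4s",
    (4 << 4) | 5,                     # version=4, IHL=5 (5 * 4 = 20 bytes)
    0,                                # type of service
    20,                               # total length (header only, no payload)
    0x1C46,                           # identification
    0x4000,                           # flags/fragment offset: don't fragment
    64,                               # TTL
    6,                                # protocol number: 6 = TCP
    0,                                # checksum (left as zero here)
    socket.inet_aton("192.168.0.1"),  # source address
    socket.inet_aton("10.0.0.7"))     # destination address

# Decode it back, the way a sniffer would.
ver_ihl, tos, total, ident, frag, ttl, proto, csum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", header)
print(ver_ihl >> 4, proto, socket.inet_ntoa(src), socket.inet_ntoa(dst))
# 4 6 192.168.0.1 10.0.0.7
```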
Let’s code stuff
Bursting the directories
The below-discussed code will take two inputs – URL/IP address and the directory list which you would like to test. It will test for the existence of the directories and will open the URI in the browser if it exists.
Code Flow
Code and comments
“”” *********** USAGE ****************
Create a file named dir.txt with the below data
/jmx-console
/images
/audio
/php-my-admin
/tag/sniffers/
Save the file dir.txt to a location and copy the location address (replace each '\' in the address with '\\'). Replace the address in the first line of the first for loop with this address.
Copy and paste the below code in an IDE and save it(dirb.py)
Command to run the code: python dirb.py (URL)
Example: python dirb.py sectools.org
“””
# Import required packages
import os, webbrowser, sys, urllib.request, urllib.error, urllib.parse

# Print the input of the user on the screen
print("The URL/IP entered is ")
print(str(sys.argv[1]))
print("\n")

url = str(sys.argv[1])
files = ['dir.txt']
url_list = []

for f in files:
    hellow = open(os.path.join('C:\\Users\\harpreetsingh\\Desktop\\py_scripts', f))
    directories = hellow.readlines()

# Iterate through the directory list and create a list with directories appended to the IP/URL
for i in directories:
    i = i.strip()
    url_list.append('http://' + url + i)

# Iterate through the items from the newly created list and check the response code
# In case the response code is 200, open the link in the browser
for url in url_list:
    print(url)
    try:
        connection = urllib.request.urlopen(url)
        print(connection.getcode())
        if connection.getcode() == 200:
            webbrowser.open(url)
        connection.close()
    except urllib.error.HTTPError as e:
        print(e.getcode())
# Output
# Response codes on the output screen
The program can be used to check for the existence of a directory or a set of directories on a URL. I was once assigned a task to check whether the JMX console was open on 160 public IP addresses; checking this manually would have been tedious and time-consuming. The list of directories can be downloaded from the internet, or the same list as used by DirBuster (a tool in Kali) can be used.
Packet capturing in windows
The script will capture the IP packet and will display the contents on the screen.
Code flow
Code and comments
“”” ************* USAGE ********************
Copy and save the below code (capture.py)
Command to run the script: python capture.py
Change the IP address in the code below to your own.
Run the command prompt as admin as it is required for getting into promiscuous mode
“””
# Import the modules required
import socket,os
# Set the IP address of the host, change this to the IP address of your windows PC
IP = “192.168.0.105”
# Define the socket protocol
socket_protocol = socket.IPPROTO_IP

# Initialize the socket
sniff = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)

# Bind the socket to the host IP and port
sniff.bind((IP, 0))

# Include the IP headers: IP_HDRINCL (Header Include)
sniff.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

# Turn on promiscuous mode
sniff.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

# Print what has been sniffed
print(sniff.recvfrom(65565))

# Turn off promiscuous mode
sniff.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)
# Output
The output is not human-readable and needs to be decoded. Moreover, we have only captured a single packet, which is not of much use.
Packet capturing and decoding
Here we will be capturing all the packets and will be decoding the contents of the header. We will decode and print protocol number, source address, and destination address.
Code and comments
“”” ****************** USAGE *********************
Copy the below code and save it(decoder.py)
Command to run the script: python decoder.py
Change the host IP address in the script below to that of your machine, and run the command prompt as admin for promiscuous-mode permissions.
“””
# Import the required modules
import socket, os, struct
from ctypes import *

# Set the IP address of the host; check the IP address of your Windows PC and replace it in the line below
IP_of_host = "192.168.0.105"

# Use ctypes to map the IP header
class IP(Structure):
    _fields_ = [
        ("Version", c_ubyte, 4),              # Version is 4 bits
        ("Ihl", c_ubyte, 4),                  # IHL is 4 bits
        ("TYPE_OF_SERVICE", c_ubyte, 8),      # Type of service is 8 bits
        ("Total_Length", c_ushort, 16),       # Total length is 16 bits
        ("Identification", c_ushort, 16),     # Identification is 16 bits
        ("Fragment_offset", c_ushort, 16),    # Fragment offset is 16 bits
        ("Time_to_live", c_ubyte, 8),         # TTL is 8 bits
        ("Protocol_number", c_ubyte, 8),      # Protocol number is 8 bits
        ("Header_checksum", c_ushort, 16),    # Header checksum is 16 bits
        ("Source_address", c_ulong, 32),      # Source address is 32 bits
        ("Destination_address", c_ulong, 32)  # Destination address is 32 bits
    ]

    """
    If you add up the above bits, they sum to 160, which when divided by 8 gives 20.
    This confirms that the first 20 bytes of the packet are the IP header. The size of
    the individual fields can be verified against the IP packet architecture as well.
    """

    def __new__(self, socket_buffer=None):
        return self.from_buffer_copy(socket_buffer)

    # Convert the source and destination addresses into a form a person can read.
    # We will display the protocol number, source address and destination address of the packets;
    # the rest can be ignored for now. You can dig deeper into displaying other header fields if you like.
    def __init__(self, socket_buffer=None):
        self.src_address = socket.inet_ntoa(struct.pack("<L", self.Source_address))
        self.dst_address = socket.inet_ntoa(struct.pack("<L", self.Destination_address))
        self.protocol = str(self.Protocol_number)

# Specify the socket protocol, same as in the previous example
socket_protocol = socket.IPPROTO_IP

# Define the socket
sniff = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket_protocol)

# Bind the socket to the host IP and port
sniff.bind((IP_of_host, 0))

# We need the IP headers as well: IP_HDRINCL (Header Include)
sniff.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

# Switch on promiscuous mode
sniff.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)

try:
    # A while True loop captures packets continuously
    while True:
        raw_buffer = sniff.recvfrom(65565)[0]
        # As discussed above, the first 20 bytes are the IP header
        ip_header = IP(raw_buffer[0:20])
        # Print the protocol number, source address and destination address
        print("Protocol %s %s --> %s" % (ip_header.protocol, ip_header.src_address, ip_header.dst_address))
# The interrupt stops the always-true while loop and turns off promiscuous mode
except KeyboardInterrupt:
    sniff.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF)
# Output
Run the script and open a website in a browser, and you can see the packets being captured. The packets can also be saved to a file by running the command below.
python decoder.py > test.txt
The test.txt file can then be analyzed later.
| https://resources.infosecinstitute.com/python-security-professionals-part-1/ | CC-MAIN-2019-13 | refinedweb | 2,232 | 63.39 |
CTabView Class
The new home for Visual Studio documentation is Visual Studio 2017 Documentation on docs.microsoft.com.
The latest version of this topic can be found at CTabView Class.
The
CTabView class simplifies the use of the tab control class ( CMFCTabCtrl) in applications that use MFC's document/view architecture.
Public Methods
Protected Methods
This class makes it easy to put a tabbed view into a document/view application.
CTabView is a CView-derived class that contains an embedded CMFCTabCtrl object. CTabView handles all messages required to support the CMFCTabCtrl object. Simply derive a class from CTabView and plug it into your application, then add CView-derived classes by using the AddView method. The tab control will display those views as tabs.
For example, you might have a document that can be represented in different ways: as a spreadsheet, a chart, an editable form, and so on. You can create individual views drawing the data as needed, insert them into your CTabView-derived object, and have them tabbed without any additional coding.
TabbedView Sample: MFC Tabbed View Application illustrates the usage of CTabView. The following example shows how CTabView is used in the TabbedView sample.
class CTabbedViewView : public CTabView
{
protected: // create from serialization only
	CTabbedViewView();
	DECLARE_DYNCREATE(CTabbedViewView)

// Attributes
public:
	CTabbedViewDoc* GetDocument();

// Operations
public:

// Overrides
public:
	virtual void OnDraw(CDC* pDC);  // overridden to draw this view
	virtual BOOL PreCreateWindow(CREATESTRUCT& cs);
protected:
	virtual BOOL OnPreparePrinting(CPrintInfo* pInfo);
	virtual void OnBeginPrinting(CDC* pDC, CPrintInfo* pInfo);
	virtual void OnEndPrinting(CDC* pDC, CPrintInfo* pInfo);

	BOOL IsScrollBar() const { return TRUE; }

// Implementation
public:
	virtual ~CTabbedViewView();
#ifdef _DEBUG
	virtual void AssertValid() const;
	virtual void Dump(CDumpContext& dc) const;
#endif

protected:
	afx_msg int OnCreate(LPCREATESTRUCT lpCreateStruct);
	afx_msg BOOL OnEraseBkgnd(CDC* pDC);
	afx_msg void OnContextMenu(CWnd*, CPoint point);
	afx_msg void OnFilePrintPreview();
	DECLARE_MESSAGE_MAP()
};
Header: afxTabView.h
CTabView::AddView
Adds a view to the tab control.
Parameters
[in]
pViewClass
A pointer to a runtime class of the inserted view.
[in]
strViewLabel
Specifies the tab's text.
[in]
iIndex
Specifies the zero-based position at which to insert the view. If the position is -1 the new tab is inserted at the end.
[in]
pContext
A pointer to the
CCreateContext of the view.
Return Value
A view index if this method succeeds. Otherwise, -1.
Remarks
Call this function to add a view to the tab control that is embedded in a frame.
Returns the index of the specified view in the tab control.
Parameters
[in]
hWndView
The handle of the view.
Return Value
The index of the view if it is found; otherwise, -1.
Remarks
Call this function to retrieve the index of a view that has a specified handle.
Returns a pointer to the currently active view.
Return Value
A valid pointer to the active view, or
NULL if there is no active view.
Remarks
Returns a reference to the tab control associated with the view.
Return Value
A reference to the tab control associated with the view.
CTabView::IsScrollBar
Called by the framework when creating a tab view to determine whether the tab view has a shared horizontal scroll bar.
Return Value
TRUE if the tab view should be created together with a shared scroll bar. Otherwise,
FALSE.
Remarks
The framework calls this method when a
CTabView object is being created.
Override the
IsScrollBar method in a
CTabView-derived class and return
TRUE if you want to create a view that has a shared horizontal scroll bar.
Called by the framework when the tab view is made active or inactive.
Parameters
[in]
view
A pointer to the view.
Remarks
The default implementation does nothing. Override this method in a
CTabView-derived class to process this notification.
Removes the view from the tab control.
Parameters
[in]
iTabNum
The index of the view to remove.
Return Value
The index of the removed view if this method succeeds. Otherwise -1.
Remarks
Makes a view active.
Parameters
[in]
iTabNum
The zero-based index of the tab view.
Return Value
TRUE if the specified view was made active,
FALSE if the view's index is invalid.
Remarks
For more information see CMFCTabCtrl::SetActiveTab.
Hierarchy Chart
Classes
CMFCTabCtrl
CView Class | https://msdn.microsoft.com/en-us/library/bb983705.aspx | CC-MAIN-2018-13 | refinedweb | 680 | 56.76 |
What is MFA or 2FA?
As per Wikipedia, multi-factor authentication (MFA) is an authentication method in which a user is granted access only after successfully presenting two or more pieces of evidence to an authentication mechanism. Two-factor authentication (2FA) is a subset of that: just a type of MFA where you only need two pieces of evidence.
Apart from having a secure and strong password, different developers use different approaches to enhance the security of their applications. Some applications send OTP (one-time-password) via SMS, while some applications send the OTP or a unique link via email. Some applications even call on your mobile. Some applications may ask to answer a security question.
Every approach has its pros and cons. Sending OTP via SMS or email incurs additional costs. Users have to remember the answer to questions in the security question approach.
TOTP or time-based one-time password approach has the advantage over both of these approaches. There is no additional cost involved and users do not have to remember anything. TOTP is widely used in 2FA. In this article, we will see how to implement TOTP in your Django application.
HOTP, meaning HMAC-based One-Time Password, is a one-time password algorithm that relies on two pieces of information. The first is the secret key, called the "seed", which is known only by the token and the server that validates submitted OTP codes. The second piece of information is the moving factor, which in HOTP is a counter.
hmac_sha1 = hmac.new(key, msg=counter, digestmod=hashlib.sha1).hexdigest()
hotp = truncate(hmac_sha1, length=6)
In the hmac.new() call, the key parameter is a bytes or bytearray object giving the secret key, msg is the counter, and digestmod is the hash algorithm to use, e.g. hashlib.sha1. (truncate above is pseudocode for the dynamic truncation step shown in Step 2 below.)
Time-based OTP (TOTP) is a variant of HOTP that uses the current time, sliced into fixed-length timesteps, as the moving factor instead of a counter.
The benefit of using TOTP instead of HOTP is that the TOTP passwords are short-lived, they only apply for a given amount of human time. HOTP passwords are potentially longer lived, they apply for an unknown amount of human time.
To generate TOTP, we start with a random key and then generate the base32-encoded token from that random key. We use the hmac.new() function to generate hmac object. But instead of counter, we pass the timestep (not timestamp) as msg parameter.
Since hmac object's hexdigest() returns a long string which is impossible to enter by the user when prompted, we are going to truncate the output to 6 digits.
Step 1: Generating a base32-encoded token
# length of OTP in digits
length = 6

# timestep or time-window for which the token is valid
step_in_seconds = 30

import base64

# random key
key = b"123123123djwkdhawjdk"
token = base64.b32encode(key)
print(token.decode("utf-8"))
This will print the base32-encoded token, GEZDGMJSGMYTEM3ENJ3WWZDIMF3WUZDL in our case, which we will use later on.
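As an aside, the encoding is reversible: an authenticator app decodes the token back to the raw key before computing any HMAC values; base32 is used only because its alphabet is safe to type and to read aloud. A quick round-trip check:

```python
import base64

token = b"GEZDGMJSGMYTEM3ENJ3WWZDIMF3WUZDL"
key = base64.b32decode(token)
print(key)  # b'123123123djwkdhawjdk' - the original random key
```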
Step 2: Generating hmac hexdigest
import hashlib
import hmac
import math
import time

t = math.floor(time.time() // step_in_seconds)
hmac_object = hmac.new(key, t.to_bytes(length=8, byteorder="big"), hashlib.sha1)
hmac_sha1 = hmac_object.hexdigest()

# truncate to 6 digits
offset = int(hmac_sha1[-1], 16)
binary = int(hmac_sha1[(offset * 2):((offset * 2) + 8)], 16) & 0x7fffffff
totp = str(binary)[-length:]
print(totp)
This will print the TOTP generated at that particular time (valid for 30 seconds).
Step 3: Verifying that the TOTP generated is correct.
There are multiple mobile applications available online which are used to set up 2FA and generate the TOTP. I am using Microsoft Authenticator.
Install and open the application and add an account. The application will ask you to scan the QR code. We will generate the QR code in the next steps. For now, click the 'Enter code manually' at the bottom. Provide the account name (anything) and then Secret Key. The Secret Key is the base32-encoded token generated in Step 1 above i.e. GEZDGMJSGMYTEM3ENJ3WWZDIMF3WUZDL.
After the account is added, a new OTP will be displayed by the application. Run your code (Step 2 above) and it should generate the same OTP.
Step 4: Generating QR code
Typing the secret key or token is a tedious task and prone to human errors. Hence all applications (like Google's Authenticator, Duo) provide the functionality to scan the QR code.
Every security application may accept tokens or security keys and account names in different formats. We are going to use Microsoft's Authenticator format below to generate the QR code.
# Generating QR Code
image_path = "/tmp/token_qr.png"

import qrcode

qr_string = "otpauth://totp/Example:my-email-address@gmail.com?secret=" + token.decode("utf-8") + "&issuer=Example&algorithm=SHA1&digits=6&period=30"
print(qr_string)

img = qrcode.make(qr_string)
img.save(image_path)
This code snippet will save the image in
/tmp/ directory. Now open the image, scan the QR code while adding the account in the Authenticator app and you will have a fresh OTP. You can again compare the OTP generated by your code with the one generated by the authenticator application.
If you are using Jupyter to write the code, you can display the image using the code given below.
from IPython.display import Image

Image(filename=image_path)
To enable 2FA for a user in your Django application, follow these steps:
- Generate the base32-encoded secret key or token. Either use a random string to generate the token or derive the string from the user's email address. Here, "derive" means using a secure method to generate a random-looking string with the email as the seed. Do not use the email or any login information directly.
- Generate a QR code image and display it to the user. Ask the user to scan the image using any authenticator application and add the account.
- Now on the login screen, in addition to username and password, ask for the 6 digit TOTP.
- When the user submits the login information, match the password with the one stored in the database. Django automatically takes care of authentication, password hashing, and login.
- If the username and password match, proceed to the next step; otherwise return to the login screen with an appropriate error message.
- Now pick the user's random string, i.e. the key (from the database), or derive it dynamically from the email as in the first step above, and generate the TOTP.
- Match the TOTP generated by the application with the one submitted by the user.
- If the match is successful, let the user login, else return to the login screen with the appropriate error message.
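To make steps 6 and 7 concrete, here is one way to sketch the server-side check in Python. The helper names generate_totp and verify_totp are made up for this sketch; the key, step and OTP length are the same as in Step 2 above:

```python
import hashlib
import hmac
import math
import time

def generate_totp(key, step_in_seconds=30, length=6):
    # Same algorithm as Step 2: HMAC-SHA1 over the current timestep,
    # followed by dynamic truncation down to the last `length` digits.
    t = math.floor(time.time() // step_in_seconds)
    digest = hmac.new(key, t.to_bytes(length=8, byteorder="big"), hashlib.sha1).hexdigest()
    offset = int(digest[-1], 16)
    binary = int(digest[(offset * 2):((offset * 2) + 8)], 16) & 0x7fffffff
    return str(binary)[-length:]

def verify_totp(key, submitted_otp):
    # Step 7: compare the freshly generated TOTP with the value the
    # user submitted, in constant time.
    return hmac.compare_digest(generate_totp(key), submitted_otp)
```

hmac.compare_digest is used instead of == so the comparison runs in constant time and does not leak information through timing differences. If the clock on the user's phone drifts, a common refinement is to also accept the codes for the previous and next timestep.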
The above code is available on Github.
Hi,
I am having two problems, but it seems I won't be able to fix them since I'm quite new to programming. I've been looking on the internet for tips but I can't find any solution which works.
I have a countdown timer on my LCD screen, but whenever the counter goes from 100 to 99 it prints 990, 980.
I have tried to clear my whole screen (which obviously doesn't work), and tried to find different solutions for like 3 hours, but I still don't have any.
I also want my code to add +1 to an integer when there's a pulse on one of my inputs.
But if I give a pulse of +5V to my DI6 (which is the right port in my program) it doesn't recognise it (I guess).
I hope one of you can help me with this, I would really appreciate it
#include <LiquidCrystal.h>
#include <SimpleTimer.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

int plusPin = 6;
int minPin = 7;
int I = 0;
int U = 0;
int T = 0;

void setup()
{
  pinMode(plusPin, INPUT);
  pinMode(minPin, INPUT);
  lcd.begin(16, 2);
  lcd.setCursor(0, 0);
  lcd.print("Verkeerstelling");
  delay(10000);
}

void loop()
{
  lcd.clear();
  if (plusPin, HIGH)
  {
    (I++);
  }
  if (minPin, HIGH)
  {
    (U++);
  }
  lcd.setCursor(0, 0);
  lcd.print("In: ");
  lcd.setCursor(4, 0);
  lcd.print(I);
  lcd.setCursor(8, 0);
  lcd.print("Uit: ");
  lcd.setCursor(13, 0);
  lcd.print(U);
  T = (I - U);
  lcd.setCursor(0, 1);
  lcd.print("Totaal: ");
  lcd.setCursor(8, 1);
  lcd.print(T);
  lcd.setCursor(13, 1);
  int n = 0;
  for (n = 110; n > 0; n--)
  {
    lcd.setCursor(13, 1);
    delay(1000);
    lcd.print(n);
  }
}
I will start with a pair of classes that might form a very simple model in any application:
public class Person
{
    public Person()
    {
        FirstName = string.Empty;
        LastName = string.Empty;
        DateCreated = DateTime.UtcNow;
        Qualifications = new HashSet<Qualification>();
    }

    public int PersonId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName
    {
        get { return string.Format("{0} {1}", FirstName, LastName); }
    }
    public DateTime DateCreated { get; set; }
    public DateTime BirthDate { get; set; }
    public ICollection<Qualification> Qualifications { get; set; }
}
public class Qualification
{
    public Qualification()
    {
        Awardees = new HashSet<Person>();
        Name = string.Empty;
    }

    public int QualificationId { get; set; }
    public string Name { get; set; }
    public DateTime WhenAwarded { get; set; }
    public virtual ICollection<Person> Awardees { get; set; }
}
The model features a
Person class and a
Qualification class, and a many-to-many relationship between them. The associations between the classes are declared as virtual, indicating that they are navigation properties in a Code First model intended for use with Entity Framework. Auto properties have largely been used because there is no logic to be executed when setting or getting the property values. The FullName property in the Person class is constructed from the FirstName and LastName properties. Default values for some properties are defined in class constructors. The first three new C# 6 features are all concerned with improving syntax when defining simple classes like these. I will look at how they affect the Person class.
1. Auto-property initializers
If you want to set a default value for a property in a class, you do this in the constructor in C# 5 or less. C# 6 introduces Auto-property initializers that enable you to assign a default value for a property as part of the property declaration. This is how using auto-property initializers changes the look of the
Person class:
public class Person
{
    public int PersonId { get; set; }
    public string FirstName { get; set; } = string.Empty;
    public string LastName { get; set; } = string.Empty;
    public string FullName
    {
        get { return string.Format("{0} {1}", FirstName, LastName); }
    }
    public DateTime DateCreated { get; set; } = DateTime.UtcNow;
    public DateTime BirthDate { get; set; }
    public ICollection<Qualification> Qualifications { get; set; } = new HashSet<Qualification>();
}
2. Expression bodied members
The
FullName property in the
Person class is readonly. There is no
set accessor for the property. The value is generated from an expression in the
get accessor. In C# 6, this expression can be used in the same way as the auto-property initializer, reducing the syntax down a bit:
public string FullName => string.Format("{0} {1}", FirstName, LastName);
The
=> sign does not denote a lambda expression in the same way as if it is being used with LINQ or any other delegate-based scenario. You can also use this approach to define the body of a method. For example, you might choose to expose a person's age as a method called
GetAge() like this:
public TimeSpan GetAge()
{
    return DateTime.Now - BirthDate;
}
You can now replace this with an expression body function like so:
public TimeSpan GetAge() => DateTime.Now - BirthDate;
3. Getter-only auto-properties
When you use auto implemented properties in C# 5 and lower, you must provide a
get and
set. If you want the property value to be immutable, you can use the
private accessor on the setter, but that has always been considered to be a bit of a hack. With C# 6, you can now omit the
set accessor to achieve true readonly auto implemented properties:
public DateTime BirthDate { get; }
4. String interpolation
Up until now, I have used
string.Format to generate the
FullName value in the
Person class. I've always found
string.Format a little clumsy to use, although I would prefer to use it rather than have a bunch of concatenation operators dotted around my code. A new feature named string interpolation provides a large improvement (in my opinion) in this area:
public string FullName => $"{FirstName} {LastName}";
Apart from this saving a number of key strokes, it should also minimise (if not remove) the possibility of
FormatExceptions being generated from inadvertently supplying too few values to the argument list.
5. Null-conditional operators
var people = new List<Person>();
var name = string.Empty;

if (people.FirstOrDefault() != null)
{
    name = people.First().FullName;
}
How many times have you written code that checks for null to prevent a possible
NullReferenceException when attempting to reference a member on an object? If the answer is "way too many", you will probably grow to love the null conditional operator, which is a simple question mark ? placed just before the member you want to reference:
var people = new List<Person>();
var name = people.FirstOrDefault()?.FullName;
If
people.FirstOrDefault() returns null, evaluation is terminated without any exceptions being raised, and null is assigned to the
name variable. Now that's a useful addition to the language! One thing to be aware of - if you assign a value type to an expression that uses a null conditional operator, the result will be a nullable type.
6. Using static
I'm not sure I've made my mind up about this one. But here it is. This feature provides a shortcut to accessing static methods on classes by importing the name of the class via a
using directive qualified by the
static keyword. For example, this is how you would do that with the
System.IO.File class, which includes quite a number of static utility methods:
using static System.IO.File;
Now you can use the static methods without having to use the class name:
var file = @"C:\test.txt";

if (!Exists(file))
{
    Create(file);
}
The
System.Text.RegularExpressions.Regex class also houses a range of static methods:
using static System.Text.RegularExpressions.Regex;
...
...
Replace("input", "pattern", "replacement");
I'm not entirely sure what problem this feature is intended to solve. Chances are that I will forget to use this at all. Utility method names are very similar -
Replace,
Create,
Delete, and if you are using
Regex and
File (as an example) in the same file, you will find yourself having to disambiguate between methods across classes that have the same name and signature (like
Replace).
7. Index initializers
This feature provides a new way to initialize index-based collections such as dictionaries. Previously, you might do this:
Dictionary<int, string> dict = new Dictionary<int, string>
{
    {1, "string1" },
    {2, "string2" },
    {3, "string3" }
};
Now you can do this:
Dictionary<int, string> dict = new Dictionary<int, string>
{
    [1] = "string1",
    [2] = "string2",
    [3] = "string3"
};
Apart from this blog post, I wonder if I'll ever get to use this new feature.
Summary
This post has reviewed most of the new features in C# 6 that the average ASP.NET developer is most likely to use or encounter. Personally, I can see myself using the first 5 features regularly, but I am yet to be convinced about the benefits of the last two, and will most likely forget that they are available. | https://www.mikesdotnetting.com/article/271/7-c-6-0-features-that-every-asp-net-developer-should-know-about | CC-MAIN-2020-40 | refinedweb | 1,142 | 53.92 |
We’ve already talked many times about the MVVM pattern on this blog and how to implement it in Windows Phone 8 apps using Caliburn Micro or in Universal Windows apps with Caliburn Micro and Prism. The Model-View-ViewModel pattern is very useful in XAML based projects, because the separation between logic and user interface gives many advantages in testability and maintainability. However, when we’re dealing with projects that target multiple platforms with a shared code base (like with Universal Windows apps), using the MVVM pattern is, more or less, a basic requirement: thanks to the separation between logic and user interface, it becomes easier to share a good amount of code with the different projects.
Xamarin Forms makes no exceptions: applying the MVVM pattern is the best way to create a common codebase that can be shared among the iOS, Android and Windows Phone projects. In this post, we’ll see how to create a simple project using one of the most popular toolkits out there: MVVM Light.
Why MVVM Light?
MVVM Light is, for sure, the most simple and flexible MVVM toolkit available right now. Its main advantage is simplicity: since its implementation is very basic, it can be easily ported from one platform to another. As you're going to see in this post, if you have already worked with MVVM Light on other platforms, you'll find yourself at home: except for some minor differences, the approach is exactly the same you would use in Windows Phone, Windows Store or WPF.
However, the MVVM Light simplicity is also its weak point: compared to frameworks like Caliburn Micro or Prism, it misses all the infrastructure that is often required when you have to deal with platform specific features, like navigation, application lifecycle, etc. Consequently, as we’re going to see in the next posts, you may have the need to extend MVVM Light, in order to solve platform specific scenarios. In the next post I will show you the implementation I did to solve these problem: for now, let’s just focus on implementing a basic Xamarin Forms project with MVVM Light. This knowledge, in fact, it’s important to understand the next posts I’m going to publish.
Creating the first MVVM project
The first step is the same we've seen in a previous post: creating a new Xamarin.Forms Blank App. After we've checked that we're using the latest Xamarin Forms version, we also need to install MVVM Light in the shared project: make sure to install the version that targets PCL libraries.
Now you’re ready to set up the infrastructure to create the application: let’s start by adding our first View and our first ViewModel. It’s not required, but to better design the application I prefer to create a folder for the views (called Views) and a folder for the ViewModels (called ViewModels). Then add a new Xamarin Forms page into the Views folder (called, for example, MainView.xaml) and a new simple class into the ViewModels folder (called, for example, MainViewModel.cs).
The ViewModelLocator
One of the basic requirements in an MVVM application is to find a way to connect a View to its ViewModel: we could simply create a new instance of the ViewModel and assign it as the data context of the page, but this way we won't be able to use techniques like dependency injection to resolve ViewModels and the required services at runtime (we'll talk again about this approach in the next post). The typical approach when you work with MVVM Light is to create a class called ViewModelLocator, which takes care of passing to every View the proper ViewModel. Here is what the ViewModelLocator class looks like in a Xamarin Forms project:
public class ViewModelLocator
{
    static ViewModelLocator()
    {
        ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
        SimpleIoc.Default.Register<MainViewModel>();
    }

    public MainViewModel Main
    {
        get { return ServiceLocator.Current.GetInstance<MainViewModel>(); }
    }
}
When the class is created, we register the default dependency injection provider we want to use: in this case, we use the native one offered by MVVM Light, called SimpleIoc. Then, we register in the container, by using the Register<T>() method, all the ViewModels and services we want to use. In this sample, we won’t use any service: we’ll see in the next post how to manage them; so we just register our MainViewModel in the container. The next step is to create a property that will be used by the View to request the proper ViewModel instance: we use again the dependency injection container, in this case to get a registered class, by using the GetInstance<T>() method (where T is the object’s type we need).
Now we can use the ViewModelLocator to assign a ViewModel to its View: in our case, the MainView should be connected to the MainViewModel. In a Windows Phone app, this goal is typically achieved by declaring the ViewModelLocator as a global resource in the App class and then, in the XAML, assigning the proper property (in this case, Main) to the DataContext property of the page. This way, the ViewModel will be assigned as DataContext of the entire page and every nested control will be able to access the commands and properties that are exposed by the ViewModel.
However, this approach doesn't work in Xamarin Forms, since we don't have the concept of global resources: we don't have an App.xaml file where we can declare resources that are shared across every page of the application. The easiest way to solve this problem is to declare the ViewModelLocator as a static property of the App class in the Xamarin Forms shared project, like in the following sample:
public class App : Application
{
    public App()
    {
        this.MainPage = new MainView();
    }

    private static ViewModelLocator _locator;

    public static ViewModelLocator Locator
    {
        get { return _locator ?? (_locator = new ViewModelLocator()); }
    }
}
This way, you’ll be able to connect the ViewModel to the View by using this static property in the code behind file of the page (in our case, the file MainView.xaml.cs):
public partial class MainView
{
    public MainView()
    {
        InitializeComponent();
        this.BindingContext = App.Locator.Main;
    }
}
You can notice one of the most important differences between the XAML in Windows Phone and the XAML in Xamarin Forms: the DataContext property is called BindingContext. However, its purpose is exactly the same: define the context of a control.
Define the ViewModel
Creating a ViewModel it’s easy if you’ve already worked with MVVM Light, since the basic concepts are exactly the same. Let’s say that we want to create a simple applications where the user can insert his name: by pressing a button, the page will display a message to say hello to the user. Here is how the ViewModel to manage this scenario looks like:
public class MainViewModel : ViewModelBase
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            Set(ref _name, value);
            ShowMessageCommand.RaiseCanExecuteChanged();
        }
    }

    private string _message;

    public string Message
    {
        get { return _message; }
        set { Set(ref _message, value); }
    }

    private RelayCommand _showMessageCommand;

    public RelayCommand ShowMessageCommand
    {
        get
        {
            if (_showMessageCommand == null)
            {
                _showMessageCommand = new RelayCommand(() =>
                {
                    Message = string.Format("Hello {0}", Name);
                },
                () => !string.IsNullOrEmpty(Name));
            }

            return _showMessageCommand;
        }
    }
}
You can see, in action, all the standard features of a ViewModel created using MVVM Light as a toolkit:
- The ViewModel inherits from the ViewModelBase class, which gives you some helpers to properly support the INotifyPropertyChanged interface that is required to notify the controls in the View when the properties that are connected through binding are changed.
- Every property isn’t defined with the standard get – set approach but, when the value of the property changes (in the set method), we call the Set() method offered by MVVM Light which, other than just assigning the value to the property, takes care of dispatching the notification to the controls in the View.
- When you work with the MVVM pattern, you can't react to user's actions using event handlers, since they have a strict dependency with code behind: you can't declare an event handler inside another class. The solution is to use commands, which are a way to express actions with a property that can be connected to the View using binding. MVVM Light offers a class that makes this scenario easier to implement, called RelayCommand. When you create a RelayCommand object, you need to set: 1) the action to perform (in our case, we define the message to display to the user) 2) optionally, the condition that needs to be satisfied for the command to be activated (in our case, the user will be able to invoke the command only if the property called Name isn't empty). If the condition isn't met, the control will be automatically disabled. In our sample, if the user didn't insert his name in the box, the button will be disabled.
- When the value of the Name property changes, we also call the RaiseCanExecuteChanged() method offered by the RelayCommand we've just defined: this way, every time the Name property changes, we tell the command to re-evaluate its status, since it could have changed.
The View
The following code, instead, show the Xamarin Forms XAML page that is connected to the ViewModel we’ve previously seen:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml">
    <StackLayout Padding="12, 0, 12, 0">
        <Label Text="Insert your name:" />
        <Entry Text="{Binding Path=Name, Mode=TwoWay}" />
        <Button Command="{Binding Path=ShowMessageCommand}" Text="Show message" />
        <Label Text="{Binding Path=Message}" FontSize="30" />
    </StackLayout>
</ContentPage>
It’s a simple form, made by a text area where the user can insert his name and a button that, when it’s pressed, displays the message using a label. You can notice some difference with the standard XAML available in Windows Phone:
- The StackPanel control, which is able to display the nested controls one below the other, is called StackLayout.
- To define the distance of the control from the screen’s border, we use the Paddingproperty instead of the Margin one.
- The TextBlock control, used to display a text to the user, is called Label.
- The TextBox control, used to receive the input from the user, is called Entry.
- The content of the button (in this case, a text) is set using the Text property, while in standard XAML it is called Content and also accepts more complex XAML layouts.
Except for these differences in the XAML definition, we’re using standard binding to connect the controls with the properties defined in the ViewModel:
- The Entry control has a property called Text, which contains the name inserted by the user: we connect it to the Name property of the ViewModel, using the two-way binding.
- The Button control has a property called Command, which is connected to the ShowMessageCommand we've defined in the ViewModel. This way, the button will be enabled only if the Name property isn't empty; if it's enabled, by pressing it we'll display the hello message to the user.
- The hello message is stored into the Message property of the ViewModel, which is connected using binding to the last Label control in the page.
Wrapping up
In this post we’ve seen the basic concepts of using MVVM Light in a Xamarin Forms: except for some differences (like the ViewModelLocator usage), the approach should be very familiar to any Windows Phone developer that has already worked with the MVVM pattern and the MVVM Light toolkit. In the next posts we’ll take a look at how the dependency injection approach works in Xamarin Forms and how we can leverage it in a MVVM Light project. As usual, you can find the sample code used in this post on GitHub at
Writing Modular JavaScript — Pt 1
This is part 1 of a series on writing modular JavaScript applications. In part 2, we will actually build a simple modular application together. Finally, in part 3 we will use Gulp, the Node task runner, to prepare our files for deployment.
Introduction
If you have so much as glanced at a computer programming language, you have surely heard the mantra: Don’t Repeat Yourself. Basically, if you find yourself writing the same snippet of code more than once, you are better off sticking that code inside a function instead. In this post, I will show you how we can extend that mantra beyond its use within a single file. We’ll apply the concept at the application level — within a single project and even across projects — by breaking monolithic applications into smaller, self-contained, reusable modules.
Maybe you want to use some previously-written utility in your new application — instead of rewriting the whole thing, you can just copy your file into your new project and link to it with a
<script> tag. The time savings alone makes this a worthwhile endeavor.
But there’s another, possibly greater benefit to modular design: it makes collaborating with other people a whole lot easier. When you are working with a team of developers, where each of you is creating a part of a larger application, you will eventually need to merge all of those separate parts together. If you’ve adopted a modular design structure in advance, joining everything up at the end is trivially easy. Develop modular habits today and you’ll be the hero of your team tomorrow.
Finally, modular design patterns make it much easier to read your code. You should personally care about this because you (or others — see previous paragraph) will eventually have to make sense of code you previously wrote. The fewer lines of code, the quicker you and others can make sense of it. A module is by definition a smaller, focused part of some larger whole.
This article is aimed at beginning JavaScript developers who already have some experience with the language, but who may not yet know how to structure code into modules, or why the modular design pattern is valuable.
I assume that you have basic familiarity with client-side / front-end JavaScript. My examples use a small amount of jQuery, but no other libraries or frameworks are required to get started. I will mention IIFEs and closures, but not go into too much detail about them here.
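For readers who haven't met them, here is the pattern in miniature: an IIFE (immediately invoked function expression) that runs once, keeps its state private in a closure, and returns only a public API. The counter example is made up for illustration:

```javascript
// A tiny module: the surrounding function runs immediately (an IIFE),
// so `count` is private - only the returned object is visible outside.
var counter = (function () {
  var count = 0;

  function increment() {
    count += 1;
    return count;
  }

  function reset() {
    count = 0;
  }

  // Everything we want to expose goes on the returned object.
  return {
    increment: increment,
    reset: reset
  };
})();
```

Calling counter.increment() returns 1, then 2, while counter.count is undefined because the variable never escapes the closure. Each module file in /src/js can follow this same shape.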
Application Structure
A modular application begins with a solid file/folder structure. There are many ways to organize a project’s files. I tend to use one of two ways: either by type of file (css, js, assets, etc.) or by application feature (header, landing, login, about, footer, etc). On large projects, organizing by feature is almost a requirement. But we’re going to use the simpler by type system for this post.
|— /dist
| |
| |— /js
| |— /css
| |— /assets
|
|— /src
| |
| |— /js
| | |
| | | app.js
| | | module1.js
| | | module2.js
| | | module3.js
| |
| |— /css
| | |
| | | style.css
| |
| |— /assets
| |
| | image.jpg
| | etc.jpg
|
|— /node_modules
|
| .gitignore
| gulpfile.js
| package.json
| index.html
Briefly:
/dist - ultimately our production-ready files will live here. Gulp will create them later - we don't need to do anything here.
/src - we'll write all our in-development code here (css, JavaScript, etc). Later, Gulp will process these files to produce the files in /dist. The module file names used above are just examples - you should definitely use more meaningful file names!
/node_modules - Node for front-end apps? Yes! We will use Node and NPM to install and run Gulp tasks.
.gitignore - files and folders specified in this file will be ignored (not tracked) by Git. They also won't be pushed to GitHub. We'll be ignoring the node_modules/ folder - no reason to track or push all those 3rd-party packages.
gulpfile.js - we'll write our Gulp concatenation, transpilation and minification tasks in here.
index.html - during development, we'll itemize each script separately. Later, we will just link to a single production-ready JavaScript file.
package.json - Created by NPM. Among other things, this file catalogs all the packages our application depends on. Ours will include dependencies for Gulp and various Gulp plugins.
The Nitty and the Gritty
Let’s begin by looking at the structure of our
index.html. During development, we'll simply link to each of the JavaScript files in our
/src/js folder. So our index.html might look like this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Nice Boilerplate</title>
</head>
<body>
<!-- =============== vendor javascript ================ -->
<script src=""></script>
<!-- ================ our javascript ================== -->
<script src="src/js/module1.js"></script>
<script src="src/js/module2.js"></script>
<script src="src/js/module3.js"></script>
<script src="src/js/app.js" ></script>
</body>
</html>
Order is important! Our modules must load first so that their public methods are available to be called. That statement will make more sense as we progress. For now, know that
app.js must load last because we're going to use it to call all our other modules' public methods. Other than that, it's just a normal
index.html file. Scaffold your application as you would normally.
Things start to look a little different when we get to our
app.js file:
$(document).ready(function () {
Module1.init();
Module2.init();
Module3.init();
});
That’s it! If you’re used to seeing/writing volumes of code inside of
$(document).ready(), the above may look a little strange. Our
app.js has only one responsibility: call our separate modules'
.init() methods. All of the actual application logic resides in those other modules. Each time you add a new module to your application, you add a call to its
.init() method here. Easy and organized.
Because we use
app.js to bootstrap our other modules, it now makes sense that
index.html has to load those modules first. If the modules were loaded after
app.js, calling their
.init() methods would fail.
We will write each module in a separate file to promote reuse. And we’ll adopt the style known as the revealing module pattern. The “revealing” qualifier hints at the module’s internal structure: some features are private, others are revealed publicly.
Here’s an example:
var Module1 = (function() {
'use strict';
// placeholder for cached DOM elements
var DOM = {};
/* =================== private methods ================= */
// cache DOM elements
function cacheDom() {
DOM.$someElement = $('#some-element');
}
// bind events
function bindEvents() {
DOM.$someElement.click(handleClick);
}
// handle click events
function handleClick(e) {
render(); // etc
}
// render DOM
function render() {
DOM.$someElement
.html('<p>Yeah!</p>');
}
/* =================== public methods ================== */
// main init method
function init() {
cacheDom();
bindEvents();
}
/* =============== export public methods =============== */
return {
init: init
};
}());
We begin by declaring a variable (
Module1) and assigning a function to it. All of the module’s code goes inside this function.
Note that the function is both wrapped in parentheses and followed by another set of parentheses ():
var Module1 = (function () {
// all module logic goes here
}());
The parentheses
() at the end causes the function to be interpreted and evaluated as an expression. This is known as an IIFE, or Immediately Invoked Function Expression. The IIFE means we don't call the function separately — the function is called when the JavaScript engine assigns it to our variable. And that happens as soon as the module loads. The wrapping
() are not technically required, but are used by convention to let other programmers know that this function is an IIFE.
In addition to “auto-loading” our module, the function also creates its own local scope for all the variables and functions within it. Anything declared inside a function is scoped to the function — it can’t be called or referenced outside of the function (except when it can! We’ll get to this shortly). Isolating scope this way also prevents adding all kinds of variables and functions to the global scope.
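To see that scope isolation in action, here is a small sketch. The variable name `secret` and the method name `getSecret` are made up for illustration:

```javascript
var Module = (function () {
    'use strict';

    // scoped to the IIFE - invisible outside of it
    var secret = 42;

    // expose a single public accessor
    return {
        getSecret: function () { return secret; }
    };
}());

console.log(typeof secret);      // "undefined" - nothing leaked into global scope
console.log(Module.getSecret()); // 42 - reachable only through the public method
```

Only `Module` itself ever touches the global namespace; everything else stays locked inside the function.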
Within the IIFE, our example module is laid out in four sections: variable declarations, private methods, public methods, and the export of public methods.
At the very beginning we declare all required module-scope variables. Standard stuff you already do — no big deal. This example only declares a single variable, an empty object that will cache a DOM element.
Next we define a few private methods. The example methods are commonly found in many front-end applications:
cacheDom(),
bindEvents(), and
render(). They do exactly what you expect them to do. Additional private methods might handle form submission, AJAX requests, manipulating JSON responses, etc. What’s important to note is that these methods are private — we don't need or want them to be available outside of the module they're defined in. And they're not, because they're encapsulated within our IIFE. For example, there's no reason for any module's
bindEvents() function to be available globally — it's irrelevant to any other module, so we keep it private.
Also note that multiple modules can have the same private functions. They will never collide with each other because they are restricted to each module’s local function scope. Beyond that, each module can have the same public methods as well, since each of those is namespaced to its own module. For example, you might have three modules, each with the same public method:
modals.init(),
greeting.init(),
login.init(). Those identically-named methods will never collide with each other because they're all called on unique modules. Very courteous.
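As a quick sketch of that courtesy - the `greeting` and `farewell` modules below are hypothetical, and each one defines its own private `render()` without any conflict:

```javascript
var greeting = (function () {
    'use strict';
    // private - a function with this same name also exists in `farewell`
    function render(name) {
        return 'Hello, ' + name + '!';
    }
    function init(name) { return render(name); }
    return { init: init };
}());

var farewell = (function () {
    'use strict';
    // same private name, different module - no collision
    function render(name) {
        return 'Bye, ' + name + '.';
    }
    function init(name) { return render(name); }
    return { init: init };
}());

console.log(greeting.init('Ada')); // "Hello, Ada!"
console.log(farewell.init('Ada')); // "Bye, Ada."
```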
In the next section we define our public methods. In this case we only have one:
init(). But you could have any number of them. Notice that
init() does only one thing: make calls to our private methods. When called,
init() closes over the private module-scoped variables and functions, making them available outside the scope they were declared in (yay, closure).
You may ask, “what makes the
init() function public? — it looks just like the other private functions above it." Good question — now go to the next paragraph!
The final section returns an object literal. This is what makes the revealing module pattern so cool. What we’re returning is essentially a set of “links” to any private methods that we want to make public outside our module. This allows us to separate the private and public parts of our modules, and — to answer the preceding paragraph’s question — is what makes
init() a public method.
Remember that an object’s properties are
{ key : value } pairs:
return {
init : init
};
In the above, the left-hand ‘key’ is the name of the method we’ll call outside our module. The right-hand ‘value’ refers to the name of the function defined inside the module. Although it looks strange to see
init : init, it's common practice to use the same name externally that we use internally. It's also one less name you have to remember.
Bringing Everything Together
By returning public methods via the object literal, we “attach” them to the variable that we declared at the beginning:
Module1 gets the
init() method. And because
Module1 was declared globally, it and its public methods are available in the global namespace. Once a module exposes its public methods globally, any other module can call those methods. So we can call
Module1.init() anywhere we like — which is exactly what we did earlier in
app.js.
Module1 might have additional public methods — those can be called anywhere in the application as well.
See how it’s all starting to fit together? And how, in a team setting, each team member can work independently and then publish their module’s public methods, adding any necessary calls to their module’s public method(s) in
app.js?
I typically limit each module to a single unique application feature or a single unique application utility. An application feature might have many private methods and not need anything other than a single
init() method exposed publicly. In contrast, a utility module might have few (or zero) private methods and instead expose all its methods publicly. And utility modules may not need
init() methods at all.
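A utility module of that second kind might look like the following sketch - the module name and methods are made up, and note there is no `init()` at all, just publicly revealed helpers:

```javascript
var formatUtils = (function () {
    'use strict';

    function capitalize(str) {
        return str.charAt(0).toUpperCase() + str.slice(1);
    }

    function truncate(str, len) {
        return str.length > len ? str.slice(0, len) + '…' : str;
    }

    // every method is public; nothing to initialize
    return {
        capitalize: capitalize,
        truncate: truncate
    };
}());

console.log(formatUtils.capitalize('modules'));             // "Modules"
console.log(formatUtils.truncate('modular javascript', 7)); // "modular…"
```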
Summarizing
We break our application logic into chunks based on single feature or utility. Each team member composes their module(s) as an IIFE, consisting of all the business logic needed by the module. Each module reveals specific public methods, which can be called elsewhere in the application. Our main
app.js bootstraps those modules by calling each of their public
init() methods. Other project team members recognize your brilliance and buy you mint juleps.
Up Next: Let’s Build a Modular App!
Now that we understand the principles of basic modular application design, we’ll put them to use building a simple web application in part 2.
Details
- Type:
Bug
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: Misc Library
- Labels:None
- Environment:
Actor, Actors
Description
According to the documentation, receiveWithin should block at most msec milliseconds
if no message matches any of the cases of the partial function passed as argument.
This is not correct if the function is not defined on TIMEOUT and if the msec argument
is > 0.
The function
def main(args: Array[String]){ Actor.receiveWithin(0) { case str: String => println(str) } }
throws an "unhandled timeout" RuntimeException, however
def main(args: Array[String]){ Actor.receiveWithin(1) { case str: String => println(str) } }
blocks indefinitely. The TIMEOUT object is stored in the mailbox after the timeout, but it remains there as it is not consumed.
Actually a new TIMEOUT object is added to the mailbox after each additional msec milliseconds.
The function receiveWithin defines the local function receiveTimeout, which checks whether the function is defined at TIMEOUT, but this function is only called if msec == 0L.
Dominik
Activity
Replying to [comment:1 extempore]:
> This sounds like a duplicate of
SI-3799, but not sure:
> perhaps submitter could see if that sounds true and if so,
> close this as a duplicate.
SI-3799 reports, that messages may be lost. In my example no message is lost, all messages remain in the actor's mailbox, but receiveWithin does not terminate after the timeout. Actually, the TIMEOUT message can be found in the mailbox. Note that this behavior changed from 2.7.7 to 2.8.0.
=> no duplicate.
This sounds like a duplicate of
SI-3799, but not sure: perhaps submitter could see if that sounds true and if so, close this as a duplicate.
{-# LANGUAGE CPP
           , BangPatterns
           , DeriveDataTypeable
           , NoImplicitPrelude
           , UnicodeSyntax
  #-}

--------------------------------------------------------------------------------
-- |
-- Module : Control.Concurrent.RLock
-- ( ... )
--------------------------------------------------------------------------------

module Control.Concurrent.RLock
    ( RLock
      -- * Creating reentrant locks
    , new
    , newAcquired
      -- * Locking and unlocking
    , acquire
    , tryAcquire
    , release
      -- * Convenience functions
    , with
    , tryWith
    , wait
      -- * Querying reentrant locks
    , State
    , state
    ) where

--------------------------------------------------------------------------------
-- Imports
--------------------------------------------------------------------------------

-- from base:
import Control.Applicative     ( liftA2 )
import Control.Concurrent      ( ThreadId, myThreadId )
import Control.Concurrent.MVar ( MVar, newMVar, takeMVar, readMVar, putMVar )
import Control.Exception       ( bracket_, onException )
import Control.Monad           ( Monad, return, (>>) )
import Data.Bool               ( Bool(False, True), otherwise )
import Data.Eq                 ( Eq )
import Data.Function           ( ($) )
import Data.Functor            ( fmap, (<$>) )
import Data.Maybe              ( Maybe(Nothing, Just) )
import Data.Tuple              ( fst )
import Data.Typeable           ( Typeable )
import Prelude                 ( Integer, succ, pred, error )
import System.IO               ( IO )

#if __GLASGOW_HASKELL__ < 701
import Prelude                 ( fromInteger )
import Control.Monad           ( fail, (>>=) )
#endif

-- from base-unicode-symbols:
import Data.Eq.Unicode       ( (≡) )
import Data.Function.Unicode ( (∘) )
import Data.Monoid.Unicode   ( (⊕) )

-- from concurrent-extra (this package):
import           Control.Concurrent.Lock ( Lock )
import qualified Control.Concurrent.Lock as Lock
    ( new, newAcquired, acquire, release, wait )

import Utils ( mask, mask_ )

--------------------------------------------------------------------------------
-- Reentrant locks
--------------------------------------------------------------------------------

{-| A reentrant lock is in one of two states: \"locked\" or \"unlocked\". When
the lock is in the \"locked\" state it has two additional properties:

* Its /owner/: the thread that acquired the lock.

* Its /acquired count/: how many times its owner acquired the lock.
-}
newtype RLock = RLock {un ∷ MVar (State, Lock)} deriving (Eq, Typeable)

{-| The state of an 'RLock'.

* 'Nothing' indicates an \"unlocked\" state.

* @'Just' (tid, n)@ indicates a \"locked\" state where the thread identified by
@tid@ acquired the lock @n@ times.
-}
type State = Maybe (ThreadId, Integer)

--------------------------------------------------------------------------------
-- * Creating reentrant locks
--------------------------------------------------------------------------------

-- | Create a reentrant lock in the \"unlocked\" state.
new ∷ IO RLock
new = do lock ← Lock.new
         RLock <$> newMVar (Nothing, lock)

{-| Create a reentrant lock in the \"locked\" state (with the current thread as
owner and an acquired count of 1).
-}
newAcquired ∷ IO RLock
newAcquired = do myTID ← myThreadId
                 lock ← Lock.newAcquired
                 RLock <$> newMVar (Just (myTID, 1), lock)

--------------------------------------------------------------------------------
-- * Locking and unlocking
--------------------------------------------------------------------------------

{-| Acquires the 'RLock'. Blocks if another thread has acquired the 'RLock'.

@acquire@ behaves as follows:

* When the state is \"unlocked\", @acquire@ changes the state to \"locked\"
with the current thread as owner and an acquired count of 1.

* When the state is \"locked\" and the current thread owns the lock @acquire@
only increments the acquired count.

* When the state is \"locked\" and the current thread does not own the lock
@acquire@ /blocks/ until the owner releases the lock. If the thread that called
@acquire@ is woken upon release of the lock it will take ownership and change
the state to \"locked\" with an acquired count of 1.
-}
acquire ∷ RLock → IO ()
acquire (RLock mv) = do
  myTID ← myThreadId
  mask_ $ let acq = do t@(mb, lock) ← takeMVar mv
                       case mb of
                         Nothing → do Lock.acquire lock
                                      putMVar mv (Just (myTID, 1), lock)
                         Just (tid, n)
                           | myTID ≡ tid → let !sn = succ n
                                           in putMVar mv (Just (tid, sn), lock)
                           | otherwise → do putMVar mv t
                                            Lock.wait lock
                                            acq
          in acq

{-| A non-blocking 'acquire'.

* When the state is \"unlocked\" @tryAcquire@ changes the state to \"locked\"
(with the current thread as owner and an acquired count of 1) and returns
'True'.

* When the state is \"locked\" @tryAcquire@ leaves the state unchanged and
returns 'False'.
-}
tryAcquire ∷ RLock → IO Bool
tryAcquire (RLock mv) = do
  myTID ← myThreadId
  mask_ $ do
    t@(mb, lock) ← takeMVar mv
    case mb of
      Nothing → do Lock.acquire lock
                   putMVar mv (Just (myTID, 1), lock)
                   return True
      Just (tid, n)
        | myTID ≡ tid → do let !sn = succ n
                           putMVar mv (Just (tid, sn), lock)
                           return True
        | otherwise → do putMVar mv t
                         return False

{-| . -}
release ∷ RLock → IO ()
release (RLock mv) = do
  myTID ← myThreadId
  mask_ $ do
    t@(mb, lock) ← takeMVar mv
    let err msg = do putMVar mv t
                     error $ "Control.Concurrent.RLock.release: " ⊕ msg
    case mb of
      Nothing → err "Can't release an unacquired RLock!"
      Just (tid, n)
        | myTID ≡ tid →
            if n ≡ 1
            then do Lock.release lock
                    putMVar mv (Nothing, lock)
            else let !pn = pred n
                 in putMVar mv (Just (tid, pn), lock)
        | otherwise → err "Calling thread does not own the RLock!"

--------------------------------------------------------------------------------
-- * Convenience functions
--------------------------------------------------------------------------------

{-| A convenience function which first acquires the lock and then performs the
computation. When the computation terminates, whether normally or by raising an
exception, the lock is released.

Note that: @with = 'liftA2' 'bracket_' 'acquire' 'release'@.
-}
with ∷ RLock → IO α → IO α
with = liftA2 bracket_ acquire release

tryWith ∷ RLock → IO α → IO (Maybe α)
tryWith l a = mask $ \restore → do
  acquired ← tryAcquire l
  if acquired
    then do r ← restore a `onException` release l
            release l
            return $ Just r
    else return Nothing

wait ∷ RLock → IO ()
wait l = mask_ $ acquire l >> release l

--------------------------------------------------------------------------------
-- * Querying reentrant locks
--------------------------------------------------------------------------------

{-| Determine the state of the reentrant lock.

Note that this is only a snapshot of the state. By the time a program reacts on
its result it may already be out of date.
-}
state ∷ RLock → IO State
state = fmap fst ∘ readMVar ∘ un

-- The End ---------------------------------------------------------------------
RustemSoft provides the common controls you always wanted. Show date/time and numeric text boxes with simple and easy-to-manage properties. Provide end-users with easy-to-use Calculator interface. Create especial style buttons with mouse over and click effects. Make anything appear as IP Address, SS#, Phone numbers, etc.
These elements are chock full of functionality that you won't find in the Microsoft .NET 2.0 controls, making it easy to build professional and compelling user interfaces. RustemSoft.Controls .NET 2.0 assembly from RustemSoft is a WinForms components software package specifically designed for .NET 2.0 developers. The assembly allows you to use all strengths of the MS .NET 2.0 IDE without waiving the user interface elements your customers need.
RustemSoft.Controls is a .NET 2.0 (MS Visual Studio 2005 and later) class library with several powerful controls, fully integrated with the Microsoft Visual Studio .NET 2.0 IDE and especially designed for easy inserting and arranging data on your customer .NET 2.0 Windows Forms. With its complex features and user-demanded functionalities, you can create rich and usable application interfaces with:
RustemSoft Combobox Control
XL Button Control
DateTime box Control
TimeUpDown box Control
Numeric box Control
Calculator box Control
Text Fractions box Control
Memo box Control
DataGridViewColumns .NET 2.0 assembly
More about DataGridViewColumns.dll
Download DataGridViewColumns.dll
Order DataGridView Columns
Skater .NET Obfuscator
More about Skater .NET Obfuscator
Download Skater .NET Obfuscator
Order Skater .NET Obfuscator
RustemSoft Combobox Control
The RustemSoft.Controls library contains a combobox component for your .Net form. It is not just a dropdown combobox. This .Net 2.0 RustemSoft.Controls RSComboBox control has the following attractive features:
This combobox automatically fills the text at the cursor with the value of its dropdown values list. As characters are typed, the RSComboBox component finds the closest matching item from the list and automatically fills in the remaining characters. Appended characters are highlighted so that they are replaced by any further typing. For this auto-filling feature you can setup the character case sensibility.
This RustemSoft Control gives you the ability to instantly update dropdown values with a really simple and friendly user interface. When you click the additional combo dropdown button a dictionary of its list values will be displayed below the combobox. You can update values, and insert and delete rows in the dictionary grid, which is filled by data from a related datatable. You can also use RSComboBox simply as an easy dropdown combobox.
You can set its DataSource, DisplayMember, and ValueMember to bind the combobox to a foreign table.
Syntax
VB: Dim MyRSCombo As RustemSoft.Controls.RSComboBox
C#: RustemSoft.Controls.RSComboBox MyRSCombo;
XML Converter is available!
More about XML Converter
Download XML Converter
Order XML Converter
Parameters
DataSource - a source for RSComboBox control values list as System.Data.DataTable
DisplayMember - field to display in combo as Integer (index of column) or as String (name of table column)
ValueMember - field with values, which binds to RSComboBox.
XL Button Control
A Button control is a control which we click and release to perform some action. RustemSoft XL Button .NET 2.0 Control allows you to provide the .NET winforms interface your users are requesting, while supporting the utterly powerful Microsoft .NET 2.0 (MS Visual Studio 2005) Windows Forms functionalities.
RustemSoft XL Button could be presented in two styles on your .NET Form: XP style and Vista style.
The XLButton class makes the MS .NET Form control one of the most powerful controls that you ever had used. The RustemSoft XLButton .NET component on a .NET Form control gives you ability to define custom functionality for items on the Form beyond the edit, delete, select, and hyperlink features of other .NET 2.0 contols.
The XLButton .NET component has a pushbutton-style only. The button captions can be a text read from a database.
When you define a button control, you specify a command associated with the button. The Click event of XLButton class occurs when the user clicks the button, the button's command is passed to a container where you can handle it with your custom code.
It gives you ability to adjust XL Button appearance on your .NET Forms by setting up several Colors properties of the class.
Syntax
VB: Dim MyXLButton As RustemSoft.Controls.XLButton
C#: RustemSoft.Controls.XLButton MyXLButton;
Handled Event
RustemSoft XLButton control Click event runs when the XLButton control is clicked.
You have to have the following specification in your code:
VB .NET
Private Sub MyXLButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyXLButton.Click
......
End Sub
C#
public void MyXLButton_Click(object sender, System.EventArgs e)
{
..................
}
DateTime box Control
Do you have a TextBox object on your .NET Windows Form that holds a DateTime value? The RustemSoft DateTime box Control can help you design a solution that will meet your needs.
The DateTime box control is used to allow the user to insert formatted date and time, and to display that date/time on your .NET 2.0 Form.
Syntax
VB: Dim MyDateTimeBox As RustemSoft.Controls.DateTimeBox
C#: RustemSoft.Controls.DateTimeBox MyDateTimeBox;
TimeUpDown box Control
The RustemSoft TimeUpDown Box represents a Time up-down control that displays Time values. It allows you to show a TimeUpDownBox control on a .NET Form.
The TimeUpDownBox control contains a single Time value that can be incremented or decremented by clicking the up or down buttons of the control in a TimeUpDown Box. You may also enter in a value.
The Time display may be formatted by setting the TimeFormat property that can have the following values:
HHMM12
HHMMSS12
HHMM24
HHMMSS24
Syntax
VB: Dim MyTimeBox As RustemSoft.Controls.TimeUpDownBox
C#: RustemSoft.Controls.TimeUpDownBox MyTimeBox;
Numeric box Control
This formatted intelligent Numeric box control can mask the numbers, digits, decimal and checks the validation, and automatically set the delimiter location. You may input any number. If there is a numeric scale defined with the field, it is used to format the display and the input.
Syntax
VB: Dim MyNumericBox As RustemSoft.Controls.NumericBox
C#: RustemSoft.Controls.NumericBox MyNumericBox;
Calculator box Control
Calculator box control provides a calculator environment with Real and Complex arithmetic support.
Syntax
VB: Dim MyCalculatorBox As RustemSoft.Controls.CalculatorBox
C#: RustemSoft.Controls.CalculatorBox MyCalculatorBox;
Parameters
SpecialFunctionsVisible - as Boolean. Define if Special Functions buttons are Visible in Calculator. Default is False.
CloseByEqualButton - as Boolean. Close Calculator Window by clicking Equal '=' Button. Default is False.
DecimalLength - as integer, specifies a fixed decimal length. Default is empty numeric (null/Nothing).
Text Fractions box Control
This formatted intelligent RustemSoft Text Fractions box control can mask the text fractions. It gives you ability to manage the IP Address, SS#, Phone numbers, etc., and checks the validation, and automatically set the delimiter location.
Syntax
VB: Dim MyFractionBox As RustemSoft.Controls.FractionBox
C#: RustemSoft.Controls.FractionBox MyFractionBox;
To identify each text fraction you must set values of ten properties where settings of the fractions are stored. The first five properties (I_stFractionFormat, II_ndFractionFormat, III_rdFractionFormat, IV_thFractionFormat, and V_thFractionFormat) define each fraction's format, while five more properties (I_stFractionLength, II_ndFractionLength, III_rdFractionLength, IV_thFractionLength, and V_thFractionLength) specify each fraction's length.
Memo Box control
Syntax
MemoBox()
Some Properties
Caption - a title of Memo editor window as String
showCancelButton - Show Cancel Button or not for Memo box as Boolean
Nationalize_Button_Ok_Text - a title of Button Ok as String
Suppose you are building a project using Visual Studio .NET, and you decide that you want to start consuming RustemSoft Controls .NET assembly to use its components on Windows Forms. The first step that you will generally take is, you will add RustemSoft Control to a form. To do that you need to open ToolBox in .NET IDE. You can open ToolBox by clicking View-->ToolBox menu item. Select the RustemSoft Controls tooltab. You add controls either by drag-drop or by double clicking on the control.
If for some reason you can not find the "RustemSoft Controls" tooltab in your .NET toolbox you can add it manually. To add RustemSoft Controls to the Toolbox do the following:
Right-click the Toolbox and choose Add Tab from the shortcut menu. Insert "RustemSoft Controls Trial" as a name of the tab. Double click the new tooltab to expand it.
Right-click the Tooltab and choose Add/Remove Items from the shortcut menu. ("Customize Toolbox" in oldest .NET versions) The Customize Toolbox dialog box opens.
Choose the .NET Framework Components tab and click Browse. Browse to the folder where the RustemSoft Controls library is located and select RustemSoft.ControlsTrial.dll. RustemSoft Controls appear in the list of components in the Customize Toolbox dialog box.
Close the window by clicking Ok.
The RustemSoft Controls are added to the new tab of the .NET Toolbox.
How to add a reference to the RustemSoft.Controls .NET assembly, which is residing in some directory on your PC hard drive? Visual Studio .NET will then add a new item under Solution Explorer called 'References', and it will create a row node underneath it called RustemSoft.Controls.
In order to add the reference to RustemSoft.Controls .NET assembly please follow the steps:
Select Project from the menu bar
Choose Add Reference from the drop down menu
Click on the Browse button
Browse to and choose RustemSoft.Control.Controls
C#
using RustemSoft.Controls;
C++
#using <RustemSoft.ControlsTrial.dll>
using namespace RustemSoft;
Download the RustemSoft.Controls .NET assembly code samples for
VB.NET and C# that show how you can use the RustemSoft Controls in .NET Windows Forms.
Write code resembling elm-lang/html and serialize to either a string or actual HTML
This package copies the entire API of
elm-lang/html, but adds 2 functions:
- toHtml, which serializes the resulting Html.String.Html msg node into a Html.Html msg node
- toString, which serializes the node into a String with optional indentation
Two caveats:
- No lazy nodes, since those can't be expressed in Elm
import Html.String as Html exposing (Html)
import Html.String.Attributes as Attr
import Html.String.Events as Events


type Msg
    = Increment
    | Decrement


counter : Int -> Html Msg
counter count =
    Html.div
        [ Attr.classList
            [ ( "below-zero", count < 0 )
            , ( "counter", True )
            ]
        ]
        [ Html.button [ Events.onClick Decrement ] [ Html.text "-" ]
        , Html.text <| toString count
        , Html.button [ Events.onClick Increment ] [ Html.text "+" ]
        ]


counterAsString : String
counterAsString =
    Html.toString 2 <| counter (-5)


{- Expected output:
<div class="below-zero counter">
  <button>
    -
  </button>
  -5
  <button>
    +
  </button>
</div>
-}
Yes.
- Void elements (br, img, etc.) are handled.
- Boolean attributes are omitted when False, and added as boolean properties when True. I.e. selected True will result in <someNode selected>.
Made with ❤️ and licensed under BSD-3. Fork me and send me some pull-requests!
This is something kind of neat that I recently learned. I have programmed in C for many years (two decades now I guess, wow) and I never knew about this. I'm more of a casual programmer so I don't tend to keep up on every nuance of the language. Anyhow, this is a really neat way to do strings in C, fairly easily! I really love it!
Try this program out, using a C compiler (I use GCC with Code::Blocks), it will compile and run just fine without errors.
The output is just as you would expect: it prints each string, and the strings can be easily changed without a problem.
Has anyone else done this? Thoughts? Besides using C++, I prefer C only and generally compile with the "-std=gnu11" option which uses the latest 2011 C standard.
It's the "simple" way of using structs directly, instead of pointers to structs that we are so used to (BITMAP*, ...)It's only possible if the struct has a fixed size, and it will perform a memcpy of sizeof(your struct) bytes every time you assign - which is wasteful if your struct contains null-terminated strings with a lot of free space.I've more often seen it used for light data, such as a 2D or 3D position, or a RGB triplet.
Note that you can only assign a value with {} on declaration, and only with data which is constant at compile time :string a = {"a NEW string"};string b = { __LINE__ };
These will not compile :
string c;
c = {"a NEW string"}; // nope
string d = {tolower("FOO")}; // nope
It's not just the strings, it's all typedefs initialisation.
Admitting you'd also stored the lenght of string inside the struct:
/* Great example of assigning typedef structs in C (I think it works in all versions) */
#include <string.h>
#include <stdio.h>
typedef struct {
char value[100];
int written;
} string;
int main(int argc, char *arcv[])
{
string a = { "hello" , strlen( "hello") };
printf( "value of a:\"%s\", len:%d\n" , a.value , a.written );
}
Edit:
Has anyone else done this? Thoughts? Besides using C++, I prefer C only and generally compile with the "-std=gnu11" option which uses the latest 2011 C standard.
And it would not be a problem if you don't, at least for structs initialisations.
Edit:
Last thought, this is also valid:
char a[100]="This is a REAL string initialisation";
"Code is like shit - it only smells if it is not yours"Allegro Wiki, full of examples and articles !!
Obviously the tolower one won't because it takes and returns a character, not a string. However, these will work in C99:
string s;
s = (string){"a NEW string"};
s = (string){"a REPLACEMENT string"};
Good luck with MSVC though, MS is still neglecting an 18 year old C standard.
-- "Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham
Oh for sure, but it works, and it's a much simpler, elegant solution I think. It takes some of the pain out of using strings in C.
string d = {tolower("FOO")}; // nope
I would think the reason that wouldn't compile is obvious. tolower() returns an int. Not a string.
c = {"a NEW string"}; // nope
That will not compile unless you typecast it as a (string), otherwise C doesn't know the size of it, which is why we can't normally assign a string in this way. You're basically trying to assign a normal array of chars to a string struct, you need to typecast it as was already shown.
c = (string){"a NEW string"}; // yup
When you create a new integer in C, it knows the size of it, 32bits for a normal int, so it can allocate exactly that much. When you create a char in C, it's the same deal, C knows the size of char, 8bit, so it can allocate the memory needed. When you create a `char *` you are creating a pointer to memory which will contain an unknown number of characters, if you assign a string when you create it with:
char *c = "some text here";
Than C knows how much memory to assign for it given what you provided, so it allocates that much when it is created. But you cannot do that later on with this pointer as later on, C only knows that *c is 32 (or 64) bits in size, and points to memory, but it does not know anything else, so it cannot assign a new string to that without risking overflows.
This is why you need to check the size of the string with special C string functions like strlen().
I'm not 100% clear on why structs work well for this though, but I assume many of the normal C rules for variables and pointers still apply.
If anyone has a clear reason why this works for structs, I would be interested. I would say they have a fixed size, but clearly in these examples, the strings were not a fixed size each time and it still worked. It's not enough for me to know it works, I NEED to know why.
Good luck with MSVC though, MS is still neglecting an 18 year old C standard.
This is exactly why I refuse to use MSVC. I stick to GNU C, specifically Code::Blocks with MinGW 5.3.0 at this time with the -std=gnu11 switch so i get the GNU version of the 2011 standard. There's also -std=c11, but the gnu version fixed some problems, I forget what now.
It's just because you had enough room in char value. If not it would have sigseved.
Why it works because string toto and a string *toto=malloc(1*sizeof(toto)) are basically the same, except that one is alloced by you and dynamically.
Pascal strings always were limited to 255 characters (since they stored the length in the first byte) - and who would ever need a string longer than that anyway?
But yes, it's basically two things:
b = a;
This always works in C if a and b have the same struct type, and is identical to doing a full memcpy of the entire struct (always 100 characters in your case).
a = (string) {"another string!"};
This is a new feature introduced in C99. You can even do this:
a = (string) {.value = "another string!"};
--"Either help out or stop whining" - Evert
Fascinating stuff. I really like this though.
So, because this is a struct, the memory is deallocated at program exit? That alone could make this worth doing, at least for more simpler tasks anyhow.
I'm not sure if it's clear, but there are two distinct behaviors :
char *c = "some text here";
// identical to
char *c;
c = "some text here"
The only thing allocated here is a pointer, and assuming this piece of code is in a function, it is allocated on the stack - so it will be handed back when function return."some text here" is a literal array of characters. The compiler eliminates duplicates, and stores one instance in a piece of memory that's accessable fom everywhere in your program. The instructions makes "c" point to this part of memory. It is read-only, so don't try to modify the characters.
char c[] = "ABCD";
// identical to
char c[5] = "ABCD";
// identical to
char c[5] = { 'A', 'B', 'C', 'D', '\0' };
Allocates an array of 5 characters and immediately initializes it with contents : A B C D \0The string is writable in this case.
The difference also takes place when you do this :
typedef struct {
char value[100];
} value_string;
typedef struct {
char *value; // only a pointer
} ref_string;
value_string a = {"TEST"};
ref_string b = {"TEST"}; // looks like the above, isn't it ?
a.value[0] = 't'; // no problem, this is part of the struct that you declared
b.value[0] = 't'; // No! this piece of memory doesn't belong to you!
So, because this is a struct, the memory is deallocated at program exit? That alone could make this worth doing, at least for more simpler tasks anyhow.
Yes, it is, at exit. If your concern is memory deallocation, remember that in these modern days the OS will free everything for you if you don't.
Edit:
Audric, I think that b->value[0] would work, admitting it's initialized.
No, the syntax that I typed is correct, but it causes a runtime error.
Output on Jdoodle:
a: BEST Address: 140723708153920
b: TEST Address: 94500360522128
const_string: TEST Address: 94500360522128
Segmentation fault (core dumped)
I couldn't find an online compiler which didn't crash (or silently halt). Note that the last two addresses are the same : a.value and const_string are actually pointing to the same memory area.
I just wanted to raise awareness about how = {"string"} can be misleading. Depending on context, it initializes by copying the entire data, or it merely points to a piece of constant, read-only data.
Since I always find it fascinating, this is the actual assembly created for the Neil-strings:
string a = {"hello"};
movabsq $478560413032, %rax // $478560413032 = 0x6f6c6c6568 = 'h', 'e', 'l', 'l', 'o', 0, 0, 0
movl $11, %ecx // 11 * 8 = 88
leaq 8(%rsp), %rdi // move address of 8 after "a" to RDI
movq %rax, (%rsp) // move "hello\0\0" into "a"
movq %rbp, %rax // move 0 into RAX
rep stosq // place 88 0-bytes into "a" (RAX to RDI)
movl $0, (%rdi) // move remaining 4 0-bytes into "a" to make it 100
What's interesting is that there is no static string containing "hello" at all - the compiler figured out to just encode it as a number.
a = (string) {"another string!"};
movabsq $2338042655863172705, %rax // "another "
leaq 16(%rsp), %rdi // move address 16 after "a" to RDI
movl $10, %ecx // 10 * 8 = 80
movq %rax, (%rsp) // move "another " into "a"
movabsq $113723913172083, %rax // "string!"
movq %rax, 8(%rsp) // move "string!" into "a"
movq %rbp, %rax // move 0 into RAX
rep stosq // place 80 0-bytes into "a"
movl $0, (%rdi) // remaining 4 0-bytes to make it 100 again
I found it interesting that there is zero difference between initialization and assignment - and because the string does not fit into a single number it splits it into two numbers this time!
Instead of using one of the "rep" instructions it simply uses 13 moves for a 100 byte string.
You've made me want to check as well From my tests with godbolt.org and a stack variable char mydata[] = "ABCDEFGHI";, GCC-x64 seems to favor the "unrolled loop", and clang seems to favor a kind of memcpy(), having stored the entire string in a data segment.
Ohh, I didn't know godbolt.org. And yes, I can see that - clang inserts an actual "call memcpy" for the "b = a", very disappointing. The icc compiler actually uses "rep movsd". Three compilers three completely different implementations of the same "b = a;" statement
Now the next step would be to benchmark the different versions!
Since I always find it fascinating, this is the actual assembly created for the Neil-strings:
That was fascinating. I looked at some of the assembly, but I have less experience with it than you do. I was curious as to how all of that ended up as assembly as well. So glad you posted this. | https://www.allegro.cc/forums/thread/616883/1030231 | CC-MAIN-2018-05 | refinedweb | 1,911 | 68.81 |
When building an application powered by Next.js it's probable that you'll need to fetch data from either a file, an internal API route or an external API such as the Dev.to API. Moreover, determining what data fetching method to use in a Next.js application can easily become confusing - especially as it isn't as simple as making an API request inside your components render function, as you might in a stock React app.
The following guide will help you carefully select the server-side data fetching method that suits your app (FYI you can use multiple methods in a single app). For each method, I have outlined when it runs, it's benefits and an example of when you could use the method in your Next.js application.
The following methods fetch data either at build time or on each request before the data is sent to the client.
getStaticProps (Static Generation)
Fetch data at build time.
The
getStaticProps method can be used inside a page to fetch data at build time, e.g. when you run
next build. Once the app is built, it won't refresh the data until another build has been run.
Note: Added in Next 9.3
Usage:
export async function getStaticProps(context) { const res = await fetch(``) const data = await res.json() if (!data) { return { notFound: true, } } return { props: {}, // will be passed to the page component as props } }
Benefits:
- It enables the page to be statically generated and will produce fast load times of all the data fetching methods.
- If each page in your app uses
getStaticProps(or no server-side data fetching methods) then Next.js will be able to export it into static HTML using
next export. This is advantageous if you want to create a static site that can be hosted on places such as GitHub Pages.
- The data is rendered before it reaches the client - great for SEO.
Example usage:
Imagine you have a personal blog site that renders pages from markdown files at build time.
getStaticProps can read the files and pass the data into the page component at build time. When you make a change to a blog entry, you rebuild the site to see the changes. ameira.me, a site I built, uses this method - each time Ameira makes a change to her portfolio, Vercel automatically rebuilds and republishes the site.
getServerSideProps (Server-side Rendering)
Fetch data on each request.
The
getServerSideProps method fetches data each time a user requests the page. It will fetch the data before sending the page to the client (as opposed to loading the page and fetching the data on the client-side). If the client makes a subsequent request, the data will be fetched again.
Note: Added in Next 9.3
Usage:
export async function getServerSideProps(context) { const res = await fetch(`https://...`) const data = await res.json() if (!data) { return { notFound: true, } } return { props: {}, // will be passed to the page component as props } }
Benefits:
- The data is refreshed each time a client loads the page meaning that it is up to date as of when they visit the page.
- The data is rendered before it reaches the client - great for SEO.
Example usage:
getServerSideProps is perfect for building an application that requires the client to see the most up to date information, but isn't refreshed while the client is on the page (see client-side for constantly updating information). For example, if I had a page on my personal site that displayed information about my last GitHub commit or my current Dev.to stats, I'd want these fetched each time a page is viewed.
getInitialProps (Server-side Rendering)
Fetch data on each request.
getInitialProps was the original way to fetch data in a Next.js app on the server-side. As of Next.js 9.3 you should use the previously discussed methods over
getInitialProps but I'm including it in this article because:
- It can still be used - although the Next.js maintainers advise you not to as
getStaticPropsand
getServerSidePropsenable you to choose from static or server-side data fetching.
- Knowing about
getInitialPropshelps when you come across an old Stack Overflow query that mentions it, and also that you shouldn't just copy and paste that solution!.
Note: If you're on Next.js 9.3 or above, use the two methods above.
Usage:
function Page({ stars }) { return <div>Next stars: {stars}</div> } Page.getInitialProps = async (ctx) => { const res = await fetch('') const json = await res.json() return { stars: json.stargazers_count } } export default Page
Benefits:
Same as
getServerSideProps - use
getServerSideProps!
Example usage:
Same as
getServerSideProps - use
getServerSideProps!
How to decide which one to use?
When using Next.js, I always aim to make each page static. This means that I try to avoid using
getServerSideProps and favour
getStaticProps. However, if the data that I am fetching changes often then of course I will use
getServerSideProps. I never use
getInitialProps anymore.
So normally I try
getStaticProps and if that is causing data to become outdated then I move to
getServerSideProps.
A word on client-side data fetching
This article hasn't covered any client-side data fetching methods but you can use the
useEffect hook to make the request or the
useSwr custom hook made by Vercel engineers which implements
stale-while-revalidate.
SWR is a strategy to first return the data from cache (stale), then send the fetch request (revalidate), and finally come with the up-to-date data.
SWR Usage:
import useSWR from 'swr' function Profile() { const { data, error } = useSWR('/api/user', fetcher) if (error) return <div>failed to load</div> if (!data) return <div>loading...</div> return <div>hello {data.name}!</div> }
Final words
In this article, I've introduced three Next.js methods that can be used to fetch data either at build time or before each client request.
Liked this article? Hit the like button!
Thanks for reading!
Discussion (5)
Nice article. You could also say a word about using
getStaticPropswith Incremental Static Regeneration which allows you to use both statically generated content with dynamic updates when needed.
Here's what Next.JS documentation says about it:
Inspired by stale-while-revalidate, background regeneration ensures traffic is served uninterruptedly, always from static storage, and the newly built page is pushed only after it's done generating.
Example
Thanks for adding this! I’ve used it before and it’s pretty slick.
Easy to understand and complete. Nice work !
Great article and it's helped me understand the difference between getStaticProps and getServerSideProps. Thanks
Thanks, happy to have helped! | https://dev.to/jameswallis/different-ways-to-fetch-data-in-next-js-server-side-and-when-to-use-them-1jb0 | CC-MAIN-2021-17 | refinedweb | 1,097 | 65.83 |
Java - To find the Second Highest Digit
find second largest number in array java
find second largest number in array java using scanner
java pgm to find the second largest element in an array
find largest number in array in java
find second largest number without using array in java
find third largest number in array without sorting in java
find second largest number in array javascript
I need to:
Write a function that accepts a string and returns the second highest numerical digit in the input as an integer.
The following rules should apply:
Inputs with no numerical digits should return -1
Inputs with only one numerical digit should return -1
Non-numeric characters should be ignored
Each numerical input should be treat individually, meaning in the event of a joint highest digit then the second highest digit will also be the highest digit
For example:
- "abc:1231234" returns 3
- "123123" returns 3
This is my code currently:
public class Solution { public static int secondHighestDigit(String input) { try{ int k = Integer.parseInt(input); char[] array = input.toCharArray(); int big = Integer.MIN_VALUE; int secondBig = Integer.MIN_VALUE; for(int i = 0; i < array.length; i++){ System.out.println(array[i]); for(int n = 0; n < array[i]; n++){ if(array[i] > big) { secondBig = big; big = array[i]; }else if(array[i] > secondBig && array[i] != big){ secondBig = array[i]; } } } System.out.println(secondBig); }catch(Exception e) { System.out.println("-1"); } return -1; } }
Tests:
import org.junit.*; import static org.junit.Assert.*; public class Tests { @Test public void test1() { Solution solution = new Solution(); assertEquals(3, solution.secondHighestDigit("abc:1231234")); } @Test public void test2() { Solution solution = new Solution(); assertEquals(3, solution.secondHighestDigit("123123")); } }
The program should print 3 for abc:1231234 and 123123 but instead it is returning -1 for both.
I am lost are where to go from here. I would be grateful if someone could help. Thanks.
One possible solution is to remove all non numeric characters and sort the string
public int secondGreatest(String s) { String newStr = s.replaceAll("[^0-9]*", ""); if (newStr.isEmpty() || newStr.length() == 1) { return -1; } else { char[] c = newStr.toCharArray(); Arrays.sort(c); return c[newStr.length() - 2] - '0'; } }
Java Program to find Second Largest Number in an Array, We can find the second largest number in an array in java by sorting the array and returning the 2nd largest number. Let's see the full example to find the second� Java Program to find Second Largest Number in an Array with examples of fibonacci series, armstrong number, prime number, palindrome number, factorial number, bubble sort, selection sort, insertion sort, swapping numbers etc.
Don't try to convert your string to an Integer at first - since you already know that it's possible it may contain some non-numeric characters.
Instead, parse through each character in the String, if it is an integer, add it to a list of integers. Let's use an ArrayList so we can add to the list and have it dynamically sized.
Your code should look something like this: (giving you some work to do)
// Iterate through characters in the string // You could also use "substring" so you don't have to deal with chars for (int i = 0; i < s.length(); i++){ char c = s.charAt(i); if(Character.isDigit(c)) { // Convert c to an int appropriately and add it to your list of ints } } // Sort the list of ints in descending order // Now, we write a seperate method public int getNthBiggest(int n) { // Return the nth biggest item (you could just use 2) }
Checking if the string has 0 or 1 "digits" should be trivial enough for you, if you've gotten this far.
WAP to find and print second largest digit in the given number , We do not do your HomeWork. HomeWork is not set to test your skills at begging other people to do your work, it is set to make you think and to� Approach: Find the second largest element in a single traversal. Below is the complete algorithm for doing this: 1) Initialize two variables first and second to INT_MIN as first = second = INT_MIN 2) Start traversing the array, a) If the current element in array say arr[i] is greater than first.
Run this code:
private static int secondHighestDigit(String input) {
String pattern = "(\\d+)"; // Create a Pattern object Pattern r = Pattern.compile(pattern); // Now create matcher object. Matcher m = r.matcher(input); if (m.find( )) { String foundedDigits = m.group(0); System.out.println("Found value: " + m.group(0) ); char[] characters = foundedDigits.toCharArray(); List<Integer> integers = new ArrayList<>(); for(char c : characters){ integers.add(Integer.parseInt(c +"")); } if(integers.size()==1) return -1; Collections.sort(integers); System.out.println("One Biggest int: " + integers.get(integers.size()-1)); System.out.println("Two Biggest int: " + integers.get(integers.size()-2)); return integers.get(integers.size()-2); }else { System.out.println("NO MATCH"); return -1; } }
*[Code challenge]* find the second largest digit in a given number , find the second largest digit in a given number without using arrays and string operation. Given a number N.The task is to find the largest and the smallest digit of the number. Examples : Input : N = 2346 Output : 6 2 6 is the largest digit and 2 is samllest
Find Second largest element in an array, Find Second largest element in an array. 14-06-2017. Given an array of integers, our task is to write a program that efficiently finds the second largest element� And if digit is smallest than value at variable smallestNumber replace the value at smallestNumber with the new digit obtained; Next dividing the number by 10 so that after getting first digit it will be removed from original number. Loop will be continue upto number > 0.
Find Second largest digit in given number without using any , using System; namespace ConsoleApplication1 { class Program{ static void Main (string[] args) {int num,k=0; int a = 0, b = 0, tmp = 0;num� C Program to find Second largest Number in an Array. This program for finding the second largest number in c array asks the user to enter the Array size, Array elements, and the Search item value. Next, this C program will find the Second largest Number in this Array using For Loop.
Java program to find second largest number in an array, Java program to find second largest number in an array. By candid | Posted : 18 Apr, 2016 | Updated : 2 Jul, 2017. Step 1: Iterate the given array. Step 2 (first if� if the number is 52163429 then highest and second highest are 9 & 6 . if the number is 45886 the highest and second highest should be 8 & 6.Also it should be done using loop and if statements – user2991964 Nov 14 '13 at 12:22
- "Where to go from here" is a very vague question. Please try to describe what issue you are facing with your program.
- The program should print 3 for abc:1231234 and 123123 but instead it is returning -1
- Maybe it helps to split the problem into smaller ones. We need to split the string into its parts. We need to detect if it is a number. We need to collect these in a list. Then sort the list. Then find the last and the second last item in that list. And then we would have very specific questions to ask for.
- Well that's kind of basic, you are only returning a hardcoded -1 from your method. You need to add another return statement earlier in the method.
- @JoakimDanielson Where should I put this Return statement and what should the value of this be? Thanks.
- Hi. I am getting an error when it trys "Arrays.sort(c);" It says Solution.java:9 error: cannot find symbol. Arrays.sort(c); EDIT: Just needed to import Java arrays.
- Would you be open to a DM?
- Sorry, but what's a DM ?
- Hi. It means Direct Message. I just wanted to know if you could explain to me what this line of code does?
return c[newStr.length() - 2] - '0';I am just curious is all because I don't understand what the
-2] '0';means. Thanks.
- I sent a message above but forgot to @ you. | http://thetopsites.net/article/59140466.shtml | CC-MAIN-2021-10 | refinedweb | 1,370 | 64.51 |
This C# Program Finds the Cube Root of a Given Number. Here the cube root of the given number is obtained using the math function.
Here is source code of the C# Program to Find the Cube Root of a Given Number. The C# program is successfully compiled and executed with Microsoft Visual Studio. The program output is also shown below.
/*
* C# Program to Find the Cube Root of a Given Number
*/
using System;
class CubeRoot
{
public static void Main()
{
double num, res;
Console.Write("Enter the Number : ");
num = double.Parse(Console.ReadLine());
res = Math.Ceiling(Math.Pow(num, (double)1 / 3));
Console.Write("Cube Root : " + res);
}
}
Here is the output of the C# Program:
Enter the Number : 8 Cube Root : 2
Sanfoundry Global Education & Learning Series – 1000 C# Programs.
If you wish to look at all C# Programming examples, go to 1000 C# Programs. | http://www.sanfoundry.com/csharp-program-cuberoot-number/ | CC-MAIN-2018-09 | refinedweb | 145 | 67.65 |
how to make widget (createWindowContainer) fill parent widget
I have this problem for months. I try to implement a webview on iOS with QtWebView by Qml and embed the webview into a widget by QWidget::createWindowContainer (following a online tutorial). But I just can't get this widget to fill the parent widget, nor change its size at all.
I have write the following minimal example for explaining the problem:
main.cpp
#include <QApplication> #include <QQuickView> #include <QBoxLayout> #include <QWidget> int main(int argc, char *argv[]) { QApplication a(argc, argv); QWidget *mainwindow = new QWidget(); mainwindow->setStyleSheet("background-color:red;"); // so that we can see the problem more clearly QHBoxLayout *layout = new QHBoxLayout(mainwindow); QQuickView *view = new QQuickView(); QWidget* container = QWidget::createWindowContainer(view, mainwindow); // container->setMinimumSize(200, 200); // container->setMaximumSize(200, 200); // the displaying size doesn't change with or without these lines // container->setFocusPolicy(Qt::TabFocus); view->setSource(QUrl("qrc:///webview.qml")); layout->addWidget(container); mainwindow->setLayout(layout); mainwindow->show(); return a.exec(); }
webview.qml
import QtQuick 2.2 import QtWebView 1.0 Item { visible: true anchors.fill: parent WebView { anchors.fill: parent url: "" } }
When I run this mini program on iphonesimulator in QtCreator, the layout is like the following screenshots: We can see the container doesn't fill the top area. And even when the parent widget is much smaller (in my actual program), the container has always this same size. When I rotate the iphonesimulator, it looks then like this!.
Could anyone please help to solve this? It's annoying me for months. Thanks!
You seem to try setting limits to the size, but for setting the size you need to use serGeometry(), setWidth() etc... according to what the mainwindow size is.
@mvuori those lines with size setting are commented. I have on Android tested, everything works and I can also change the size of container. Is this a bug on iOS?
- JKSH Moderators
Have you tried QQuickWidget? It has now replaced QWidget::createWindowContainer().
@JKSH WOW!!! With QQuickWidget is the layout perfect! Thanks! But the webview remains now white and no page (google homepage) ist displayed. Do you know why?
@JKSH
For example, with following code and the same webview.qml previously, I get only white screen.
QQuickWidget *view = new QQuickWidget(); view->setSource(QUrl("qrc:/webview.qml")); view->show();
- JKSH Moderators
But the webview remains now white and no page (google homepage) ist displayed. Do you know why?
I'm not sure, sorry. I have never used Qt WebView myself.
If you still can't find the answer, maybe you can subscribe to the Interest Mailing List and ask there: -- Qt engineers are active in that list. | https://forum.qt.io/topic/64561/how-to-make-widget-createwindowcontainer-fill-parent-widget | CC-MAIN-2018-09 | refinedweb | 439 | 51.04 |
PSR-Duh!
In a previous lesson here on Nettuts+, you learn about PSR; however, that article didn't detail the process of integrating that coding style into your projects. Let's fix that!
Note: this article assumes that you've read PSR-Huh?, and understand what PSR refers to. Let's begin with the first standard: PSR-0.
PSR-0 - The Autoloading Standard
In the past, we included PHP files in one of two ways:
- Using a giant block of include statements at the top of each file.
- List all includes in a single file and include that single file within your project.
There are pros and cons to both of these approaches, but I think we can all agree that neither is an optimal or modern solution. PHP 5 introduced the concept of autoloading files based on their class names, so PSR-0 aims to keep filenames consistent.
Namespaces have nothing to do with filenames or autoloading; you can technically declare different namespaces in the same file. For example, the following code is perfectly valid.
```php
<?php

namespace Nettuts;

class Hello
{
    public function __construct()
    {
        echo "Nettuts+";
    }
}

namespace Gabriel;

class Hello
{
    public function __construct()
    {
        echo "Gabriel";
    }
}

$h = new \Nettuts\Hello();
$h = new \Gabriel\Hello();
```
There are two `Hello` classes in this single file, but they reside within different namespaces.
Hello() classes on their respective namespaces. The first outputs "Nettuts+", while the second echos "Gabriel." Namespaces allow you to differentiate between two classes with the same name, much like you might be used to with folders on your desktop. The PSR-0 standard simply leverages the benefits of namespaces, making it easy to autoload your classes. By consistently naming your files, you can create a function that locates the necessary files automatically.
Be sure to read the full standard, but to summarize:
- Each class must be namespaced with the project's (or creator's) name.
- Underscores within the class name should be converted to directory separators.
- Files must have the `.php` extension.
For example, a class reference of:
\Nettuts\Database\SQL_Postgres
if following PSR-0, should translate to this path:
./Nettuts/Database/SQL/Postgres.php
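To make that mapping concrete, the translation can be sketched as a small helper function. This is illustrative only: the function name and default base path are made up for the example, and in practice Composer (or the sample loader shown later) performs this translation for you.

```php
<?php

// Illustrative sketch of the PSR-0 class-name-to-path translation.
// The function name and base path are invented for this example.
function psr0Path($className, $basePath = './')
{
    $className = ltrim($className, '\\');
    $path      = '';
    if ($lastNsPos = strrpos($className, '\\')) {
        // Namespace separators become directory separators.
        $namespace = substr($className, 0, $lastNsPos);
        $className = substr($className, $lastNsPos + 1);
        $path      = str_replace('\\', '/', $namespace) . '/';
    }
    // Underscores in the class name also become directory separators.
    return $basePath . $path . str_replace('_', '/', $className) . '.php';
}

echo psr0Path('\Nettuts\Database\SQL_Postgres');
// ./Nettuts/Database/SQL/Postgres.php
```

Note that only underscores in the class name itself are translated; underscores in the namespace portion are left alone.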
How might we implement this functionality? The most obvious solution is to use Composer, which ships with a PSR-0 compliant autoloader. If you leverage Composer in your projects (and you should), then opt for its autoloader, rather than writing your own.
A PSR-0 compliant loader allows you to specify a base path, informing the loader which directory to look in first. To get started, create a simple
composer.json file that contains the following JSON:
```json
{
    "autoload": {
        "psr-0": {
            "Nettuts": "./",
            "Gmanricks": "vendor/"
        }
    }
}
```
This JSON file tells Composer that we want to use the PSR-0 standard to autoload all
Nettuts-namespaced files with the current directory (the root folder) as the base path. We also want to autoload all classes with the
Gmanricks namespace, relative to the
vendor folder (e.g.
./vendor/Gmanricks/ClassName).
Now, type `composer install` to generate the autoload files, or `composer dump-autoload` on subsequent edits to regenerate them. Also, don't forget to require the autoloader somewhere early in your project.
```php
<?php

require 'vendor/autoload.php';
```
Composer is your best option, but there may be scenarios when you want a small, simple autoloader. The PHP-FIG provides a sample autoloader that you can use:
```php
function __autoload($className)
{
    $className = ltrim($className, '\\');
    $fileName  = '';
    $namespace = '';
    if ($lastNsPos = strrpos($className, '\\')) {
        $namespace = substr($className, 0, $lastNsPos);
        $className = substr($className, $lastNsPos + 1);
        $fileName  = str_replace('\\', DIRECTORY_SEPARATOR, $namespace) . DIRECTORY_SEPARATOR;
    }
    $fileName .= str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';

    require $fileName;
}
```
It's important to note that this loader attempts to load all classes using the PSR standard in the current directory.
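One caveat: the sample above uses the magic `__autoload()` hook, which is deprecated as of PHP 7.2 and removed in PHP 8; modern code registers the same logic with `spl_autoload_register()` instead. Here is a self-contained sketch of that approach; the demo class file is written to a temporary directory at runtime purely so the example runs on its own.

```php
<?php

// Register a PSR-0 style loader with spl_autoload_register() instead of
// the deprecated __autoload() hook. To keep this runnable on its own, we
// first write a demo class file into a temporary base directory.
$base = sys_get_temp_dir() . '/psr0-demo';
@mkdir($base . '/Nettuts', 0777, true);
file_put_contents(
    $base . '/Nettuts/Greeter.php',
    "<?php namespace Nettuts; class Greeter { public function hello() { return 'Nettuts+'; } }"
);

spl_autoload_register(function ($className) use ($base) {
    $className = ltrim($className, '\\');
    $fileName  = '';
    if ($lastNsPos = strrpos($className, '\\')) {
        $namespace = substr($className, 0, $lastNsPos);
        $className = substr($className, $lastNsPos + 1);
        $fileName  = str_replace('\\', DIRECTORY_SEPARATOR, $namespace) . DIRECTORY_SEPARATOR;
    }
    $fileName .= str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';

    $path = $base . DIRECTORY_SEPARATOR . $fileName;
    if (is_file($path)) {
        require $path;
    }
});

// Instantiating the class triggers the registered loader.
$greeter = new \Nettuts\Greeter();
echo $greeter->hello();
// Nettuts+
```

You can register several loaders this way; PHP tries each in turn until one defines the class.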
Now that we're successfully autoloading classes, let's move on to the next standard: the basic coding standard.
PSR-1 - The Basic Coding Standard
PSR-1 defines general coding guidelines, which can be divided into two parts.
Naming Conventions
Namespaces allow you to differentiate between two classes with the same name.
As with any programming language, following naming conventions ultimately makes your code easier to read and maintain. Here's a few rules to follow:
- Class names use PascalCase.
- Method names should be in camelCase.
- Constants require all capital letters, separating each word with an underscore (e.g.
CONSTANT_VARIABLE).
Code Conventions:
There's more to it than naming conventions; follow these guidelines as well:
- Only use
<?phpor
<?=in your code. Don't close PHP within a class.
- Files should either declare symbols or use them.
- Files must be in UTF-8 format without BOM for PHP code
Most of these are self-explanatory, but the middle convention is slightly confusing. It essentially dictates that any declaration, be it functions, classes, etc., should be separated into their own files. This not only promotes best practices like code-reuse and separation, but it keeps your code neat and clean.
It's worth mentioning that each PSR standard builds upon the previous PSR standard. As such, to be PSR-1 compliant, you also must follow PSR-0. By following these two standards, your code will be properly namespaced and autoloaded. There really isn't a reason not to follow them.
Yes, some developers complain about PSR and prefer to follow other conventions, but by following this standard, you can share code with everyone without worrying about its consistency. Having said that, nobody is forcing your hand here. It's simply a recommended guideline.
The next standard, PSR-2, dives into the specifics of how you should structure your code.
PSR-2 - The Advanced Coding Standard
PSR-2 dives into the specifics of how you should structure your code.
Next, we come to the one standard that PHP developers struggle with most; in fact, it's the reason why I chose to write this article.
PSR-2 defines many rules, many of which are listed below:
- Four spaces should be used instead of tabs.
- The ideal line length should be under 80 characters, but a soft limit of 120 characters should be imposed on all lines.
- There should be one blank line under the
namespaceand
usedeclarations.
- A method's or class' opening brace must be on its own line.
- A method's or class' closing brace must go on the line immediately after the body.
- All properties and methods require a visibility level.
- The '
abstract' / '
final' keywords should appear before the visibility while '
static' goes after.
- Control structure keywords must be followed by one space.
- A control statement's opening brace should appear on the same line as the statement.
Be sure to view the entire spec for a complete overview.
PSR-2 is just as important as PSR-1 (and PSR-0). It intends to make code easy to read and maintain. But, as they say, "The devil is in the details." There are a lot of details to remember, which can be difficult if your programming habits differ from what the standard defines. Thankfully, if you're on board, there are tools that help you adhere to PSR-0, PSR-1 and PSR-2. Perhaps the best tool is the Sublime Text plugin, PHPCS.
PHPCS - PHP Code Sniffer
The PHPCS plugin is the most helpful tool I've used, when it comes to getting code into shape. It allows you to not only ensure that your code follows the PSR standards, but it also uses PHP's linter to check for syntax errors. This is a great time saver; you no longer have to worry about syntax errors when you test your code in the browser.
Install the package through Sublime Package Control (it's called Phpcs), or, alternatively, with Git, using the following commands:
cd ~/Library/Application\ Support/Sublime\ Text\ 2/Packages/ git clone git://github.com/benmatselby/sublime-phpcs.git Phpcs
This installs the plugin, but you need a few dependencies before you can configure PHPCS. Once again, the easiest way to install them is with Composer. Browse to a directory of your choice and create a
composer.json file with the following JSON:
{ "name": "Nettuts PHPCS Demo", "require": { "squizlabs/php_codesniffer": "*", "fabpot/php-cs-fixer": "*", "phpmd/phpmd": "*" } }
This installs the three dependencies into the current folder. Open a terminal window to your installation location and type
composer install, and it will download the necessary packages.
Now you can configure the plugin in Sublime Text. Navigate to 'Preferences' > 'Package Settings' > 'PHP Code Sniffer' > 'Settings - User'.
The plugin needs to know where the three dependencies reside, as well as the standard that we want our code to adhere to:
{ "phpcs_additional_args": { "--standard": "PSR2", "-n": "" }, "phpcs_executable_path": "DEPENDENCY_PATH/vendor/bin/phpcs", "phpmd_executable_path": "DEPENDENCY_PATH/vendor/bin/phpmd", "php_cs_fixer_executable_path": "DEPENDENCY_PATH/vendor/bin/php-cs-fixer" }
These settings inform PHPCS that we want to ahere to the PSR2 standard and provide each dependency's path. Don't forget to replace
DEPENDENCY_PATH with your actual path.
Restart Sublime, and the code sniffer will scan your code when you save your PHP files.
Right-clicking in the editor will also list several new options, such as clearing error marks and attempting to fix the non-standard issues. However, considering that the point of this article is to get you used to the standard, I suggest manually fixing your code and avoiding the automatic fixer feature.
Conclusion
The PSR standards were created so that code could easily be reused from project to project, without sacrificing on code style consistency. While they may feel overwhelming at first, you can use the ideas and tools from this article to help you make the transition.
To reiterate one last time: nobody is forcing you to change the way that you code in PHP. It's simply a guide, originally meant for framework interoperability. That said, at Nettuts+, we consider it a best practice to follow. Now make up your own mind! If you have any questions, let's hear them below!
Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this postPowered by
| http://code.tutsplus.com/tutorials/psr-duh--net-31061 | CC-MAIN-2015-40 | refinedweb | 1,687 | 56.15 |
On 15 Sep 2003 at 11:11, David Abrahams wrote: > > #include <python.h> > ^^^^^^^^^^^^^^^^^^^ > > Don't do this. You should just include a Boost.Python header to get > python.h. When I get rid of that, the only crash I can reproduce > goes away. This is due to the debug-mode handling provided by > wrap_python.hpp as described in > I had already tested the program not including the python.h header but also crashed. The problem is that boost_python_debug.dll points to python23.dll instead of python23_d.dll. How have I to modify the project that builds boost_python_debug.dll in order to point to python23_d.dll instead of python23.dll? > > namespace py = boost::python; > > void func() > { > py::object main_module > = py::extract<py::object>(PyImport_AddModule("__main__")); > PyRun_SimpleString("a = 3"); > py::dict main_namespace(main_module.attr("__dict__")); > } This does not compile anyway, the first line gives this error: cannot convert from 'struct boost::python::extract<class boost::python::api::object>' to 'class boost::python::api::object' Thanks. David Lucena. | https://mail.python.org/pipermail/cplusplus-sig/2003-September/005195.html | CC-MAIN-2018-05 | refinedweb | 165 | 63.25 |
Web application technologies continue to move forward at a bristling pace -- and many of them based on HTML5 and/or JavaScript. As more and more HTML5 technologies are integrated into browsers, the amount of JavaScript that's required to support applications increases significantly. Technologies such as geolocation, local storage, canvas, web workers, WebSockets, and many others rely heavily on JavaScript. In addition to HTML5 technologies, the popularity of single-page applications, Ajax, Node.js, and other JavaScript frameworks continues to push JavaScript forward and make it increasingly more relevant to developers.
The ever-increasing usage of JavaScript code in applications makes reuse, maintainability, bugs, and other related areas crucial. This is especially true for enterprise-scale applications that could have thousands of lines of JavaScript code. Fortunately, developers have options for streamlining their JavaScript coding -- for example, by using any of several different JavaScript patterns, such as the Revealing Module Pattern and the Revealing Prototype Pattern, to structure code, and ECMAScript 6 will offer increased modularity in the future. However, ensuring that correct types (e.g., strings, numbers, Booleans) are being passed between functions can be tricky unless you add to the application robust unit tests using test frameworks such as QUnit or Jasmine. It goes without saying, though, that it would certainly be helpful to be able to structure code more easily and catch type issues up front.
To help developers who are writing "application-scale" JavaScript applications, Microsoft has released a new language called TypeScript, which addresses the aforementioned areas that can be problematic for JavaScript-based enterprise application development. Here I will introduce the fundamentals of TypeScript, describe the benefits it offers, and show how you can use TypeScript to generate JavaScript code. I'll also talk about how TypeScript can be used in several different editors, to give you code help and syntax highlighting.
Before jumping in, you should be aware that although TypeScript has several great features, it's definitely not for everyone or every project. The goal of this article isn't to convince you to use TypeScript instead of standard JavaScript. Throughout the article I'll describe different features that TypeScript offers and let you decide whether or not TypeScript is a good fit for your applications.
What Is TypeScript?
TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. In a nutshell, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. Here's the official definition from the TypeScript website: "TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source."
One of the best things about TypeScript is that it is a "superset" of JavaScript. It isn't a standalone language completely separated from JavaScript's roots. This means that you can place standard JavaScript code in a TypeScript file (a file with a .ts extension) and use it directly, which is important given all the existing JavaScript frameworks and files. Once a TypeScript file is saved, it can be compiled to JavaScript using TypeScript's tsc.exe compiler tool.
So what are some of the key features of TypeScript? First, TypeScript is built upon JavaScript, which makes it very easy for a developer to start using the language. Second, TypeScript provides built-in type support, meaning that you define variables and function parameters as being "string", "number", "bool", and others to avoid incorrect types being assigned to variables or passed to functions. Third, TypeScript provides a way to write more modular code by directly supporting class and module definitions, and it even provides support for custom interfaces. Finally, TypeScript integrates with several different tools, such as Visual Studio, Sublime Text, Emacs, and vi, to provide syntax highlighting, code help, build support, and more, depending on the editor.
Not only does TypeScript provide excellent tool support, you can also use TypeScript with existing JavaScript frameworks, such as Node.js and jQuery, and even catch type issues and provide enhanced code help. Special "declaration" files that have a d.ts extension are available for Node.js, jQuery, and other libraries out of the box. Visit the TypeScript page on CodePlex for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn't passed to a parameter that expects a number. Although declaration files certainly aren't required, TypeScript's support for declaration files makes it easier for you to catch problems up front while working with existing libraries such as jQuery. In the future, I expect TypeScript declaration files will be released for different HTML5 APIs, such as canvas, localStorage, and others.
Getting Started with TypeScript
One of the best ways to get started learning TypeScript is to visit the TypeScript Playground. The playground provides a way to experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once the TypeScript code is compiled. Figure 1 shows an example of the TypeScript Playground in action.
One of the first things that might stand out to you about the code shown in Figure 1 is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container, which helps tremendously with reuse and maintainability, especially in enterprise-scale JavaScript applications. Although you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes the task of defining classes quite easy, especially for developers who come from an object-oriented programming background.
As an example, let's look at the Greeter class in Listing 1, which is defined using TypeScript.
class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } }
Looking through the code, you'll notice that static types can be defined on variables and parameters (e.g., greeting: string), that constructors can be defined, and that functions can be defined -- for example, greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) and one that can help you identify bugs before even running the code. TypeScript supports many types, including primitive types such as string, number, bool, undefined, and null, as well as object literals and more complex types such as HTMLInputElement (for an tag). Custom types can be defined as well.
The JavaScript output generated by the TypeScript class in Listing 1 is shown in Listing 2.
var Greeter = (function () { function Greeter(message) { this.greeting = message; } Greeter.prototype.greet = function () { return "Hello, " + this.greeting; }; return Greeter; })();
Notice in the output the use of JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to remove the variables and functions from the global JavaScript scope. This is an important feature that helps you avoid naming collisions between variables and functions.
In cases where you'd like to wrap a class in a naming container (similar to a namespace in C# or a package in Java), you can use TypeScript's module keyword. Listing 3 shows an example of wrapping an AcmeCorp module around the Greeter class. To create a new instance of Greeter, you must now use the module name. This can help avoid naming collisions that could occur with the Greeter class.
module AcmeCorp { export class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } } var greeter = new AcmeCorp.Greeter("world");
In addition to defining custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript's extends keyword. Listing 4, the FinanceReport class automatically has access to the print() function in the base class. In this example, the FinanceReport class overrides the base class's print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class by using the super() call.
Listing 5 shows the JavaScript code that's generated from the TypeScript code shown in Listing 4. Notice that the code includes a variable named __extends that handles the inheritance functionality in the generated JavaScript.
var __extends = this.__extends || function (d, b) { function __() { this.constructor = d; } __.prototype = b.prototype; d.prototype = new __(); } var Report = (function () { function Report(name) { this.name = name; } Report.prototype.print = function () { alert("Report: " + this.name); }; return Report; })(); var FinanceReport = (function (_super) { __extends(FinanceReport, _super); function FinanceReport(name) { _super.call(this, name); } FinanceReport.prototype.print = function () { alert("Finance Report: " + this.name); }; FinanceReport.prototype.getLineItems = function () { alert("5 line items"); }; return FinanceReport; })(Report); var report = new FinanceReport("Month's Sales"); report.print(); report.getLineItems();
TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. Listing 6 as a result of the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without your having to explicitly write the code as with normal and intersect.
Although TypeScript has additional language features, defining static types and creating classes, modules, and interfaces are some of the key features it offers. Now that you've seen a few of the language features, let's look at how TypeScript can be used in editors and some of the benefits they provide.
Using TypeScript in Editors
Although you can use the TypeScript Playground to write TypeScript and generate the JavaScript code shown in the previous listings, you'll want to step up to a more robust editor at some point. This is especially true when you're building enterprise-scale JavaScript applications that involve a lot of code. Fortunately, TypeScript provides out-of-the-box support for a variety of editors -- as mentioned, Visual Studio 2012, Sublime Text, Emacs, vi, and others. To get started using TypeScript in these editors, visit the TypeScript website. By integrating TypeScript into an editor, you can get syntax highlighting, code help, and even build support (depending on the editor). Integrating TypeScript into your editor will help you more easily identify coding issues while writing code instead of after the fact.
Microsoft offers a TypeScript extension for Visual Studio 2012 that provides excellent IntelliSense and syntax highlighting to help you identify issues while you're coding -- for example, pointing out incorrect types being passed to functions. The extension also handles the task of automatically compiling .ts files to JavaScript when you save a TypeScript file, so that you don't have to use the tsc.exe command-line compiler tool. JavaScript files are automatically nested under TypeScript files, as shown in Figure 2, allowing you to access them easily and reference them from HTML, ASP.NET, or other types of web pages.
If you'd like to see the JavaScript that's generated as you add TypeScript code into a .ts file, download the Web Essentials extension for Visual Studio 2012. It provides a split-screen view of code with TypeScript on the left and JavaScript on the right, as shown in Figure 3. You can hide the JavaScript window when you don't want to see the generated JavaScript.
Visual Studio 2012 isn't the only game in town for TypeScript. Sublime Text has a syntax-highlighting file available as well, and you can even add a custom build system so that you can press Ctrl+B to compile a TypeScript file to JavaScript without having to resort to the command line and tsc.exe. Visit the TypeScript website to download a .zip file that contains a file named typescript.tmlanguage. Drop the file in the C:\Users\[user name]\AppData\Roaming\Sublime Text 2\Packages\User folder, and you're off and running with syntax highlighting and code help in Sublime Text.
To add a custom TypeScript build system into Sublime Text, you can perform the following steps (these steps are shown for the Windows version of Sublime Text). First, select Tools, Build System, New Build System from the Sublime Text menu. Next, copy the following code into the file that's created; then save the file.
{ "selector": "source.ts", "cmd": ["tsc ", "$file"], "file_regex": "^(.+?) \\((\\d+),(\\d+)\\): (.+)$" }
Once the new TypeScript build system is created, you can press Ctrl+B to compile a .ts file in Sublime Text to JavaScript.
Try Out TypeScript
This article has introduced you to some of the key features that the TypeScript language provides and showed how you can use TypeScript to write more modular JavaScript code. You've also seen how TypeScript can be used in different editors to help with syntax highlighting, to provide IntelliSense/code help, and to catch type errors more easily.
So is TypeScript right for you and your applications? You'll need to give it a spin to see what you think. If you're interested in learning more about different patterns that can be used to structure JavaScript code or more about the TypeScript language, check out the Structuring JavaScript Code and TypeScript Fundamentals courses available at Pluralsight.
Learn More About JavaScript:
- Calling a Server-Side Page Method from Client-Side JavaScript
- A Deeper Look at JavaScript Closures
- Explore the New World of JavaScript-Focused Client-Side Web Development
- Improve Your JavaScript Coding with Data-Oriented Programming
- JavaScript Closures
- Localize Your Client-Side JavaScript Applications
- Using MVVM in JavaScript with Knockout | https://www.itprotoday.com/web-development/build-enterprise-scale-javascript-applications-typescript | CC-MAIN-2018-51 | refinedweb | 2,255 | 53.81 |
I wanted to write a function that would take an object and convert it to an array that contains that object as a single element. It occurred to me that ...
What's the best way to convert an Object array to a Vector?
JDE < 1.5
public Vector getListElements()
{
Vector myVector = this.elements;
return myVector;
}
If i have an Array which contains the Strings 07:46:30 pm , 10:45:28 pm , 07:23:39 pm , .......
and I want to convert it into Time. How can i do ...
I am attempting to convert arrays of primitive double values to objects. As a result I am getting a "type mismatch error"
private double[]purchases;
private CreditCard[]purchases;
The library I am using is JSON.simple.
I am parsing a response from a query which succesfully returns an object containing an array of JSONObjects. I am now trying to convert ...
Hey i got myself confused with how to change this.damages = strArr[2]; to a double as i need to be able to mutiply the values within it. Any help would be great Java Code: import javax.swing.*; import java.awt.*; import java.awt.event.*; import java.io.*; import java.util.*; import java.util.Scanner; import javax.swing.JOptionPane; import javax.swing.plaf.ColorUIResource; import java.lang.*; import java.util.Collections; public class guiFinal extends JFrame { private ...
I'm currently dealing with JavaCV library, a porting (via JNI) of openCV libraries in java. The library works fine, but I've encountered an problem in void pointer conversions. I've this function cvCalcEigenObjects(int arg0, Pointer arg1, Pointer arg2, ... ) to work: Pointers required by the function are arrays of IplImage objects (the opencv image-representing objects). | http://www.java2s.com/Questions_And_Answers/Java-Collection/Array-Convert/object.htm | CC-MAIN-2013-48 | refinedweb | 283 | 60.01 |
5.16: Not Tracking the Blank Position
- Page ID
- 14507
def getBlankPosition(board): # Return the x and y of board coordinates of the blank space. for x in range(BOARDWIDTH): for y in range(BOARDHEIGHT): if board[x][y] == BLANK: return (x, y)
Whenever our code needs to find the XY coordinates of the blank space, instead of keeping track of where the blank space is after each slide, we can just create a function that goes through the entire board and finds the blank space coordinates. The
None value is used in the board data structure to represent the blank space. The code in
getBlankPosition() simply uses nested
for loops to find which space on the board is the blank space. | https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/05%3A_Slide_Puzzle/5.16%3A_Not_Tracking_the_Blank_Position | CC-MAIN-2021-31 | refinedweb | 122 | 58.96 |
This is the third part of the Twitter Java client tutorial article series! In Build your own Application to access Twitter using Java and NetBeans: Part 2 we:
- Created a twitterLogin dialog to take care of the login process
- Added functionality to show your 20 most recent tweets right after logging in
- Added the functionality to update your Twitter status
Showing your Twitter friends’ timeline
- Open your NetBeans IDE along with your SwingAndTweet project, and make sure you’re in the Design View.
- Select the Tabbed Pane component from the Palette panel and drag it into the SwingAndTweetUI JFrame component:
- A new JTabbedPane1 container will appear below the JScrollPane1 control in the Inspector panel. Now drag the JScrollPane1 control into the JTabbedPane1 container:
- The jScrollPane1 control will merge with the jTabbedPane1 and a tab will appear. Double-click on the tab, replace its default name –tab1– with Home, and press Enter:
- Resize the jTabbedPane1 control so it takes all the available space from the main window:
- Now drag a Scroll Pane container from the Palette panel and drop it into the white area of the jTabbedPane1 control:
- A new tab will appear, containing the new jScrollPane2 object you’ve just dropped in. Now drag a Panel container from the Palette panel and drop it into the white area of the jTabbedPane1 control:
- A JPanel1 container will appear inside the jScrollPane2 container, as shown in the next screenshot:
- Change the name of the new tab to Friends and then click on the Source tab to change to the Source view. Once your app code shows up, locate the btnLoginActionPerformed method and type the following code at the end of this method, right below the jTextArea1.updateUI() line:
//code for the Friends timeline
try {
java.util.List<Status> statusList = twitter.getFriendsTimeline();
jPanel1.setLayout(new GridLayout(statusList.size(),1));
for (int i=0; i<statusList.size(); i++) {
statusText = new JLabel(String.valueOf(statusList.get(i).getText()));
statusUser = new JLabel(statusList.get(i).getUser().getName());
JPanel individualStatus = new JPanel(new GridLayout(2,1));
individualStatus.add(statusUser);
individualStatus.add(statusText);
jPanel1.add(individualStatus);
}
} catch (TwitterException e) {
JOptionPane.showMessageDialog(null, "A Twitter error occurred!"); }
jPanel1.updateUI();
The next screenshot shows how the code in your btnLoginActionPerformed method should look after adding the code:
One important thing to notice is that there will be 6 error icons, because we still need to declare some variables and add some import statements. Scroll up the code window until you locate the import twitter4j.*; and the import javax.swing.JOptionPane; lines, and add the following lines right after them:
import java.awt.GridLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;
- Now scroll down the code until you locate the Twitter twitter; line you added in Swinging and Tweeting with Java and NetBeans: Part 2 of this tutorial series and add the following lines:
JLabel statusText;
JLabel statusUser;
- If you go back to the buttonUpdateStatusActionPerformed method, you’ll notice the errors have disappeared. Now everything is ready for you to test the new functionality in your Twitter client! Press F6 to run your SwingAndTweet application and log in with your Twitter credentials. The main window will show your last 20 tweets, and if you click on the Friends tab, you will see the last 20 tweets of the people you’re following, along with your own tweets:
- Close your SwingAndTweet application to return to NetBeans.
Let’s examine what we did in the previous exercise. On steps 2-5 you added a JTabbedPane container and created a Home tab where the JScrollPane1 and JTextArea1 controls show your latest tweets, and then on steps 6-8 you added the JPanel1 container inside the JScrollPane2 container.
On step 9 you changed the name of the new tab to Friends and then added some code to show your friends’ latest tweets. As in previous exercises, we need to add the code inside a try-catch block because we are going to call the Twitter4J API to get the last 20 tweets on your friends timeline.
The first line inside the try block is:
java.util.List<Status> statusList = twitter.getFriendsTimeline();
This line gets the 20 most recent tweets from your friends’ timeline, and assigns them to the statusList variable. The next line,
jPanel1.setLayout(new GridLayout(statusList.size(),1));
sets your jPanel1 container to use a layout manager called GridLayout, so the components inside jPanel1 can be arranged into rows and columns. The GridLayout constructor requires two parameters; the first one defines the number of rows, so we use the statusList.size() function to retrieve the number of tweets obtained with the getFriendsTimeline() function in the previous line of code. The second parameter defines the number of columns, and in this case we only need 1 column.
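If you want to see those two constructor parameters in isolation, here is a tiny stand-alone sketch, separate from the tutorial project, that builds the same kind of layout with a hard-coded row count instead of statusList.size():

```java
import java.awt.GridLayout;

public class GridLayoutDemo {
    public static void main(String[] args) {
        // One row per tweet and a single column -- the same shape the
        // tutorial builds with new GridLayout(statusList.size(), 1).
        GridLayout layout = new GridLayout(20, 1);
        System.out.println(layout.getRows());    // 20
        System.out.println(layout.getColumns()); // 1
    }
}
```

Because every cell in a GridLayout has the same size, each tweet panel will get an equal slice of the vertical space.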
The next line,
for (int i=0; i<statusList.size(); i++) {
starts a for loop that iterates through all the tweets obtained from your friends’ timeline. The next 6 lines are executed inside the for loop. The next line in the execution path is
statusText = new JLabel(String.valueOf(statusList.get(i).getText()));
This line assigns the text of an individual tweet to a JLabel control called statusText. You can omit the String.valueOf function in this line because getText() already returns a string value – I used it because at first I was having trouble getting NetBeans to compile this line; I still haven't found out why, but as soon as I have an answer, I'll let you know. As you can see, the statusText JLabel control was created programmatically; this means we didn't use the NetBeans GUI interface.
The next line,
statusUser = new JLabel(statusList.get(i).getUser().getName());
creates a JLabel component called statusUser, gets the name of the user that wrote the tweet through the statusList.get(i).getUser().getName() method and assigns this value to the statusUser component. The next line,
JPanel individualStatus = new JPanel(new GridLayout(2,1));
creates a JPanel container named individualStatus to contain the two JLabels we created in the last two lines of code. This panel has a GridLayout with 2 rows and one column. The first row will contain the name of the user that wrote the tweet, and the second row will contain the text of that particular tweet. The next two lines,
individualStatus.add(statusUser);
individualStatus.add(statusText);
add the name of the user (statusUser) and the text of the individual tweet (statusText) to the individualStatus container, and the next line,
jPanel1.add(individualStatus);
adds the individualStatus JPanel component – which contains the username and text of one individual tweet – to the jPanel1 container. This is the last line of code inside the for loop. The catch block shows an error message in case an error occurs when executing the getFriendsTimeline() function, and the jPanel1.updateUI(); line updates the jPanel1 container so it shows the most recent information added to it.
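Putting all of those pieces together, here is a compact, Twitter-free sketch of the same panel-per-tweet pattern. The user names and tweet texts are hard-coded sample data (not real Twitter output), so you can run it without logging in and without the Twitter4J library:

```java
import java.awt.GridLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;

public class TimelinePanelDemo {
    public static void main(String[] args) {
        // Sample data standing in for statusList -- each row is
        // {user name, tweet text}.
        String[][] tweets = {
            {"alice", "Hello from NetBeans!"},
            {"bob",   "Swing and Twitter4J are fun."}
        };
        // One row per tweet, one column -- the same layout as jPanel1.
        JPanel timeline = new JPanel(new GridLayout(tweets.length, 1));
        for (String[] tweet : tweets) {
            // Two rows: user name on top, tweet text below.
            JPanel individualStatus = new JPanel(new GridLayout(2, 1));
            individualStatus.add(new JLabel(tweet[0]));
            individualStatus.add(new JLabel(tweet[1]));
            timeline.add(individualStatus);
        }
        System.out.println(timeline.getComponentCount()); // 2
    }
}
```

In the real application, the only difference is that the sample array is replaced by the List<Status> returned by getFriendsTimeline().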
Now you can see your friends’ latest tweets along with your own tweets, but we need to improve the way tweets are displayed, don’t you think so?
Improving the way your friends’ tweets are displayed
For starters, let’s change some font attributes to show the user name in bold style and the text of the tweet in plain style. Then we’ll add a black border to separate each individual tweet.
- Add the following line below the other import statements in your code:
import java.awt.Font;
- Scroll down until you locate the btnLoginActionPerformed method and add the following two lines below the statusUser = new JLabel(statusList.get(i).getUser().getName()) line:
Font newLabelFont = new Font(statusUser.getFont().getName(),Font.PLAIN,statusUser.getFont().getSize());<!-- This is one line -->
statusText.setFont(newLabelFont);
- The following screenshot shows the btnLoginActionPerformed method after adding those two lines:
- Press F6 to run your SwingAndTweet application. Now you will be able to differentiate the user name from the text of your friends’ tweets:
- And now let’s add a black border to each individual tweet. Scroll up the code until you locate the import declarations and add the following lines below the import statement you added on step 1 of this exercise:
import javax.swing.BorderFactory;
import java.awt.Color;
- Scroll down to the btnLoginActionPerformed method and add the following line right after the individualStatus.add(statusText) line:
individualStatus.setBorder(BorderFactory.createLineBorder(Color.black));
- The next screenshot shows the appearance of your friends’ timeline tab with a black border separating each individual tweet:
Cool, huh? Now let’s examine the code you added in the previous exercise. On step 1 you added the import java.awt.Font line to use the Font class in your application. The next line you added,
Font newLabelFont = new Font(statusUser.getFont().getName(),Font.PLAIN,statusUser.getFont().getSize());
lets you create a new Font object with three parameters. The first parameter defines the name of the new font. Since we’re going to use the same font, we use statusUser.getFont().getName() to get its name from the statusUser component. The second parameter defines the style of the new font. In this case we’re using Font.PLAIN to define a plain style. And the third parameter is the size of the font. Since we don’t want to change the size, we use statusUser.getFont().getSize()) to get the size of the current font. So basically we just changed the style of the font, from bold to plain.
The next line, statusText.setFont(newLabelFont), changes the font of the statusText component to show the text of an individual tweet in a plain style. The user name will show in bold style, since we didn’t change the font of the statusUser component.
On step 5 you added two import statements to use the BorderFactory and Color classes in your application. Then, on step 6, the
individualStatus.setBorder(BorderFactory.createLineBorder(Color.black))
line adds a black line border to the individualStatus JPanel component using the createLineBorder method from the BorderFactory class. The black object from the Color class defines the color of the border. You can use other predefined objects like Color.yellow, Color.white, etc., depending on which color you want to use.
Adding more information to your friends’ timeline
Now let’s see how to add the date and time of each individual tweet in your friends’ timeline.
- Add the following line below the other import statements in your code:
import java.util.Date;
- Scroll down to the btnLoginActionPerformed method and add the following two lines below the statusText.setFont(newLabelFont) line:
Date statusDate = statusList.get(i).getCreatedAt();
statusUser.setText(statusUser.getText() + " - " + String.valueOf(statusDate));
- Run your application and log in. The date of creation of each tweet will show up next to the user name that wrote it:
- You can close your SwingAndTweet application now.
In this short exercise you saw how to show the creation date of each tweet from your friends’ timeline.
The first line of code we added,
import java.util.Date;
lets you use the Date class in your SwingAndTweet application. The next line,
Date statusDate = statusList.get(i).getCreatedAt();
creates a Date object named statusDate, and then gets the date of creation of each individual tweet via the getCreatedAt() method from the twitter4j.Status interface. The next line,
statusUser.setText(statusUser.getText() + " - " + String.valueOf(statusDate));
converts the date stored in the statusDate object to a string via the String.valueOf method and then adds it to the statusUser JLabel component via the setText method, along with a "-" character to separate the user name from the creation date.
That’s all for now! I hope you enjoyed this third part of the article series, and stay tuned because one of the cool tricks I’m going to show you in the next part of this series is how to enable the hyperlinks in your SwingAndTweet application, so you can click on one of them and a web browser window will open up automatically, just like when using the Twitter web interface. I’m also going to show you how to add the picture from the user’s profile for each individual tweet, along with some other cool stuff.
Summary
We added one more functionality to your SwingAndTweet application- added friends timeline. We learnt. | https://www.packtpub.com/books/content/build-your-own-application-access-twitter-using-java-and-netbeans-part-3 | CC-MAIN-2016-30 | refinedweb | 2,046 | 54.73 |
Common base class for all row-containing log events. More...
#include <log_event.h>
Common base class for all row-containing log events.
RESPONSIBILITIES
Encode the common parts of all events containing rows, which are:
Virtual inheritance is required here to handle the diamond problem in the class Write_rows_log_event, Update_rows_log_event and Delete_rows_log_event. The diamond structure is explained in
Write_rows_log_event,
Update_rows_log_event,
Delete_rows_log_event
Enumeration of the errors that can be returned.
Populates the m_distinct_keys with unique keys to be modified during HASH_SCAN over keys.
Does the cleanup.
Primitive to apply an event to the database.
This is where the change to the database is made.
When using RBR and MyISAM MERGE tables the base tables that make up the MERGE table can be appended to the list of tables to lock.
Thus, we just check compatibility for those that tables that have a correspondent table map event (ie, those that are actually going to be accessed while applying the event). That's why the loop stops at rli->tables_to_lock_count .
NOTE: The base tables are added here are removed when close_thread_tables is called.
Skip update rows events that don't have data for this slave's table.
If there are no columns marked in the read_set for this table, that means that we cannot lookup any row using the available BI in the binary log. Thence, we immediately raise an error: HA_ERR_END_OF_FILE.
Reimplemented from Log_event.
Commodity wrapper around do_exec_row(), that deals with resetting the thd reference in the table.
Populates the m_hash when using HASH_SCAN.
Thence, it:
Implementation of the hash_scan and update algorithm.
It collects rows positions in a hashtable until the last row is unpacked. Then it scans the table to update and when a record in the table matches the one in the hashtable, the update/delete is performed.
Implementation of the index scan and update algorithm.
It uses PK, UK or regular Key to search for the record to update. When found it updates it.
Private member function called after updating/deleting a row.
It performs some assertions and more importantly, it updates m_curr_row so that the next row is processed during the row execution main loop (
Rows_log_event::do_apply_event()).
This member function scans the table and applies the changes that had been previously hashed.
As such, m_hash MUST be filled by do_hash_row before calling this member function.
The do..while loop takes care of the scenario of same row being updated more than once within a single Update_rows_log_event by performing the hash lookup for the updated_row(by taking the AI stored in table->record[0] after the ha_update_row()) when table has no primary key.
This can happen when update is called from a stored function. Ex: CREATE FUNCTION f1 () RETURNS INT BEGIN UPDATE t1 SET a = 2 WHERE a = 1; UPDATE t1 SET a = 3 WHERE a = 2; RETURN 0; END
If there are collisions we need to be sure that this is indeed the record we want. Loop through all records for the given key and explicitly compare them against the record we got from the storage engine.
We found the entry we needed, just apply the changes.
At this point, both table->record[0] and table->record[1] have the SE row that matched the one in the hash table.
Thence if this is a DELETE we wouldn't need to mess around with positions anymore, but since this can be an update, we need to provide positions so that AI is unpacked correctly to table->record[0] in UPDATE implementation of do_exec_row().
if the rbr_exec_mode is set to Idempotent, we cannot expect the hash to be empty. In such cases we count the number of idempotent errors and check if it is equal to or greater than the number of rows left in the hash.
Reset the last positions, because the positions are lost while handling entries in the hash..
Implementation of the legacy table_scan and update algorithm.
For each unpacked row it scans the storage engine table for a match. When a match is found, the update/delete operations are performed.
unpack the before image
save a copy so that we can compare against it later
The method either increments the relay log position or commits the current statement and increments the master group position if the event is STMT_END_F flagged and the statement corresponds to the autocommit query (i.e replicated without wrapping in BEGIN/COMMIT)
Reimplemented from Log_event.
Private member function called while handling idempotent errors.
Helper function to check whether there is an auto increment column on the table where the event is to be applied.
Helper function to check whether the storage engine error allows for the transaction to be retried or not.
Fetches next row.
If it is a HASH_SCAN over an index, it populates table->record[0] with the next row corresponding to the index. If the indexes are in non-contigous ranges it fetches record corresponding to the key value in the next range.
Initializes scanning of rows.
Opens an index and initializes an iterator over a list of distinct keys (m_distinct_keys) if it is a HASH_SCAN over an index or the table if its a HASH_SCAN over the table.
Compares the table's read/write_set with the columns included in this event's before-image and/or after-image.
Each subclass (Write/Update/Delete) implements this function by comparing on the image(s) pertinent to the subclass.
Implemented in Write_rows_log_event, Update_rows_log_event, and Delete_rows_log_event.
Seek past the after-image of an update event, in case a row was processed without reading the after-image.
An update event may process a row without reading the after-image, e.g. in case of ignored or idempotent errors. To ensure that the read position for the next row is correct, we need to seek past the after-image.
Reimplemented in Update_rows_log_event.
Unpack the current row image from the event into m_table->record[0].
Updates the generated columns of the
TABLE object referenced by
m_table, that have an active bit in the parameter bitset
fields_to_update.
Bitmap for columns available in the after image, if present.
These fields are only available for Update_rows events. Observe that the width of both the before image COLS vector and the after image COLS vector is the same: the number of columns of the table on the master.
A spare buffer which will be used when saving the distinct keys for doing an index scan with HASH_SCAN search algorithm.
Container to hold and manage the relevant TABLE fields.
Hash table that will hold the entries for while using HASH_SCAN algorithm to search and update/delete rows.
Bitmap denoting columns available in the image as they appear in the table setup.
On some setups, the number and order of columns may differ from master to slave so, a bitmap for local available columns is computed using
Replicated_columns_view utility class.
Bitmap denoting columns available in the after-image as they appear in the table setup.
On some setups, the number and order of columns may differ from master to slave so, a bitmap for local available columns is computed using
Replicated_columns_view utility class.
The algorithm to use while searching for rows using the before image.
This bitmap is used as a backup for the write set while we calculate the values for any hidden generated columns (functional indexes).
In order to calculate the values, the columns must be marked in the write set. After the values are calculated, we set the write set back to its original value. | https://dev.mysql.com/doc/dev/mysql-server/latest/classRows__log__event.html | CC-MAIN-2022-40 | refinedweb | 1,248 | 63.7 |
Import
The
import statement is used to import functions and other definitions from another module. In the simplest case, you just write
import Data.Maybe
to import the named module (in this case
Data.Maybe).
However, in more complicated cases, the module can be imported qualified, with or without hiding, and with or without renaming. Getting all of this straight in your head is quite tricky, so here is a table (lifted directly from the language reference manual) that roughly summarises the various possibilities:
Suppose that module
Mod exports three functions named
x,
y and
z. In that case:
Note also that,. | http://www.haskell.org/haskellwiki/Import | crawl-001 | refinedweb | 102 | 54.52 |
ncl_mplndr man page
MPLNDR — Reads a specified EZMAP database and draws boundary lines from it.
Synopsis
CALL MPLNDR (FLNM,ILVL)
C-Binding Synopsis
#include <ncarg/ncargC.h>
void c_mplndr (char *flnm, int ilvl)
Description
- FLNM
(an input expression of type CHARACTER) specifies the name of the database to be used. MPLNDR will first look for the files of the specified database in the current working directory; if the files are not found there, MPLND.)
C-Binding Description
The C-binding argument description is the same as the FORTRAN argument description.
Usage
MPLNDR draws the lines defined by the map database whose name is FLNM; the EZMAP internal parameter 'DO' determines whether solid lines or dotted lines are used. If EZMAP currently needs initialization, MPLNDR does nothing except read some information from the map database, so that subsequent calls to EZMAPB functions will work properly.
The outlines are drawn using calls to MAPIT and MAPIQ. By default, MPLNDR, MAPLOT restores the original value of 'DL' and the original dash pattern. A user version of the routine MPCHLN may be supplied to change the way in which the outlines are drawn.
Examples
Use the ncargex command to see the following relevant example: tezmpb.
Access
To use MPLNDR or c_mplndr,usr,. | https://www.mankier.com/3/ncl_mplndr | CC-MAIN-2017-39 | refinedweb | 208 | 62.88 |
This is the mail archive of the libc-hacker@sources.redhat.com mailing list for the glibc project.
Note that libc-hacker is a closed list. You may look at the archives of this list, but subscription and posting are not open.
Hi! This patch prevents DT_TEXTREL PIEs just because Scrt1.o was linked in. 2003-07-07 Jakub Jelinek <jakub@redhat.com> * sysdeps/powerpc/powerpc64/elf/start.S: Put L(start_address) into .data.rel.ro.local section if PIC to avoid DT_TEXTREL. --- libc/sysdeps/powerpc/powerpc64/elf/start.S.jj 2002-12-09 15:37:21.000000000 -0500 +++ libc/sysdeps/powerpc/powerpc64/elf/start.S 2003-07-07 17:41:46.000000000 -0400 @@ -1,5 +1,5 @@ /* Startup code for programs linked with GNU libc. PowerPC64 version. - Copyright (C) 1998, 1999, 2000, 2001, 2002 Free Software Foundation, Inc. + Copyright (C) 1998,1999,2000,2001,2002,2003 Free Software Foundation, Inc. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -21,7 +21,11 @@ #include "bp-sym.h" /* These are the various addresses we require. */ +#ifdef PIC + .section ".data.rel.ro.local" +#else .section ".rodata" +#endif .align 3 L(start_addresses): .quad 0 /* was _SDA_BASE_ but not in 64-bit ABI*/ Jakub | http://www.sourceware.org/ml/libc-hacker/2003-07/msg00002.html | CC-MAIN-2019-22 | refinedweb | 214 | 63.56 |
Posted On Thursday, September 23, 2004 7:39 AM
Posted On Monday, September 20, 2004 11:43 PM
Posted On Friday, September 10, 2004 11:07 PM
So far, I like C# a lot. However, sometimes, I feel like C# is inferior to VB.NET. VB.NET provides financial library within Microsoft.VisualBasic.dll while C# doesn't have one. I wouldn't argue that you can add a reference to that dll and use it in C#. I just think that it should be available under System.Math or a more generic namespace.
Posted On Friday, September 10, 2004 11:03 PM
Apple G5 is going to release soon. I think it's a must have. It does really have a nice design.
Posted On Friday, September 10, 2004 10:33 PM | http://geekswithblogs.net/mrnat/archive/2004/09.aspx | CC-MAIN-2019-04 | refinedweb | 133 | 68.87 |
#include <primIndex.h>
Outputs of the prim indexing procedure.
Definition at line 276 of file primIndex.h.
Enumerator whose enumerants describe the payload state of this prim index. NoPayload if the index has no payload arcs, otherwise whether payloads were included or excluded, and if done so by consulting either the cache's payload include set, or determined by a payload predicate.
Definition at line 283 of file primIndex.h.
Appends the outputs from
childOutputs to this object, using
arcToParent to connect
childOutputs' prim index to this object's prim index.
Returns the node in this object's prim index corresponding to the root node of
childOutputs' prim index.
Swap content with
r.
Definition at line 305 of file primIndex.h.
List of all errors encountered during indexing.
Definition at line 292 of file primIndex.h.
A list of names of fields that were composed to generate dynamic file format arguments for a node in primIndex. These are not necessarily fields that had values, but is the list of all fields that a composed value was requested for.
Definition at line 302 of file primIndex.h.
Indicates the payload state of this index. See documentation for PayloadState enum for more information.
Definition at line 296 of file primIndex.h.
Prim index describing the composition structure for the associated prim.
Definition at line 289 of file primIndex.h. | https://www.sidefx.com/docs/hdk/class_pcp_prim_index_outputs.html | CC-MAIN-2021-17 | refinedweb | 227 | 51.44 |
void loop () { if (Serial.available() > 0) { incomingByte = Serial.read(); if (incomingByte >= 36 && incomingByte <= 96) { // note range is between 36 and 96 prevNote = incomingByte; } else if (incomingByte == 0) { blink(); output += prevNote; output += "-"; } }}
#include <MIDI.h>void setup() {MIDI.begin();MIDI.turnThruOn();}void loop() {MIDI.read();}
I had kuk's MIDI solution wired on a second hand Yamaha keyboard I bought, and was only able to receive a 144 note on signal every 4 or so seconds.I was pretty sure it had to be the same reason, as the original poster also mentioned a Yamaha PSR series keyboard.It turns out that Yamaha does kind of ignore MIDI standards.
The signals I received on playing "C D E" for example were:14420 80 20 022 80 22 024 80 24 0So the information about all the notes is being passed through, just not always starting with a 144 note on signal.
No idea why.
So my code ignores the 144 signals, but every time it detects a 0 signal, it adds the previously read byte to the output, as that one represents the note that has just been hit. This doesn't yet take velocity into account, a slightly more complex code is needed for that, but for me this was sufficient.Hope this helps!
The only restriction is: You must have RX disconnected from the midi-circuit while updating the program via the Arduino IDE and USB - it seems to interfere with the USB connection and disturb the transfer. My try to fix this will be to use other pins for the Midi-connection, will test that within the next days.
use software serial (ewww)
physically disconnect MIDI input each time you program
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=64546.msg961101 | CC-MAIN-2015-11 | refinedweb | 318 | 62.98 |
Before Knowing about framer motion first we should know why should we need Framer motion? Framer motion improves upon and simplifies the API in a way that couldn't have been done without breaking changes and rewriting. As a react developer, I found it very exciting as I can create animations using technologies that I’m already familiar with.
What is Framer Motion?
Framer Motion is a production-ready motion library to create animations using React. It brings declarative animations, effortless layout transitions, and gestures while maintaining HTML and SVG semantics.
How to use Framer Motion in our Project
Install framer motion using in your project using
npm install framer-motion
Remember one thing Framer-Motion requires that you are using React version 16.8 or higher.
Once installed, you can import Motion into your components via framer-motion.
import { motion } from "framer-motion"
That’s it. Now you are able to use framer-motion in your Project. Let's see basic syntaxes of Framer Motion with few examples.
Animation
Animations are driven by Framer Motion’s versatile animate prop. It can cater to the very simplest, and most advanced use-cases.
Motion components are one of the core elements in Framer Motion, and most animations in Framer Motion are controlled via the motion component’s flexible animate property.*
Depending on your requirements, you can use the animate property in a variety of ways:
<motion.div animate={{ x: 200,rotate:180 }} />
In the above example, the div tag will move 200px to the right and rotate 180 degrees.
Transition
This transition can be optionally configured using Popmotion’s familiar tween, spring, and inertia animations.
Physical properties like x and scale are animated via spring by default, while values like opacity and color are animated with a tween. You can also use the transition prop to change properties like duration, delay, and stiffness of the animation.
<motion.div animate={{ x: 200,rotate:180 }} transition={{ duration: 2, repeat: Infinity }} />
In the above example, the div tag will move 200px to the right and rotate 180 degrees. the duration and repeat props are used to keep that animation in a loop with a duration of 2 seconds.
Keyframes
Values in animate can also be set as a series of keyframes. This will animate through each value in a sequence. By default, a keyframes animation will start with the first item in the array.
<motion.div animate={{ scale: [2, 2, 1] }} />
In the above example, div will scale through each value in the sequence.
Variants
Variants are a collection of pre-defined target objects passed into motion components by using the variants prop. Target objects are useful for simple, single-component animations. But sometimes we want to create animations that propagate throughout the DOM and orchestrate those animations in a declarative way.
By defining variants on a set of components, providing a variant label to animate will propagate this animation through all the children like additional transition props such as when, delayChildren, and staggerChildren.
import * as React from "react"; import { motion } from "framer-motion"; const variants = { open: { transition: { staggerChildren: 0.09, delayChildren: 0.3 } }, closed: { transition: { staggerChildren: 0.06, staggerDirection: -1 } } }; export const Navigation = () => ( <motion.ul variants={variants}> {itemIds.map(i => ( <MenuItem i={i} key={i} /> ))} </motion.ul> ); const itemIds = [ 1, 2, 3, 4];
In the above example, staggerChildren and delayChildren props are used to delay the transition of the menu items. In addition, the staggerDirection prop is used to specify the direction of the stagger.
Gestures
Motion extends the basic set of event listeners provided by React with a simple yet powerful set of UI gesture recognizers. It currently has support for hover, tap, pan, and drag gesture detection. Each gesture has a series of event listeners that you can attach to your motion component.
Hover
The hover gesture detects when a pointer hovers over or leaves a component. There are three hover props available — whileHover, onHoverStart(event, info), and onHoverEnd(event, info).
<motion.div whileHover={{ scale: 1.2 }} onHoverStart={() => console.log("starts")} onHoverEnd={() => console.log("ends")} />
n the above example, when we will hover over the div its size will increase and when we start hovering in the console it will print 'start', and when we stop hovering it will print 'end'.
Focus
The focus gesture detects when a component gains or loses focus through a click, focus, or by tabindex. the focus prop is whileFocus.
<motion.div whileFocus={{ scale: 1.5 }} />
In the above example, when we will focus on the div its size will increase.
Tap
The tap gesture detects when a pointer presses down and releases on the same component.
There are three hover props available — whiletap,onTap(event,info),onTapStart(event, info),onTapCancel(event, info)
<motion.div whileTap={{ scale: 1.2 }} />
In the above example, when we will Tap on the div its size will increase.
Pan
The pan gesture recognizes when a pointer presses down on a component and moves further than 3 pixels. The pan gesture is ended when the pointer is released.
There are three hover props available — onPanStart,onPan,onPanEnd
For pan gestures to work correctly with touch input, the element needs touch scrolling to be disabled on either x/y or both axis with the touch-action CSS rule
function onPan(event, info) { console.log(info.point.x, info.point.y) } <motion.div onPan={onPan} />
In the above example, A Info object containing x and y values for point is relative to the device or page. delta is Distance moved since the last event. offset is Offset from the original pan event. velocity is the Current velocity of the pointer.
Drag
The drag gesture follows the rules of the pan gesture but applies pointer movement to the x and/or y-axis of the component.
<motion.div drag dragTransition={{ min: 0, max: 100, bounceStiffness: 100 }} > Drag </motion.div>
In the above example, dragging is enable for both x-direction and y-directions. If you want to limit it only to the x-direction, you can set the drag property value to “x”: drag=“x”.
Hopefully, It helps you to start using framer motion in your React project for building some really cool animations. If any queries regarding this write them down in the comment below. Thank you for your time and I hoped my blog is helpful for you.
Happy Learning!!
Top comments (5)
Nice explanation.
Will be better if you have showcases for each one.
Thank you 😊
Keep up the good work, waiting for more
Showcases would've been perfect :)
I so wanted this lib for #Angular | https://practicaldev-herokuapp-com.global.ssl.fastly.net/sreashi/react-animations-using-framer-motion-eg2 | CC-MAIN-2022-40 | refinedweb | 1,095 | 57.06 |
When unit testing an MVC controller, stop reinventing the wheel and take a look at the MvcContrib TestHelper. Add a reference to MvcContrib.TestHelper.dll, and unit testing the different functionality of your controller and its actions should be a walk in the park. Yes, using Moq to build up a mocked object is possible, but since the MvcContrib TestHelper supports much of the needed functionality, why not start using it?
How to start using the MvcContrib TestHelper?
Download the MvcContrib TestHelper with the MvcContrib release package from this location: Mvc Contrib
Install the MVC Contrib zipped file. Actually there is no installer, just a bunch of DLL files. It is the file MvcContrib.TestHelper.dll that you are interested in. Add this DLL as a reference to the Visual Studio 2010 test project in your solution (remember to add the file to your solution with Copy to Output Directory set, and modify your WiX scripts if needed), then create a new Basic Unit Test or Unit Test and add the following using statement at the top:
using MvcContrib.TestHelper;
Verify that the view result is returned and view name and model type matches in a unit test for a MVC action
Given that we have modified the Index mvc view to be strongly typed by a class Bird in our Models folder, the following code can verify in a single assertion that the returned ActionResult is a ViewResult, that the rendered view name is "Index" and that the model, that is the strongly type that the view is associated with is a given type as follows:
[TestMethod]
public void IndexTestViewResult()
{
    var controller = new HomeController();
    ActionResult result = controller.Index();
    result.AssertViewRendered().ForView("Index").WithViewData<Bird>();
}
First off, I just new up the controller I want to test, HomeController in this case.
Then, using the extension methods in MvcContrib.TestHelper, I call AssertViewRendered(), which simply asserts that the ActionResult is of type ViewResult. Further, I check with the ForView extension method (take note of the chaining going on here) that the view name is Index. The third part of this single assertion, WithViewData, checks that the view data is of type Bird.
In this particular example, make note that my Index method looks like the following in my HomeController:
public class HomeController : Controller
{
public ActionResult Index()
{
ViewBag.Message = "Welcome to ASP.NET MVC!";
return View("Index", new Bird { SpeciesName = "Crow",
CanFly = true, WeightInGrams = 400, WingspanInCentimetres = 80 });
}
.
.
.
}
Obviously the Bird class is just a simple sample in this case. The second argument could be assigned to a variable called model and passed in, for cleaner code. The point I want to make here is that when you return a ViewResult, specify the view name explicitly, as it can then be verified with the ForView extension method in a unit test.
Towards more complex controller action tests in MVC
Just to get started with a custom HttpContext, I wanted to test out the TestControllerBuilder class in MvcContrib.TestHelper. With this class, I adjust the unit test for the index action method a bit:
[TestMethod]
public void IndexTestViewResult()
{
    TestControllerBuilder builder = new TestControllerBuilder();
    var controller = builder.CreateController<HomeController>();
    builder.HttpContext.Cache["test"] = "test1";
    builder.InitializeController(controller);
    ActionResult result = controller.Index();
    result.AssertViewRendered().ForView("Index").WithViewData<Bird>();
}
Now, I use the TestControllerBuilder class to create a controller of type HomeController: the CreateController generic factory method instantiates the HomeController, with the home controller passed in as the generic type argument. The builder is then tried out by setting a test key-value pair in HttpContext.Cache, and the controller is initialized with the TestControllerBuilder. The unit test passes. Obviously, this just shows how the HttpContext can be mocked using the TestControllerBuilder.
One important note here is the strange error that was thrown the first time I ran the test:
Test method TestMvcContribUnitTests.Tests.Controllers.HomeControllerTest.IndexTestViewResult threw exception:
System.InvalidOperationException: Unable to create a factory. Be sure a mocking framework is available.
To avoid this error and get the test to pass, also add a reference to a mocking framework. By glancing into the code at the MVC Contrib CodePlex page, I see that Rhino Mocks and Moq are the two mocking frameworks with support for TestHelper in MvcContrib. Since I have worked most with Moq, I just added a reference to Moq.dll (copy a reference to the Moq.dll assembly file, with Copy Local set to true), and then the test passed.
This shows how to get an HttpContext which can be adjusted (Request, Response, Cache and so on), using Moq and MvcContrib TestHelper. The developer can then focus on the task at hand, namely writing efficient, relevant unit tests, instead of reinventing the wheel to get HttpContext up and running in Moq by hand.
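For readers outside .NET, the pattern TestControllerBuilder implements is worth seeing in miniature: build the controller normally, then inject a fake, fully writable context before calling the action. The following Python sketch uses made-up classes (not MvcContrib's API) just to show the shape of it:

```python
class FakeHttpContext:
    """Stand-in context whose every part (cache, session, ...) is writable."""
    def __init__(self):
        self.cache = {}
        self.session = {}

class TestControllerBuilder:
    def __init__(self):
        self.http_context = FakeHttpContext()

    def create_controller(self, controller_cls):
        controller = controller_cls()
        self.initialize_controller(controller)
        return controller

    def initialize_controller(self, controller):
        # The key step: the controller talks to our fake context,
        # not to a real web server.
        controller.http_context = self.http_context

class HomeController:
    http_context = None
    def index(self):
        # Action code can read whatever the test planted in the cache.
        return ("Index", self.http_context.cache.get("test"))

builder = TestControllerBuilder()
builder.http_context.cache["test"] = "test1"
controller = builder.create_controller(HomeController)
view, cached = controller.index()
```

The design point is the same as in the C# test above: the test owns the context object, so anything the action reads from it can be arranged up front.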
I will look into additional adventures of using MvcContrib TestHelper. If there are other use-case scenarios that should be presented, it would be nice to know.
Testing a RedirectToAction inside an MVC controller action
To test a redirect inside an MVC controller action, use the
following:
Suppose we return the following:
..
return RedirectToAction("About");
..
The test can then verify that we are returning a RedirectToActionResult and that
the controller action's name is "About" as follows:
..
result.AssertActionRedirect().ToAction("About");
..
In case we return a RedirectToActionResult with an overload:
..
return RedirectToAction("About", "Home");
..
To test this we can write:
..
result.AssertActionRedirect().ToController("Home").ToAction("About");
..
Testing the routing in an MVC application
To test the routes in your MVC application, the following unit test serves as an example using MvcContrib TestHelper:
[TestMethod]
public void RouteTestHomeIndex()
{
var routeTable = RouteTable.Routes;
routeTable.Clear();
routeTable.MapRoute(null, "{controller}/{action}/{id}",
new { controller= "Home", action= "Index",
id = UrlParameter.Optional });
"~/Home/Index".ShouldMapTo<HomeController>(action => action.Index());
"~/Home/About".ShouldMapTo<HomeController>(action => action.About());
}
The first line of the unit test grabs a reference to the RouteTable.Routes property, defined in the System.Web.Routing namespace. The route table is then cleared, and
the MapRoute method adds a route definition to test. We set up the default route using the MapRoute method. Then we test the two routes "~/Home/Index" and "~/Home/About". These two values are strings, and MvcContrib TestHelper has a string extension method, ShouldMapTo, which checks that the route specified in the string maps to the controller given as the generic argument, with the controller's action method specified in the lambda parameter. Actually, controller => controller.Index() is probably a better lambda argument name here.
This shows that testing routes is simple using MvcContrib TestHelper. Just use the string extension methods to test which controller and which method a certain route should map to.
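The mechanics being tested, a path filling in route values with fall-back defaults, can be sketched outside .NET as well. This Python toy (not the ASP.NET routing engine) mimics how "{controller}/{action}/{id}" resolves a path against the default route:

```python
def match_default_route(path, defaults):
    """Resolve a "~/controller/action/id" path against a default route.

    Missing trailing segments fall back to the supplied defaults, the way
    "{controller}/{action}/{id}" behaves with controller=Home, action=Index.
    """
    segments = [s for s in path.lstrip("~/").split("/") if s]
    values = dict(defaults)
    for name, segment in zip(("controller", "action", "id"), segments):
        values[name] = segment
    return values

defaults = {"controller": "Home", "action": "Index"}
r1 = match_default_route("~/Home/Index", defaults)
r2 = match_default_route("~/Home/About", defaults)
r3 = match_default_route("~/", defaults)  # everything falls back to defaults
```

This is exactly the behavior the ShouldMapTo assertions above are verifying: a URL turns into a (controller, action) pair, with defaults supplying anything the path omits.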
Excellent explanation of using the TestHelper from MvcContrib here:
Looks like Google Blogger is not treating generics written in HTML nicely again. WithViewData is an extension method that expects a generic argument of the strongly typed class which the view is typed with, i.e. the model. So you must write, for example, WithViewData<Bird>; sorry for the inconvenience, but Google Blogger misinterprets generic arguments. Use WithViewData to check the type of the model which is set with the model keyword at the top of MVC 3 Razor views, if you use this view engine.
The ShouldMapTo extension method expects a generic type argument just like the WithViewData extension method in the MvcContrib TestHelper namespace. Google Blogger can't interpret generic arguments correctly as it thinks they are HTML... Read: ShouldMapTo<HomeController>.
Seriously handy software after the hours of time I've wasted trying to mock http stuff myself!
My .cs file uses code like HttpContext.Current.Session["ValidateBy"]. I've used the solution provided by you. It works when we use Session["ValidateBy"], but not with the code mentioned at the beginning of my comment.
Can you suggest a solution to this?
- Harshul
From Documentation
Introduction
In this small talk, I am going to talk about how you can execute a load or performance test on ZK applications using JMeter. It shows not only the load on the first request to a ZUL page but also the load on subsequent Ajax requests in that page. Testing an Ajax request is always difficult because of the uuid that is dynamically created. In this article, I will show you the steps of how;
- To write a simple IdGenerator
- Use JMeter to record a request
- Make JMeter testcase replayable
Implementation Steps
To Write a simple IdGenerator
It is important to write an IdGenerator in order to create a predictable desktop id and component id when testing, so that we can record and replay the testcase.
There are two types of ids that need to be discussed:
1. desktop id - the concept for creating a desktop id is that it must be unique in a session and must be passed to subsequent Ajax requests. The idea for providing a predictable desktop id is really simple - 'the server doesn't decide it, the testcase does'
2. component id - the concept to create a component id is that it is not only unique in a scope, but it is also fixed if the component is a 'replay-able target'. For example, when a button is clicked in the testcase, then, the id of the button must be fixed to be replayed. This really depends on your loading testcase target (whether the page creates dynamic number of components or not).
Here, the following code implements a simple sequence in the desktop, since my target page creates a fixed sequence/number of components
public class SequenceIdGenerator implements IdGenerator {
    public String nextComponentUuid(Desktop desktop, Component comp) {
        Integer i = (Integer) desktop.getAttribute("Id_Num");
        i = (i == null) ? 1 : i + 1;
        desktop.setAttribute("Id_Num", i);
        return "t_" + i;
    }
    public String nextDesktopId(Desktop desktop) {
        String dtid = Executions.getCurrent().getParameter("tdtid");
        //System.out.println(" use client dtid "+dtid);
        return dtid == null ? null : dtid;
    }
    public String nextPageUuid(Page page) {
        return null;
    }
}
To use this generator, we have to add a configuration to zk.xml
<system-config> <id-generator-class>foo.jmtest.SequenceIdGenerator</id-generator-class> </system-config>
When an ID generator is introduced to a test environment, the next question is usually how to enable it or disable it depending on the environment. With ZK, it can be done by specifying a library property org.zkoss.zk.config.path
Use JMeter to record a request
Now, I will show you how to RECORD the testcase. Please download and install JMeter (version 2.5.1 is used for this article), then do the following steps;
- Create a Thread Group in Test Plan, name it for example 'Test Search'
- Add User Defined Variables in Test Search
- Add HTTP Cookie Manager in Test Search
- Add HTTP Request Defaults in Test Search
- Edit HTTP Request Defaults, set the Server Name and Port Number (in this case, localhost and 8080) to the application that is already configured with SequenceIdGenerator
- Create a HTTP Proxy Server in WorkBench, and edit it as follows
- Set port to 9090 (or any other port in your control)
- Set Target to Controller to Test Plan > Test Search
- Add Include URL Patterns
- .*/zkau.*
- .*\.zul
- Add Exclude URL Patterns
- .*/zkau/web/.*
Now, you are ready to record the requests of the browser into a test case. Please switch the proxy of your browser to host:port (in this case, the proxy is localhost:9090), and browse the pages you want to test.
- Check that the uuid of zkau is created by SequenceIdGenerator (it will have prefix t_)
- Check that there is no rmDesktop command in the zkau records; if you have one, please remove it. (It removes the previous desktop from when we recorded the testcase; we don't need it anymore)
- Another idea for the rmDesktop record comes from this discussion: thread. You could move this record to the end of the records, so that after each loop of the test the desktop is removed; it is a better solution than setting max-desktop to 2.
- Make sure Server Name and Port Number are not recorded in the zkau samplers, since we already set them in HTTP Request Defaults.
Here are some screenshots
Make JMeter testcase Replayable
To make this testcase replayable, we have to assign the desktop id. Following are the steps;
- Add dtid = 0 to User Defined Variables (remember: don't add this before you record the testcase, and remove it before you re-record the testcase)
- Add a parameter tdtid = ${__intSum(${dtid},1,dtid)} to the zul request (in this case, /jmtest/search.zul); tdtid is the parameter that will be checked in SequenceIdGenerator
- For every zkau request, set the parameter value of dtid to ${dtid}; now the zkau request will send the dtid that we are expecting.
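If the ${__intSum(${dtid},1,dtid)} expression looks cryptic: __intSum adds its numeric arguments and, when the last argument is a variable name rather than a number, also stores the result back into that variable, so each loop iteration gets a fresh desktop id. A rough Python model of that behavior (my reading of the JMeter function, not JMeter code):

```python
def int_sum(jmeter_vars, *args):
    """Model JMeter's __intSum: sum the numeric args; an optional last
    argument names a JMeter variable that receives the result."""
    *numbers, last = args
    if last.lstrip("-").isdigit():        # no variable name given
        numbers, store_as = list(numbers) + [last], None
    else:
        store_as = last
    total = sum(int(n) for n in numbers)
    if store_as is not None:
        jmeter_vars[store_as] = str(total)
    return total

jmeter_vars = {"dtid": "0"}
first = int_sum(jmeter_vars, jmeter_vars["dtid"], "1", "dtid")   # 0 + 1 -> 1
second = int_sum(jmeter_vars, jmeter_vars["dtid"], "1", "dtid")  # 1 + 1 -> 2
```

So on every loop the test plan sends a new tdtid, and SequenceIdGenerator above turns that value into the desktop id.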
By doing these few steps, we can replay the testcase, but before that, you have to add some listeners and a thread-tuning strategy. To simplify this case, I will just add View Results in Table to Test Search and then run the test case. To check whether the testcase is running correctly or not, you can add some logs in your application to see whether or not the action is called.
- Example test result
Other loading test hints
For the load test to run smoothly, there are some notes I think you should also know.
- Add a Uniform Random Timer (set delay to 500ms - 2000ms) to the test case; it makes the load test look like normal access to your application, not a DOS (Denial of Service) attack.
- The server will always crash if you allow a lot of threads to ATTACK it without tuning the server. You have to design a test plan that takes your application from light load to stress load, and analyze the limits.
- Tune the native server or database configuration if the result is not as expected; there are many possible reasons why an application does not reach the expected performance.
- Depending on the test plan, you had better set the max-desktop of ZK to two in order to reduce the memory consumption on the server for each test thread
Download
Download the JMeter project file here and the target application here
Content types
ARCHIVED
This chapter has not been updated for the current version of Orchard, and has been ARCHIVED.
At the core of the idea of CMS is the ability to extend the system to new content types instead of restricting it to pre-defined ones such as blog posts.
Orchard has a rich composition model that enables new content types to be created from existing parts and extended at runtime. It also enables a development model that is code-centric rather than database-centric: for all practical purposes, database persistence will just happen without the developer having to do anything beyond defining the shape of his objects in code.
Basic Concepts
Content Item
A content item is a piece of content that you'd want to manipulate as a single entity. For example, a blog post is composed of many parts: there is a title, a body, an author, comments, tags, etc. But it is clear that the entity that you want to manipulate is the blog post as a whole. Other examples of content items are images, videos, wiki pages, products, forum threads, etc.
A content item has an integer id that is unique across the site.
The definition of a content item is voluntarily relatively vague, because we don't want to restrict what is considered a content item, so that developers can apply the same concepts to a wide array of objects.
Content Type
A content type can be seen as a category of contents; it represents what a content item is. For example, it's easy to understand that a content item is a blog post, or a photo, or a wiki page. The core of the notion here is in the words "is a": if you can say that a given content item is a "something", that "something" probably is the content type of the content item.
In developer speech, this concept is analogous to the concept of class, although the Orchard type system is established at run-time rather than statically determined from definitions in source code.
A content type in Orchard is just a name (in other words, it's identified by a simple string).
Content Part
Content items in Orchard are composed from existing nuggets of data and logic called content parts. A content part packages the set of fields and methods that are necessary to implement a particular reusable aspect of a content item.
All the parts that form a given content item share the same integer id.
For example, the "Comments" part has four properties: lists of published and pending comments, a flag that determines whether comments are shown, and a flag that determines whether the comment thread is closed. That part can be added to a content type to make it commentable. In this system, the comment aspect doesn't need to know what it's commenting on, and the commented item doesn't need to know that it has comments.
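The composition model described above, several parts welded onto one item, all sharing the item's id, can be sketched in a few lines. This is a language-neutral illustration in Python, not Orchard's actual API (Orchard's equivalent of as_part below is the As<T>() extension method):

```python
class CommentsPart:
    def __init__(self):
        self.published = []        # published comments
        self.pending = []          # comments awaiting moderation
        self.show_comments = True  # are comments displayed?
        self.closed = False        # is the thread closed?

class ContentItem:
    def __init__(self, item_id, content_type):
        self.id = item_id          # unique across the site
        self.content_type = content_type
        self._parts = {}

    def weld(self, part):
        part.id = self.id          # every part shares the item's id
        self._parts[type(part).__name__] = part

    def as_part(self, part_cls):
        # Returns the part if this item has it, else None.
        return self._parts.get(part_cls.__name__)

post = ContentItem(42, "BlogPost")
post.weld(CommentsPart())          # "BlogPost" items are now commentable
comments = post.as_part(CommentsPart)
```

The point of the indirection is the one the text makes: the comments aspect never needs to know it is attached to a blog post, and the post never needs to know it carries comments.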
Record
A record is a concept that only needs to be known to content-type developers and that doesn't need to surface to the end user of the application.
A record is the simple code representation of the data for a content part, as an almost Plain CLR Object that is used for persistence of the part in and out of the database. "Almost Plain CLR" because it usually derives from ContentPartRecord, which gives them an Id and a reference to the content item the part is participating in.
Content Driver
A content driver is similar to an MVC controller, except that it works at the level of the content part. It is responsible for preparing the views of the part, on the front-end, but also as an editor in the content item editor. It also handles post modifications in those editors in order to persist changes. Finally, it handles the importing and exporting of the part. In many ways, the driver is the brain of your part.
Content Handler
A content handler is the object that manages content items and parts. It is a set of services that will chime in whenever the application needs to create parts or items, among other tasks. Content handlers are built around the idea of a filter, where the application calls into all filters when it needs something, and each filter can decide whether to participate or not, depending on the parameters communicated by the application.
For example, a content handler often manages the persistence of a part into a repository.
Most handlers are implemented as a simple assemblage of pre-defined filters.
Shapes and Templates
Drivers create dynamic objects that represent the data to be rendered to the browser. Those objects are called shapes and correspond to a view model in MVC, except that they represent a single part instead of the whole model for the complete view.
When the time comes to render those shapes and transform them into HTML, Orchard looks in the file system for templates, and in special code constructs called shape methods, for the most relevant way to handle that specific shape. Templates are typically what you will use. They are .cshtml files found in the Views folder of a theme or module.
Building a new content type
New content types, and even new content parts, can be built from the admin UI. For more details about this scenario, please read creating custom content types.
Of course, content parts and types can also be built from code. Read the Getting Started with Modules course or the writing a content part guide for examples.
Composing types from parts
On its own, a part doesn't do much. To make it useful, we need compose multiple parts into a content type. Here are a few examples of the most frequently used parts:
- CommonPart gives an owner, created and last modified dates as well as an optional container (useful for hierarchies of content items such as a book containing chapters that contain pages that contain paragraphs for example).
- TitlePart gives the item a title
- AutoroutePart gives the item a path, making it possible to navigate to the item as a page in the front-end.
- BodyPart adds a body field and a format for that body.
Settings
Content parts can have settings that define the behavior of the part for all items of a certain type. For example, a map part can have settings for the default location to map, or for whether the map should be interactive or not. These part settings can be modified from the admin UI by going into Content/Content Types, choosing the content type to configure, and then expanding the section for the part.
Admin Menu
Modules can plug into the admin system. This can be accomplished by using the INavigationProvider interface in the Orchard.UI.Navigation namespace.
Here is an example of an admin menu item:
public class AdminMenu : INavigationProvider {
    public string MenuName { get { return "admin"; } }

    public void GetNavigation(NavigationBuilder builder) {
        builder.Add("Sandbox", "1", menu => menu
            .Add("Manage Something", "1.0", item => item
                .Action("List", "Admin", new { area = "Orchard.Sandbox" })
                .Permission(StandardPermissions.AccessAdminPanel)));
    }
}
Note that we're using Orchard.Security.StandardPermissions.AccessAdminPanel here. You can just as easily define your own permissions using the IPermissionProvider interface.
Creating items of a custom type
Once you have created your own content type, you can create items of this type from the admin UI by clicking New/YourContentType if the type has been marked "creatable".
Items can also be created from code:
var custom = _contentManager.Create<CustomPart>("CustomType", part => { part.Record.SomeProperty = "My property value"; });
Querying the catalog
To get a specific content item of which you know the id, you can call:
var page = _contentManager.Get<CustomPart>(id);
It is also possible to create more general queries against the content manager, that can return lists of items:
var items = _contentManager.Query<TitlePart, TitlePartRecord>() .Where(t => t.Title.Contains("foo")) .OrderBy(r => r.Title) .Slice(10, 5);
The code above will get five items, starting at item 10, from the list of items that have "foo" in their titles, when ordered by title.
Accessing the parts of the item
To access the different parts of a content item, you can call the As method on any of the parts:
var body = mypage.As<BodyPart>().Text;
Revisions
06-10-2012: rewrite for current version of Orchard (1.4)
The QAbstractSpinBox class provides a spinbox and a line edit to display values. More...
#include <QAbstractSpinBox>
Inherits QWidget.
This enum type describes the symbols that can be displayed on the buttons in a spin box.
See also QAbstractSpinBox::buttonSymbols.
The StepEnabled type is a typedef for QFlags<StepEnabledFlag>. It stores an OR combination of StepEnabledFlag values.
See also ButtonSymbols.
This property holds whether the spin box draws itself with a frame.
If enabled (the default) the spin box draws itself inside a frame, otherwise the spin box draws itself without any frame.

The specialValueText property lets the spin box show a given text instead of a numeric value whenever the current value is equal to minimum(); this is typically used to give that choice a special (default) meaning:
QSpinBox marginBox(-1, 20, 1, parent); marginBox.setSuffix(" mm"); marginBox.setSpecialValueText("Auto");
The user will then be able to choose a margin width from 0-20 millimeters or select "Auto" to leave it to the application to choose. Your code must then interpret the spin box value of -1 as the user requesting automatic margin width.

If wrapping is true, stepping up from the maximum value takes you to the minimum value, and vice versa:

QSpinBox *sb = new QSpinBox(this);
sb->setRange(0, 100);
sb->setWrapping(true);
sb->setValue(100);
sb->stepBy(1); // value is 0

By default, wrapping is turned off.
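The wrapping behavior is plain modular arithmetic over the range. A small Python sketch of the stepBy semantics just described, for integer spin boxes (an illustration of the documented behavior, not Qt's implementation):

```python
def step_by(value, steps, minimum, maximum, wrapping):
    """Step an integer spin box value, wrapping or clamping at the ends."""
    if wrapping:
        span = maximum - minimum + 1
        return minimum + (value - minimum + steps) % span
    # Without wrapping, stepping simply pins at the range boundaries.
    return max(minimum, min(maximum, value + steps))

wrapped = step_by(100, 1, 0, 100, wrapping=True)    # 100 + 1 wraps to 0
clamped = step_by(100, 1, 0, 100, wrapping=False)   # stays pinned at 100
```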
See also QSpinBox::minimum() and QSpinBox::maximum().
Constructs an abstract spinbox with the given parent with default wrapping, and alignment properties.
Called when the QAbstractSpinBox is destroyed.
Clears the lineedit of all text but prefix and suffix.
This signal is emitted when editing is finished. This happens when the spinbox loses focus and when enter is pressed.
This function interprets the text of the spin box. If the value has changed since the last interpretation it will emit signals.

See also lineEdit().
This virtual function is called by the QAbstractSpinBox to determine whether input is valid. The pos parameter indicates the position in the string. Reimplemented in the various subclasses.
Opened 6 years ago
Closed 6 years ago
#15353 closed (worksforme)
problem with django.middleware.gzip.GZipMiddleware
Description (last modified by )
In django.middleware.gzip.GZipMiddleware have a bug:
def process_response(self, request, response):
    # It's not worth compressing non-OK or really short responses.
    if response.status_code != 200 or len(response.content) < 200:
        return response
there can be a response without response.content!
when I use the following (in urls.py):
(r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
serving binary files causes a problem: response.content does not exist.
To solve the bug I added a 'try' to the code:

try:
    if response.status_code != 200 or len(response.content) < 200:
        return response
except:
    return response
In my view the gzip middleware shouldn't compress binary files, because the browser is incompatible.
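For completeness: if one really wanted to guard against responses lacking content (the maintainer's reply below notes that a well-formed HttpResponse always has it), a narrower check than a bare try/except would avoid swallowing unrelated errors. A sketch with a stand-in response class, not Django's code:

```python
class FakeResponse:
    """Stand-in for HttpResponse; 'content' may be absent to model the report."""
    def __init__(self, status_code=200, content=None):
        self.status_code = status_code
        if content is not None:
            self.content = content

def should_skip_compression(response):
    # Skip non-OK, content-less, or really short responses.
    content = getattr(response, "content", None)
    return response.status_code != 200 or content is None or len(content) < 200

ok = FakeResponse(200, b"x" * 500)
short = FakeResponse(200, b"tiny")
no_content = FakeResponse(200)   # models the reported static-serve case
```

Unlike a blanket except clause, this only tolerates the one condition the report describes, and still lets genuine errors propagate.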
Change History (1)
comment:1 Changed 6 years ago by
Please use preview when submitting code samples.
As for the bug itself: response *always* has a 'content' attribute. That's part of the basic contract of HttpResponse. The static serve view works fine for me with binary content -- because it doesn't just dump a file, it constructs a HttpResponse to contain the file content. I can't reproduce the problem as described, and what you're describing doesn't make a whole lot of sense.
I am new to Hadoop and programming, and I am a little confused about Avro schema evolution. I will explain what I understand about Avro so far.
Avro is a serialization tool that stores binary data with its json schema at the top. The schema looks like this.
{
"namespace":"com.trese.db.model",
"type":"record",
"doc":"This Schema describes about Product",
"name":"Product",
"fields":[
{"name":"product_id","type": "long"},
{"name":"product_name","type": "string","doc":"This is the name of the product"},
{"name":"cost","type": "float", "aliases":["price"]},
{"name":"discount","type": "float", "default":5}
]
}
Now my question is, why do we need evolution? I have read that we can use default in the schema for new fields; but if we add a new schema to the file, the earlier schema will be overwritten. We cannot have two schemas for a single file.
Another question is, what are reader and writer schemas, and how do they help?
If you have one avro file and you want to change its schema, you can rewrite that file with a new schema inside. But what if you have terabytes of avro files and you want to change their schema? Will you rewrite all of the data, every time the schema changes?
Schema evolution allows you to update the schema used to write new data, while maintaining backwards compatibility with the schema(s) of your old data. Then you can read it all together, as if all of the data has one schema. Of course there are precise rules governing the changes allowed, to maintain compatibility. Those rules are listed under Schema Resolution.
There are other use cases for reader and writer schemas, beyond evolution. You can use a reader as a filter. Imagine data with hundreds of fields, of which you are only interested in a handful. You can create a schema for that handful of fields, to read only the data you need. You can go the other way and create a reader schema which adds default data, or use a schema to join the schemas of two different datasets.
Or you can just use one schema, which never changes, for both reading and writing. That's the simplest case.
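To make the evolution idea concrete, here is the core of schema resolution reduced to a toy: a record written with an old writer schema is read with a newer reader schema, and fields missing from the data are filled from the reader's defaults. This is a simplified Python sketch of the rule, not the Avro library itself:

```python
def resolve_record(datum, reader_fields):
    """Read a decoded record against a reader schema's field list.

    Fields present in the datum are kept; fields the writer never wrote
    are filled from the reader schema's defaults, as in Avro resolution.
    """
    resolved = {}
    for field in reader_fields:
        name = field["name"]
        if name in datum:
            resolved[name] = datum[name]
        elif "default" in field:
            resolved[name] = field["default"]
        else:
            raise ValueError("field %r has no value and no default" % name)
    return resolved

# Written long ago, before the "discount" field existed:
old_datum = {"product_id": 1, "product_name": "pen", "cost": 9.5}

# Reader schema: the Product schema from the question, discount added later.
reader_fields = [
    {"name": "product_id", "type": "long"},
    {"name": "product_name", "type": "string"},
    {"name": "cost", "type": "float", "aliases": ["price"]},
    {"name": "discount", "type": "float", "default": 5},
]

new_view = resolve_record(old_datum, reader_fields)
```

This is why the terabytes of old files never need rewriting: old records come out looking as if they had been written with the new schema, with discount filled in as 5.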
Object Comparison Using XmlNameTable with XmlReader
The XmlNameTable is an abstract base class with NameTable as an implementation. The NameTable contains atomized versions of element and attribute names, along with namespace URIs and prefixes. If the application is doing a lot of comparison on names, the application can create a NameTable. The user can get the NameTable that the reader is using from the NameTable property of the reader. For a description of atomization, see XmlNameTable Class.

[C#]
object cust = reader.NameTable.Add("Customer");
while (reader.Read())
{
    // The "if" uses efficient pointer comparison.
    if (cust == reader.Name)
    {
        ...
    }
}
For more information on the methods, see XmlNameTable Members.
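Atomization is the same trick as string interning. Python's sys.intern shows why it pays off: equal strings become the same object, so comparisons degrade to a pointer check (this is an analogy to NameTable, not the .NET mechanism itself):

```python
import sys

# Two equal strings built at runtime are normally distinct objects...
a = "".join(["Cust", "omer"])
b = "".join(["Cust", "omer"])

# ...but once atomized (interned), equality implies identity,
# so an "is" check behaves like the pointer comparison above.
ia = sys.intern(a)
ib = sys.intern(b)
```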
See Also
Reading XML with the XmlReader | Current Node Position in XmlReader | Property Settings | https://msdn.microsoft.com/en-us/library/hbfeb09f(v=vs.71).aspx | CC-MAIN-2015-14 | refinedweb | 119 | 52.46 |
Houston is a simple, lightweight logging library for iOS. It is meant to allow easy logging of application data to multiple endpoints (console, stdout, http, etc).
Inspired by corey-rb
Features
- [x] Single Setup
- [x] Log Strings & Objects
- [x] Multiple Output Destinations
- [x] Formatting Customization
- [x] iOS, watchOS, tvOS, macOS compatibility
- [x] Log to File
- [ ] Log to HTTP endpoint
- [ ] Complete Documentation
Requirements
- Xcode 8.3+
- iOS 8.0+
- watchOS 2.0+
- macOS 10.10+
- Swift 4.0+
Installation
To integrate Houston into your project, add the following to your project’s Podfile
pod 'Houston'
Carthage
Swift Package Manager
Basic Usage (Quick Start)
In each source file,
import Houston
In your AppDelegate (or other global file), configure log destinations
let consoleDestination = ConsoleDestination() Logger.add(destination: consoleDestination)
Basic Logging
You can log just about anything.
You can log simple strings:
Logger.verbose("View Loaded") Logger.warning("Yikes something is not right") Logger.error("Fatal Error")
Or you can log objects:
Logger.info(Date()) Logger.debug(["Yellow", "Blue", 3])
Output
Contribute
Want to learn Swift and help contribute? Read Here
License
Houston is released under the MIT license. See LICENSE for details.
Latest podspec
{
  "name": "Houston",
  "version": "0.2.1",
  "summary": "A lightweight logger in Swift for iOS",
  "description": "Houston is a lightweight logger in Swift.",
  "homepage": "",
  "license": { "type": "MIT", "file": "LICENSE" },
  "authors": { "RudyB": "[email protected]" },
  "source": { "git": "", "tag": "0.2.1" },
  "social_media_url": "",
  "platforms": { "ios": "8.0", "watchos": "2.0", "tvos": "9.0", "osx": "10.10" },
  "source_files": "Source/*.swift",
  "pushed_with_swift_version": "4.0"
}
I saw many articles on CodeProject on using DLLs the easiest way, but none of them seemed very easy to me. All of them had some weird function that did all the GetProcAddress and LoadLibrary stuff. I didn't want that, I just wanted to make a DLL, link to it, include a header, and start calling functions from anywhere within my project. Finally, I found out how to do this.
GetProcAddress
LoadLibrary
First, you just open Visual Studio and click File... New... and select Win32 Dynamic-Link Library. Then, when the next screen comes up, select A Simple DLL project and click Finish.
Now you have your base DLL project setup. Now, in order for us to be able to use our DLL functions in other projects, we must export them. So, first we have to add a .def file. Click File... New... and then select Text File and give it the name EasyDLL.def. Now, we are only going to add one function to our DLL for simplicity. So double click the EasyDLL.def file in your project and add the following text:
LIBRARY "EASYDLL.dll"
EXPORTS
EASYDLL_GetVersion
Next, you will need to double click the EasyDLL.cpp file and remove the following code:
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
return TRUE;
}
Now, we should have a blank EasyDLL.cpp file except for the comment and the #include "stdafx.h" line. So now, we add a new header file by clicking File... New... and selecting C/C++ Header File, and give it the name EasyDLL.h and click OK. Now, double click EasyDLL.h file and add the following code:
#include "stdafx.h"
#ifdef __cplusplus
extern "C" {
#endif
#define WINEXPORT WINAPI
int WINEXPORT EASYDLL_GetVersion();
#ifdef __cplusplus
}
#endif
Now, we need to double click EasyDLL.cpp and add the following code:
#include "EasyDLL.h"
int WINEXPORT EASYDLL_GetVersion()
{
return 1;
}
Now your DLL is ready to go! All you have to do is build your project.
This is where the "Super Easy" part comes in. All you have to do to use your DLL now is link to your .lib file by going to Project.... Settings... and the Link tab. Add in EasyDLL.lib into the Object/Library modules text box. Then, just include EasyDLL.h wherever you want to use the functions from the DLL, and then you can just start calling functions.
#include <iostream.h>
#include <windows.h>
#include "EasyDLL.h"
int main()
{
int nVersion = EASYDLL_GetVersion();
return 0;
}
We need to include windows.h because of the WINAPI macro in the EasyDLL.h file.
I hope this helped some of you out. I know it makes it much easier for me to create a re-usable library of functions without having to worry about all those LoadLibrary and GetProcAddress calls. Not to mention I can just include the header file in the StdAfx.h file of my project and use the DLL functions anywhere in my project.
C:Custom Resource Files
Do you demand more out of life? Are you tired of having your BMPs and WAVs flapping naked in the wind, for all to see? You, my friend, need to gird your precious assets in a custom resource file!
Disgusting imagery aside, custom resource files are an essential part of any professional game. When was the last time you purchased a game and found all of the sprites, textures, sound, or music files plainly visible within the game's directory tree? Never! Or at least, hardly ever!
So, what is a custom resource file? It is a simple repository, containing the various media files needed by your game. Say you have 100 bitmaps representing your game's tiles and sprites, and 50 wave files representing your sound effects and music; all of these files can be lumped into a single resource file, hiding them from the prying eyes of users.
File Format
The format you use for your custom resource file is up to you; encryption and compression algorithms can easily be incorporated. For the purposes of this tutorial however, I'll keep things simple. Here's a byte-by-byte outline of my simple resource file format:
The Header
The header contains information describing the contents of the resource, and indicating where the individual files stored within the resource can be located.
- First 4 bytes
- An int value, indicating how many files are stored within the resource.
- Next 4n bytes
- Where n is the number of files stored within the resource. Each 4 byte segment houses an int which points to the storage location of a file within the body of the resource. For example, a value of 1234 would indicate that a file is stored beginning at the resource's 1234th byte.
The Body
The body contains filename strings for each of the files stored within the resource, and the actual file data. Each body entry is pointed to by a header entry, as mentioned above. What follows is a description of a single body entry.
- First 4 bytes
- An int value, indicating how many bytes of data the stored file contains.
- Next 4 bytes
- An int value, indicating how many characters comprise the filename string.
- Next n bytes
- Each byte contains a single filename character, where n is the number of characters in the filename string.
- Next n bytes
- The stored file's data, where n is the file size.
Example Resource File
Examples tend to make things clearer, so here we go. Numbers on the left indicate location within the file (each segment is one byte), while data on the right indicates the values stored at the given location.
BYTELOC      DATA       EXPLANATION
*******      ****       ***********
0-3          3          (Integer indicating that 3 files are stored in this resource)
4-7          16         (Integer indicating that the first file is stored from the 16th byte onward)
8-11         41         (Integer indicating that the second file is stored from the 41st byte onward)
12-15        10058      (Integer indicating that the third file is stored from the 10058th byte onward)
16-19        9          (Integer indicating that the first stored file contains 9 bytes of data)
20-23        8          (Integer indicating that the first stored file's name is 8 characters in length)
24-31        TEST.TXT   (8 bytes, each encoding one character of the first stored file's filename)
32-40        Testing12  (9 bytes, containing the first stored file's data, which happens to be some text)
41-44        10000      (Integer indicating that the second stored file contains 10000 bytes of data)
45-48        9          (Integer indicating that the second stored file's name is 9 characters in length)
49-57        TEST2.BMP  (9 bytes, each encoding one character of the second stored file's filename)
58-10057     ...        (10000 bytes, representing the data stored within TEST2.BMP. Data not shown!)
10058-10061  20000      (Integer indicating that the third stored file contains 20000 bytes of data)
10062-10065  9          (Integer indicating that the third stored file's name is 9 characters in length)
10066-10074  TEST3.WAV  (9 bytes, each encoding one character of the third stored file's filename)
10075-30074  ...        (20000 bytes, representing the data stored within TEST3.WAV. Data not shown!)
If we had a copy of the file described above it would be 30075 bytes in size (bytes 0 through 30074), and it would contain all of the data represented by the files TEST.TXT, TEST2.BMP and TEST3.WAV. Of course, this file format allows for arbitrarily large files; all we need now is a handy-dandy program that can be used to store files in this format for us!
Resource Creator Source
In order to create a tool capable of storing files in our simple custom format, we need a few utility functions. We'll start off slow.
int getfilesize(char *filename)
{
  struct stat file; //This structure will be used to query file status

  //Extract the file status info
  if(!stat(filename, &file)) {
    //Return the file size
    return file.st_size;
  }

  //ERROR! Couldn't get the filesize.
  printf("getfilesize: Couldn't get filesize of '%s'.", filename);
  exit(1);
}
The getfilesize function accepts a pointer to a filename string, and passes it to stat to populate a stat structure. If the stat call succeeds (returns zero), we return an int containing the file size, in bytes; otherwise we print an error and bail out. We'll need this function later on!
int countfiles(char *path)
{
  int count = 0;           //This integer will count up all the files we encounter

  DIR *dir;                //The directory stream
  struct dirent *entry;    //A directory entry
  struct stat file_status; //File status information

  //Open the directory stream
  if((dir = opendir(path)) == NULL) {
    perror("opendir failure");
    exit(1);
  }

  //Change into the directory so the entry names resolve for stat
  chdir(path);

  //Loop through all of the directory entries
  while((entry = readdir(dir)) != NULL) {
    //Skip the "." and ".." entries, or we'd recurse forever
    if(strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) {
      //Get the entry's status info; skip the entry on failure
      if(stat(entry->d_name, &file_status) == 0) {
        if(S_ISDIR(file_status.st_mode)) {
          //We've found a directory, call countfiles again (recursion)
          //and add the result to the count total
          count += countfiles(entry->d_name);
          chdir("..");
        } else {
          //We've found a file, increment the count
          count++;
        }
      }
    }
  }

  //Make sure we close the directory stream
  if (closedir(dir) == -1) {
    perror("closedir failure");
    exit(1);
  }

  //Return the file count
  return count;
}
Things get interesting now. The code above describes a handy little countfiles function, which will recurse through the subdirectories of a given path, and count all of the files it encounters along the way. To do this, a DIR directory stream structure is initialized with a given path value. This directory stream can be exploited repeatedly by the readdir function to obtain pointers to dirent structures, which contain information on a given file within the directory. As we loop, the readdir function will fill the dirent structure with data describing a different file within the directory until all files have been exhausted. When no files are left to describe, readdir will return NULL and the while loop will cease!
Now, if we look within the while loop, some cool stuff is going on. First, strcmp is being used to compare the name of a given file entry to the strings "." and ".."; this is necessary, as otherwise the "." and ".." values will be recognized as directories and recursed into, creating a nasty infinite loop!
If the entry->d_name value passes the test, it is then passed to the stat function, in order to fill out the stat structure, called file_status. If a non-zero value is returned, something must be wrong with the file, and it is simply skipped. On a zero (success) result, execution continues, and S_ISDIR is employed, allowing us to check if the file in question is a directory, or not. If it is a directory, the countfiles function is called recursively. If it is not a directory, then the count variable is simply incremented, and the loop moves on to the next file!
void findfiles(char *path, int fd)
{
  DIR *dir;                //The directory stream
  struct dirent *entry;    //A directory entry
  struct stat file_status; //File status information

  //Open the directory stream
  if((dir = opendir(path)) == NULL) {
    perror("opendir failure");
    exit(1);
  }

  //Change into the directory so the entry names resolve
  chdir(path);

  while((entry = readdir(dir)) != NULL) {
    //Skip the "." and ".." entries
    if(strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) {
      if(stat(entry->d_name, &file_status) == 0) {
        if(S_ISDIR(file_status.st_mode)) {
          //We've found a directory, call findfiles again (recursion),
          //passing the new directory's path
          findfiles(entry->d_name, fd);
          chdir("..");
        } else {
          //We've found a file, pack it into the resource file
          packfile(entry->d_name, fd);
        }
      }
    }
  }

  //Make sure we close the directory stream
  if (closedir(dir) == -1) {
    perror("closedir failure");
    exit(1);
  }

  return;
}
You may notice that the code above is quite similar to that contained within the countfiles function. I could have removed the code duplication through the use of function pointers, or various other means, but I believe code readability would have suffered; and this is meant to be a quick and dirty resource file creator. Nothing fancy! Besides, the upside is that most of this code is already familiar to us.
Basically, the findfiles routine loops recursively through the subdirectories of a given path (just like countfiles), but instead of counting the files, it determines their filename strings and passes them to the packfile function.
So, bring on the packfile function:
void packfile(char *filename, int fd)
{
  int totalsize = 0; //This integer will be used to track the total number of bytes written to file

  //Handy little output
  printf("PACKING: '%s' SIZE: %i\n", filename, getfilesize(filename));

  //In the 'header' area of the resource, write the location of the file about to be added
  lseek(fd, currentfile * sizeof(int), SEEK_SET);
  write(fd, &currentloc, sizeof(int));

  //Seek to the location where we'll be storing this new file info
  lseek(fd, currentloc, SEEK_SET);

  //Write the size of the file
  int filesize = getfilesize(filename);
  write(fd, &filesize, sizeof(filesize));
  totalsize += sizeof(int);

  //Write the LENGTH of the NAME of the file
  int filenamelen = strlen(filename);
  write(fd, &filenamelen, sizeof(int));
  totalsize += sizeof(int);

  //Write the name of the file
  write(fd, filename, strlen(filename));
  totalsize += strlen(filename);

  //Write the file contents
  int fd_read = open(filename, O_RDONLY);    //Open the file
  char *buffer = (char *) malloc(filesize);  //Create a buffer for its contents
  read(fd_read, buffer, filesize);           //Read the contents into the buffer
  write(fd, buffer, filesize);               //Write the buffer to the resource file
  close(fd_read);                            //Close the file
  free(buffer);                              //Free the buffer
  totalsize += filesize;                     //Add the file size to the total number of bytes written

  //Increment the currentloc and currentfile values
  currentfile++;
  currentloc += totalsize;
}
This function is really the heart of the program; it takes a file and stores it within the resource as a body entry (which we described above, in the file format section). packfile accepts a filename pointer and an integer file descriptor fd as arguments. It then goes on to store file size, filename, and file data within the resource file (which is referenced with the fd file descriptor). The variables currentfile and currentloc are globals, described in the next segment of code. Basically, they contain values which instruct the packfile function where to create and store this new body entry's data.
Putting these utility functions together is now fairly simple. We just need a Main function, and some includes:
#include "stdio.h" #include "dirent.h" #include "sys/stat.h" #include "unistd.h" #include "fcntl.h" #include "sys/param.h" //Function prototypes int getfilesize(char *filename); int countfiles(char *path); void packfile(char *filename, int fd); void findfiles(char *path, int fd); int currentfile = 1; //This integer indicates what file we're currently adding to the resource. int currentloc = 0; //This integer references the current write-location within the resource file int main(int argc, char *argv[]) { char pathname[MAXPATHLEN+1]; //This character array will hold the app's working directory path int filecount; //How many files are we adding to the resource? int fd; //The file descriptor for the new resource //Store the current path getcwd(pathname, sizeof(pathname)); //How many files are there? filecount = countfiles(argv[1]); printf("NUMBER OF FILES: %i\n", filecount); //Go back to the original path chdir(pathname); //How many arguments did the user pass? if (argc < 3) { //The user didn't specify a resource file name, go with the default fd = open("resource.dat", O_WRONLY | O_EXCL | O_CREAT | O_BINARY, S_IRUSR); } else { //Use the filename specified by the user fd = open(argv[2], O_WRONLY | O_EXCL | O_CREAT | O_BINARY, S_IRUSR); } //Did we get a valid file descriptor? if (fd < 0) { //Can't create the file for some reason (possibly because the file already exists) perror("Cannot create resource file"); exit(1); } //Write the total number of files as the first integer write(fd, &filecount, sizeof(int)); //Set the current conditions currentfile = 1; //Start off by storing the first file, obviously! currentloc = (sizeof(int) * filecount) + sizeof(int); //Leave space at the begining for the header info //Use the findfiles routine to pack in all the files findfiles(argv[1], fd); //Close the file close(fd); return 0; }
The code in function main is primarily concerned with creating the resource file (either giving it the name "resource.dat", or using a string passed as a command-line argument), counting up the files and storing this value in the header, and then calling the findfiles function which loops through all subdirectories and makes use of packfile to pack them into the resource. The user must specify a path command-line argument when executing the program, as this path value will be passed as the initial argument to the findfiles routine. All files within the given path will be found and packed into the resource file! A sample execution:
UNIX:    ./customresource resource myresource.dat
WINDOWS: customresource resource myresource.dat
Calling the program with the command-line arguments shown above would result in a resource file called myresource.dat being created, containing all files found within the directory called "resource" (including its subdirectories).
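Reading files back out of the resource at run time is just the reverse process. Here is a minimal sketch of a loader for the format described above; the function name, buffer limits, and error handling are my own invention, and it assumes the simple uncompressed layout written on a machine with the same integer size and byte order.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a malloc'd buffer holding the named file's data, or NULL if
   the file isn't found. On success, *out_size receives the file size. */
char *load_resource(const char *respath, const char *wanted, int *out_size)
{
    FILE *res = fopen(respath, "rb");
    if (res == NULL) return NULL;

    int filecount = 0;
    fread(&filecount, sizeof(int), 1, res); /* first int: number of files */

    for (int i = 0; i < filecount; i++) {
        /* read the i-th header pointer (pointers start at byte 4) */
        int loc = 0;
        fseek(res, sizeof(int) + i * sizeof(int), SEEK_SET);
        fread(&loc, sizeof(int), 1, res);

        /* jump to the body entry: size, name length, name, data */
        fseek(res, loc, SEEK_SET);
        int filesize = 0, namelen = 0;
        fread(&filesize, sizeof(int), 1, res);
        fread(&namelen, sizeof(int), 1, res);

        char name[256] = {0};
        if (namelen > 255) { fclose(res); return NULL; } /* sanity check */
        fread(name, 1, namelen, res);

        if (strcmp(name, wanted) == 0) {
            char *data = malloc(filesize);
            fread(data, 1, filesize, res);
            fclose(res);
            *out_size = filesize;
            return data;
        }
    }

    fclose(res);
    return NULL;
}
```

A game would call something like load_resource("myresource.dat", "TEST2.BMP", &size) and hand the returned buffer to its bitmap loader, freeing it when done.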
Source code
- To download the sample source code (and some media files to play with), click here.
- NOTE: The above source code will not work with MSVC++, as the dirent.h header file is not included with VC++! If you are using VC++, please download this source code instead (provided by Drew Benton).
Related tutorials
So, we have the power to create custom resource files on a whim... now what? Try one of these lovely tutorials: | http://content.gpwiki.org/index.php/C:Custom_Resource_Files | CC-MAIN-2014-49 | refinedweb | 2,250 | 58.82 |
lockf(), lockf64()
Lock or unlock a section of a file
Synopsis:
#include <unistd.h>

int lockf( int fildes, int function, off_t size );

int lockf64( int fildes, int function, off64_t size );
Since:
BlackBerry 10.0.0
Arguments:
- fildes
- The file descriptor for the file that you want to lock. Open the file with write-only permission ( O_WRONLY ) or with read/write permission ( O_RDWR ).
- function
- A control value that specifies the action to be taken. The permissible values (defined in <unistd.h>) are as follows:
- F_LOCK
- Lock a section for exclusive use if the section is available. A read-only lock requires the file to have been opened with O_RDONLY or O_RDWR; an exclusive lock requires the file to have been opened with O_WRONLY or O_RDWR. (For descriptions of the locks, see open()).
- F_TEST
- Test a specified section for locks obtained by other processes.
- F_TLOCK
- Test and lock a section for exclusive use if the section is available.
- F_ULOCK
- Remove locks from a specified section of the file.
- size
- The number of contiguous bytes that you want to lock or unlock. The section starts at the current file offset; if size is 0, the section extends from the current offset through the largest possible file offset (that is, to the present or any future end-of-file). An area need not be allocated to the file to be locked because locks may exist past the end-of-file.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
You can use the lockf() function to lock a section of a file, using advisory-mode locks. If other threads call lockf() to try to lock the locked file section, those calls either return an error value or block until the section becomes unlocked. The lockf64() function is a large-file version of lockf(); its size argument is of type off64_t (rather than off_t).
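A short sketch of typical usage follows. The filename, function name, and record size here are illustrative and error handling is abbreviated; note that the section to operate on is selected by seeking first, since lockf() works relative to the current file offset.

```c
#define _XOPEN_SOURCE 600   /* expose the XSI lockf() declaration */
#include <fcntl.h>
#include <unistd.h>

/* Lock the first 100 bytes of the file, update the start of the
   section, then release the lock. Returns 0 on success, -1 on error. */
int update_locked(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644); /* must be writable for lockf() */
    if (fd < 0) return -1;

    lseek(fd, 0, SEEK_SET);                 /* section starts at offset 0 */
    if (lockf(fd, F_LOCK, 100) == -1) {     /* blocks until available */
        close(fd);
        return -1;
    }

    write(fd, "updated", 7);                /* the protected update */

    lseek(fd, 0, SEEK_SET);                 /* seek back to the section... */
    if (lockf(fd, F_ULOCK, 100) == -1) {    /* ...and remove the lock */
        close(fd);
        return -1;
    }

    close(fd);
    return 0;
}
```

A second process running the same code would block at the F_LOCK call until the first reaches F_ULOCK (or closes the file, which also releases its locks).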
Errors:
- EACCES or EAGAIN
- The function argument is F_TLOCK or F_TEST and the section is already locked by another process.
- EAGAIN
- The function argument is F_LOCK or F_TLOCK and the file is mapped with mmap().
- EBADF
- The fildes argument isn't a valid open file descriptor; or function is F_LOCK or F_TLOCK and fildes isn't a valid file descriptor open for writing.
- EDEADLK
- The function argument is F_LOCK and a deadlock is detected.
- EINTR
- A signal was caught during execution of the function.
- EINVAL
- The function argument isn't one of F_LOCK , F_TLOCK , F_TEST or F_ULOCK ; or size plus the current file offset is less than 0.
- ENOMEM
- The system can't allocate sufficient memory to store lock resources.
- ENOSYS
- The filesystem doesn't support the operation.
- EOPNOTSUPP or EINVAL
- The implementation doesn't support the locking of files of the type indicated by fildes.
- EOVERFLOW
- The offset of the first, or if size isn't 0 then the last, byte in the requested section can't be represented correctly in an object of type off_t.
Classification:
Last modified: 2014-11-17
Question:
Euler discovered the remarkable quadratic formula n^2 + n + 41, which produces primes for the consecutive integer values n = 0 to 39. Consider quadratics of the form n^2 + an + b, where |a| < 1000 and |b| < 1000, and where |n| is the modulus/absolute value of n, e.g. |11| = 11 and |−4| = 4.
Find the product of the coefficients, a and b, for the quadratic expression that produces the maximum number of primes for consecutive values of n, starting with n = 0.
When approaching a problem like this, you should start by talking out loud. Keep brainstorming ideas so that you can get your brain going. If nothing appears in your head, write down the brute force approach.
Let's go through this step-by-step.
Perhaps one way of brainstorming is to notice the quadratic equation and think of the quadratic formula. Would the formula help in this case? In this case, we aren't looking for n, and the equation isn't set to 0 so the quadratic equation is irrelevant. However, it's still good to keep your brain rolling by thinking of things like this.
We'll definitely need a way to check if the value is actually prime, so let's make sure we have that helper method.
We've covered several ways to check if a value is prime earlier, so let's use this method.

public static boolean isPrime(int number) {
    if (number < 2) return false;
    for (int i = 2; i <= Math.sqrt(number); i++) {
        if (number % i == 0) return false;
    }
    return true;
}
Basically this uses the property that if our input value is composite (not prime), then there exists one factor between 2 and sqrt(input) and a corresponding factor between sqrt(input) and input.
There comes a time when you need to start jotting down the brute force method just to gauge your understanding and help brainstorm even more. If we write this out, we have values from -999 to 999 for a and the same for b. We calculate each value starting from n = 1 until we hit a non-prime.
We have to do this for all values so we have O(a*b*n) time, where the n here is the value that n can go to without making our equation be composite.
Let's try plugging in numbers to see if we can find any special methods. We start with n = 0.
We find that b itself must be prime in order for n to pass n = 0. Since b has to be prime, we can say that b must be a positive odd number (setting aside 2, the only even prime).
Let's try n = 1.
We learned earlier that b must be odd. If we plug in any odd numbers for b into the equation above, we must have an odd value for a to make the entire sum prime. This is because all primes except 2 are odd.
So now we know that both a and b must be odd for the run of primes to extend beyond n = 1.
Here's the full code in Java. We broke it down into three methods: one that finds the maximum quadratic prime values for a and b, which calls on another function that finds the maximum value of n, which calls another to check if the value is prime.
public class QuadraticPrimes {

    public static void main(String[] args) {
        System.out.println(maxQuadPrime());
    }

    public static int maxQuadPrime() {
        int maxN = 0;
        int maxA = 0;
        int maxB = 0;
        int currentMax = 0;
        for (int a = -999; a < 1000; a += 2) {
            for (int b = 3; b < 1000; b += 2) {
                currentMax = findMaxN(a, b);
                if (currentMax > maxN) {
                    maxN = currentMax;
                    maxA = a;
                    maxB = b;
                }
            }
        }
        return maxA * maxB;
    }

    public static int findMaxN(int a, int b) {
        int max = 0;
        int n = 2;
        while (isPrime(n*n + a*n + b)) {
            if (n > max) max = n++;
        }
        return max;
    }

    public static boolean isPrime(int number) {
        if (number < 2) return false;
        for (int i = 2; i <= Math.sqrt(number); i++) {
            if (number % i == 0) return false;
        }
        return true;
    }
}
Project Euler - Question 27
Came up with a better solution or have a question? Comment below!
Next Challenge: Rotate a 2d array in place | https://code.snipcademy.com/challenges/problems/quadratic-primes | CC-MAIN-2019-47 | refinedweb | 614 | 68.7 |
Tiger12506 wrote:
>> Based on your guidance, I figured it out. I need to use a return
>> statement, which I had not encountered before. Now I wrote my
>> definitions in this way:
>>
>> def collided():
>>     if player_x == robot_x+0.5 and player_y == robot_y+0.5:
>>         return True
>>
> This could be simplified more.
> Here's an example as a hint. These two functions are the same.
>
> def f():
>     if a == b and c == d:
>         return True
>
> def g():
>     return (a==b and c == d)
>
I got it. I will do it like this:

def collided():
    return (player_x == robot_x+0.5 and player_y == robot_y+0.5)

Thank you,
Tonu

--
Tonu Mikk
Educational Technology Consultant
Digital Media Center - dmc.umn.edu
tmikk at umn.edu
612 625-9221
How to call RegisterObjectParam of IBindCtx?
Question
I met a problem when developing a folder-browsing app with the shell. The sub folders of a network folder sometimes cannot be retrieved. According to the debug results, it fails to get the IShellFolder of the network folder.
And here is my code:
int hres = desktopShellFolders.BindToObject(idList, IntPtr.Zero, ref iIdIShellFolder, out iNewShell);
Here, idList is the PIDL of network folder.
I searched from msdn and luckily got some info which seemed useful to my problem.
Here is the link :
---------------------------------------------------------------------------------------------------------------
Bind Context String Keys
Used by method IBindCtx::RegisterObjectParam to specify a bind context.
STR_BIND_FORCE_FOLDER_SHORTCUT_RESOLVE
A folder shortcut is a folder item in the namespace that points to another folder item in the namespace using a link (shortcut) to hold the IDList of the target. The shortcut is resolved to track the target in case it is moved or renamed. For instance, the Windows XP My Network Places folder and the Windows Vista Computer folder can contain folder shortcuts created with the Add Network Location wizard. The normal IShellFolder::BindToObject operations for folder shortcuts do not resolve links to network folders to improve performance. The presence of this bind context enables the folder shortcut to resolve the link that points to its target and is useful where being able to track the target is worth the performance implications of resolving the target.
---------------------------------------------------------------------------------------------------------------
I think I should create an IBindCtx instance and set it to the second parameter of BintToObject instead of IntPtr.Zero.
But, I really can not get any info about how to call RegisterObjectParam and how to use STR_BIND_FORCE_FOLDER_SHORTCUT_RESOLVE.
Is there anyone who can help me? Thank you very much
Answers
- The IBindCtx interface is defined in the System.Runtime.InteropServices.ComTypes namespace. You can create an object that implements it using CreateBindCtx.
All replies
Thank you very much for your reply.
I think I did not describe what I really want to know. Actually I already know how to create an IBindCtx. But how do I set the parameter? I read the summary of RegisterObjectParam but still don't understand. Given my context, could you please show me what parameter should be set? Thank you very much.
In ShObjIdl.idl from the Windows SDK you can find that the STR_BIND_FORCE_FOLDER_SHORTCUT_RESOLVE constant is defined as "Force Folder Shortcut Resolve". You can also see in the comments that the object doesn't have to support any particular interface (i.e. you can pass in any object, it's the presence of the key that matters).
So I guess you'd call it like this
const string STR_BIND_FORCE_FOLDER_SHORTCUT_RESOLVE = "Force Folder Shortcut Resolve";
...
yourBindCtx.RegisterObjectParam(STR_BIND_FORCE_FOLDER_SHORTCUT_RESOLVE, new object());
You then have to change your declaration of IShellFolder.BindToObject so that the second parameter is of type IBindCtx instead of IntPtr.
Thank you very much for your reply. But it does not work.
I still can not get the iNewShell, the return result of BindToObject is always 1, which flags that the method failed. And the out value iNewShell is NULL.
Do you have any idea about it? Thank you very much. | https://social.msdn.microsoft.com/Forums/en-US/41f3c06f-d037-4697-8620-b637ed6272fd/how-to-call-registerobjectparam-of-ibindctx?forum=clr | CC-MAIN-2020-45 | refinedweb | 541 | 57.57 |
Is there a quick method to find the shortest repeated substring and how many times it occurs? If there is none, you only need to return the actual string (last case).
>>> repeated('CTCTCTCTCTCTCTCTCTCTCTCT')
('CT', 12)
>>> repeated('GATCGATCGATCGATC')
('GATC', 4)
>>> repeated('GATCGATCGATCGATCG')
('GATCGATCGATCGATCG', 1)
def repeated(sequentie):
    string = ''
    for i in sequentie:
        if i not in string:
            string += i
    items = sequentie.count(string)
    if items * len(string) == len(sequentie):
        return (string, items)
    else:
        return (sequentie, 1)
Your method unfortunately won't work, since it assumes that the repeating substring will have unique characters. This may not be the case:
abaabaabaabaabaaba
You were somewhat on the right track, though. The shortest way that I can think of is to just try and check over and over if some prefix indeed makes up the entire string:
def find_shorted_substring(s): for i in range(1, len(s) + 1): substring = s[:i] repeats = len(s) // len(substring) if substring * repeats == s: return (substring, repeats)
It's not very efficient, but it works. There are better ways of doing it. | https://codedump.io/share/pmOqZM7KCjSP/1/shortest-repeated-substring-python | CC-MAIN-2018-22 | refinedweb | 174 | 57 |
#include <signal.h>
void (*signal(int sig, void (*disp)(int)))(int);

void (*sigset(int sig, void (*disp)(int)))(int);
int sighold(int sig);
int sigrelse(int sig);
int sigignore(int sig);
int sigpause(int sig);
signal and sigset are used to modify signal dispositions. sig specifies the signal, which may be any signal except SIGKILL and SIGSTOP. disp specifies the signal's new disposition, which may be SIG_DFL, SIG_IGN, or the address of a signal handler. If sigset is used and disp is equal to SIG_HOLD, sig is added to the calling process's signal mask and the signal's disposition remains unchanged. However, if sigset is used and disp is not equal to SIG_HOLD, sig will be removed from the calling process's signal mask.
sighold adds sig to the calling process's signal mask.
sigrelse removes sig from the calling process's signal mask.
sigignore sets the disposition of sig to SIG_IGN.
sigpause removes sig from the calling process's signal mask and suspends the calling process until a signal is received.
On success, sigset returns SIG_HOLD if the signal had been blocked or the signal's previous disposition if it had not been blocked. On failure, sigset returns SIG_ERR and sets errno to identify the error.
All other functions return zero on success. On failure, they return -1 and set errno to identify the error.

If the disposition of SIGCHLD is set to SIG_IGN, the calling process's child processes will not become zombie processes when they terminate [see exit(S)]. If the calling process subsequently waits for its children, it blocks until all of its children terminate; it then returns a value of -1 with errno set to ECHILD [see wait(S), waitid(S)].
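As an illustration of sigset, sighold and sigrelse working together, here is a small sketch (the function and variable names are invented for the example): the handler is installed with sigset, the signal is held across a critical section, and any occurrence during that window is delivered when the signal is released.

```c
#define _XOPEN_SOURCE 600   /* expose the XSI sigset()/sighold() family */
#include <signal.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig)
{
    (void)sig;
    got_sigint = 1;             /* just record that the signal arrived */
}

/* Returns 1 if SIGINT was deferred across the critical section and
   delivered at sigrelse(), 0 otherwise. */
int critical_section_demo(void)
{
    /* sigset installs the handler; unlike historical signal semantics,
       the disposition is not reset to SIG_DFL when the signal fires */
    if (sigset(SIGINT, on_sigint) == SIG_ERR)
        return 0;

    sighold(SIGINT);            /* add SIGINT to the signal mask */

    raise(SIGINT);              /* arrives now, but stays pending... */
    int deferred = (got_sigint == 0);

    sigrelse(SIGINT);           /* ...and is delivered on release */

    return deferred && got_sigint == 1;
}
```

Because the pending signal is delivered as soon as sigrelse removes it from the mask, the critical section between sighold and sigrelse can never be interrupted by the held signal.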
See signal(M) for further details. | http://osr600doc.xinuos.com/en/man/html.S/signal.S.html | CC-MAIN-2020-45 | refinedweb | 237 | 73.27 |
The sentences in the transcripts I've been working with were structured like this:
<speaker>: <sentence>.
The first problem with this approach was that I didn't have any labelled data to work with so I wrote a little web application that made it easy for me to train chunks of sentences at a time:

I stored the trained words in a JSON file. Each entry looks like this:
import json

with open("data/import/trained_sentences.json", "r") as json_file:
    json_data = json.load(json_file)

>>> json_data[0]
{u'words': [{u'word': u'You', u'speaker': False}, {u'word': u'ca', u'speaker': False}, {u'word': u"n't", u'speaker': False}, {u'word': u'be', u'speaker': False}, {u'word': u'friends', u'speaker': False}, {u'word': u'with', u'speaker': False}, {u'word': u'Robin', u'speaker': False}, {u'word': u'.', u'speaker': False}]}

>>> json_data[1]
{u'words': [{u'word': u'Robin', u'speaker': True}, {u'word': u':', u'speaker': False}, {u'word': u'Well', u'speaker': False}, {u'word': u'...', u'speaker': False}, {u'word': u'it', u'speaker': False}, {u'word': u"'s", u'speaker': False}, {u'word': u'a', u'speaker': False}, {u'word': u'bit', u'speaker': False}, {u'word': u'early', u'speaker': False}, {u'word': u'...', u'speaker': False}, {u'word': u'but', u'speaker': False}, {u'word': u'...', u'speaker': False}, {u'word': u'of', u'speaker': False}, {u'word': u'course', u'speaker': False}, {u'word': u',', u'speaker': False}, {u'word': u'I', u'speaker': False}, {u'word': u'might', u'speaker': False}, {u'word': u'consider', u'speaker': False}, {u'word': u'...', u'speaker': False}, {u'word': u'I', u'speaker': False}, {u'word': u'moved', u'speaker': False}, {u'word': u'here', u'speaker': False}, {u'word': u',', u'speaker': False}, {u'word': u'let', u'speaker': False}, {u'word': u'me', u'speaker': False}, {u'word': u'think', u'speaker': False}, {u'word': u'.', u'speaker': False}]}

When a speaker is present, their name is followed by a ':', so 'next word' can be a feature. I also went with 'previous word' and the word itself for my first cut.
This is the function I wrote to convert a word in a sentence into a set of features:
def pos_features(sentence, i):
    features = {}
    features["word"] = sentence[i]
    if i == 0:
        features["prev-word"] = "<START>"
    else:
        features["prev-word"] = sentence[i-1]
    if i == len(sentence) - 1:
        features["next-word"] = "<END>"
    else:
        features["next-word"] = sentence[i+1]
    return features
Let's try a couple of examples:
import nltk

>>> pos_features(nltk.word_tokenize("Robin: Hi Ted, how are you?"), 0)
{'prev-word': '<START>', 'word': 'Robin', 'next-word': ':'}

>>> pos_features(nltk.word_tokenize("Robin: Hi Ted, how are you?"), 5)
{'prev-word': ',', 'word': 'how', 'next-word': 'are'}
Now let's run that function over our full set of labelled data:

tagged_sents = [[(word["word"], word["speaker"]) for word in sentence["words"]]
                for sentence in json_data]

featuresets = []
for tagged_sent in tagged_sents:
    untagged_sent = nltk.tag.untag(tagged_sent)
    for i, (word, tag) in enumerate(tagged_sent):
        featuresets.append( (pos_features(untagged_sent, i), tag) )
Here's a sample of the contents of featuresets:
>>> featuresets[:5]
[({'prev-word': '<START>', 'word': u'You', 'next-word': u'ca'}, False),
 ({'prev-word': u'You', 'word': u'ca', 'next-word': u"n't"}, False),
 ({'prev-word': u'ca', 'word': u"n't", 'next-word': u'be'}, False),
 ({'prev-word': u"n't", 'word': u'be', 'next-word': u'friends'}, False),
 ({'prev-word': u'be', 'word': u'friends', 'next-word': u'with'}, False)]
It's nearly time to train our model, but first we need to split out labelled data into training and test sets so we can see how well our model performs on data it hasn't seen before. sci-kit learn has a function that does this for us:
from sklearn.cross_validation import train_test_split

train_data, test_data = train_test_split(featuresets, test_size=0.20, train_size=0.80)

>>> len(train_data)
9480
>>> len(test_data)
2370
Now let's train our model. I decided to try out Naive Bayes and Decision tree models to see how they got on:
>>> classifier = nltk.NaiveBayesClassifier.train(train_data)
>>> print nltk.classify.accuracy(classifier, test_data)
0.977215189873

>>> classifier = nltk.DecisionTreeClassifier.train(train_data)
>>> print nltk.classify.accuracy(classifier, test_data)
0.997046413502
It looks like both are doing a good job here, with the decision tree doing slightly better. One thing to keep in mind is that most of the sentences we've trained on are in the form '<speaker>: <sentence>', so the classifiers may largely be picking up on that pattern.
If we explore the internals of the decision tree we'll see that it's massively overfitting which makes sense given our small training data set and the repetitiveness of the data:
>>> print(classifier.pseudocode(depth=2))
if next-word == u'!': return False
if next-word == u'$': return False
...
if next-word == u"'s": return False
if next-word == u"'ve": return False
if next-word == u'(':
  if word == u'!': return False
  ...
if next-word == u'*': return False
if next-word == u'*****': return False
if next-word == u',':
  if word == u"''": return False
  ...
if next-word == u'--': return False
if next-word == u'.': return False
if next-word == u'...':
  ...
  if word == u'who': return False
  if word == u'you': return False
if next-word == u'/i': return False
if next-word == u'1': return True
...
if next-word == u':':
  if prev-word == u"'s": return True
  if prev-word == u',': return False
  if prev-word == u'...': return False
  if prev-word == u'2030': return True
  if prev-word == '<START>': return True
  if prev-word == u'?': return False
  ...
if next-word == u'\u266a\u266a': return False
#include <Epetra_OskiUtils.h>
List of all members.
The Epetra_OskiUtils class is a helper class used to call OSKI functions that do not use matrix, vector, error or permutation objects. It provides an interface to access the initialization and finalize routines of OSKI.
All functions are public to allow access to methods needed by programs using OSKI. There are no data members of the class as all data is kept in the matrix, vector, multi-vector, error and permutation classes.
Finalizes the use of OSKI.
When done using OSKI this routine performs cleanup operations. While not strictly required, calling it is highly recommended when OSKI is no longer being used.
Initializes OSKI.
Calls the OSKI routine to initialize the use of OSKI. This routine is required before OSKI can be used. | http://trilinos.sandia.gov/packages/docs/r10.0/packages/epetra/doc/html/classEpetra__OskiUtils.html | crawl-003 | refinedweb | 132 | 58.28 |
This is your resource to discuss support topics with your peers, and learn from each other.
05-16-2012 04:28 PM
I have read some blog posts and forum entries about trying to implement a QNXApplicationEvent.SWIPE_DOWN listener, but somehow none work for me.
I have tried signals:
_swipedDown = new NativeSignal(QNXApplication.qnxApplication, QNXApplicationEvent.SWIPE_DOWN, QNXApplicationEvent);
_swipedDown.add(onSwipedDown);
And I have tried with regular event handlers:
QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, onSwipedDown);
But the handler never traces anything:
private function onSwipedDown(_:QNXApplicationEvent):void
{
trace("\n", this, "--- onSwipedDown ---");
}
The class holding this code extends a Sprite. Anything I should know about?
Solved! Go to Solution.
05-16-2012 04:36 PM
Hi,
What version of the SDK are you using? I've seen the same thing with bb10 SDK when trying to run on the PlayBook. I think it comes from the fact that some of the qnx packages are now implemented as ANEs...
05-16-2012 04:46 PM
I'm using these:
Yea, there are some ANEs linked in there.
05-16-2012 04:52 PM
My code was working well when I built it against Tablet SDK 2.0. But since I built it on Tablet SDK 3.0 (I called it the bb10 SDK in the previous post), the event is not working anymore. I don't know if this is because SDK 3 is only supported on bb10, or if it is a bug... I only tested on the PlayBook as I don't have a bb10 yet.
05-16-2012 04:54 PM
Yea, feels like a bug to me then. A functionality cannot be taken away in the hopes that users are necessarily up to date. Especially if an update is not available to their device.
05-17-2012 12:15 AM
I just tried again and this works for me:
import qnx.events.QNXApplicationEvent;
import qnx.system.QNXApplication;

QNXApplication.qnxApplication.addEventListener(QNXApplicationEvent.SWIPE_DOWN, showSwipeUI);
private function showSwipeUI(e:QNXApplicationEvent):void { trace("zomg swipez!"); }
05-17-2012 09:51 AM
Are you testing this on a PlayBook or using the BlackBerry Dev Alpha Simulator? Applications produced in the BB10 SDKs require BlackBerry 10+ to run. They will not run correctly on Tablet OS 2.x.
05-17-2012 09:54 AM
The code I pasted above Mark's post worked for me with the 2.0 SDK on the PlayBook and the 3.0 SDK on the Dev Alpha. I didn't try it with the 3.0 on the PlayBook for the reasons Mark stated. Just thought I'd clear that up quick
05-17-2012 10:08 AM
I'm having the same problem.
My exisitng app with SDK 2.0, swipe down works fine on PlayBook.
Updated app to SDK 3.0, running on the BB10 simulator, swipe down won't work.
05-17-2012 10:18 AM
Thanks for clarification Mark, let's hope a bb10 dev Beta will soon be available on the PlayBook, as was the case with 2.0. I'm already upgrading all my apps to bb10 ;-) | http://supportforums.blackberry.com/t5/Adobe-AIR-Development/QNXApplicationEvent-SWIPE-DOWN-not-firing-at-all-in-an-AS3/m-p/1720639/highlight/true | CC-MAIN-2016-07 | refinedweb | 514 | 68.57 |
putc() prototype
int putc(int ch, FILE* stream);
The putc() function takes an output file stream and an integer as its arguments. The integer is converted to unsigned char and written to the file.

putc() and fputc() are similar in terms of functionality. However, a major difference between fputc() and putc() is that putc() can be implemented as a macro.
It is defined in the <cstdio> header file.
putc() Parameters
- ch: The character to be written.
- stream: The file stream to write the character.
putc() Return value
- On success, the putc() function returns the written character.
- On failure, it returns EOF and sets the error indicator on stream.
Example: How putc() function works
#include <cstdio>
#include <cstring>

int main()
{
    char str[] = "Testing putc() function";
    FILE *fp;

    fp = fopen("file.txt", "w");
    if (fp)
    {
        for (size_t i = 0; i < strlen(str); i++)
        {
            putc(str[i], fp);
        }
        fclose(fp);   // close only if the file was actually opened
    }
    else
        perror("File opening failed");

    return 0;
}
When you run the program, the string "Testing putc() function" will be written to file.txt file. | https://www.programiz.com/cpp-programming/library-function/cstdio/putc | CC-MAIN-2020-16 | refinedweb | 170 | 65.52 |
Insert an output value step
Relevant for: GUI tests and components
Output values are usually best defined after creating an initial test or component.
Tips before you start
File Content output values
Source files for File Content output values must be located on the file system.
File Content output values are not supported for use with business components, or for files stored in an ALM project.
Text / Text Area output values
Ensure that you configure the required capture settings in the Text Recognition pane of the Options dialog box (Tools > Options > GUI Testing tab > Text Recognition node).
When you use text-area selection to capture text displayed in a Windows application, it is often advisable to define a text area larger than the actual text you want UFT to use as an output value.
During the run session, UFT outputs the selected text, within the defined area, according to the settings you configured.
Because text may change its position during run sessions, make sure that the area defined is large enough so that the output text is always within its boundaries. For details, see Text recognition in run-time.
XML Output Values
The XML Output Value (From Application) option is available only when the Web Add-in is installed and loaded.
You can also insert a Web page or frame output value step using the XML (From Resource) option by selecting an existing WebXML test object.
XML Output Values are compatible with namespace standards.
A difference in namespaces between nodes stored in the Output Properties dialog box XML tree and the actual values will result in a failed output value step.
For more details, see:
XML standards.
Namespace standards.
Output value object options
Insert an output value step while recording
Start a recording session before inserting an output value step. Output values can be viewed in the following recording modes:
Select the type of output value you want to add from one of the following:
The pointer changes into a pointing hand.
If necessary, select the object or object sections you want to include in the output value. The specific dialog box that opens depends on the type of output value selected.
Insert a new output value from the editor
This task is relevant for tests and scripted components only.
Make sure the object is visible in your application before inserting an output value step on it.
Right-click the step and select Insert Output Value.
If the location you clicked is associated with multiple objects, select the object or object sections you want to include from the Object Selection Dialog box.
The dialog box displayed may differ depending on the type of output value you are inserting.
Insert a new output value from the Active Screen
This task is relevant for tests and scripted components only.
Select View > Active Screen.
Make sure that the Active Screen contains sufficient data for the object for which you want to define an output value.
Click a step whose Active Screen contains the object for which you want to specify an output value.
Then, right-click the object, and select Insert Output Value.
Insert an existing output step from the Editor
This task is relevant for tests and scripted components only.
Ensure that the object is visible in your application.
Select the step after which you want to insert the output value.
Select Design > Output Value > Existing Output Value. The Add Existing Output Value Dialog Box opens, enabling you to select the test object from which you want to output values.
See also:
- Output Values in GUI Testing
- Output value types
- Storing output values
- Text recognition in run-time
- Tips for using the pointing hand
- Connect to Database Using ODBC Page (Database Query Wizard)
- Define/Modify Row Range Dialog Box
- Object Selection Dialog Box
- Output Value Properties Dialog Box
- Text Recognition Pane (Options Dialog Box > GUI Testing Tab)
- XML Source Selection - Checkpoint / Output Value Properties Dialog Box | https://admhelp.microfocus.com/uft/en/14.51/UFT_Help/Content/User_Guide/Task_Add_output_Values_Print.htm | CC-MAIN-2018-51 | refinedweb | 655 | 52.09 |
import "github.com/fluhus/godoc-tricks"
Package godoctricks is a tutorial that deals with tricks for making your godoc organized and neat. This is a compilation of tricks I've collected and couldn't find a comprehensive guide for.
Notice that this doc is written in godoc itself as package documentation. The defined types are just for making the table of contents at the head of the page; they have no meanings as types.
If you have any suggestion or comment, please feel free to open an issue on this tutorial's GitHub page!
By Amit Lavon
You can embed blocks of code in your godoc, such as this:
fmt.Println("Hello")
To do that, simply add an extra indent to your comment's text.
For example, the code of the first lines of this section looks like this:
// You can embed blocks of code in your godoc, such as this: // fmt.Println("Hello") // To do that, simply add an extra indent to your comment's text.
You can place usage examples in your godoc.
Examples should be placed in a file with a _test suffix. For example, the examples in this guide are in a file called doc_test.go .
The example functions should be called Example() for package examples, ExampleTypename() for a specific type or ExampleFuncname() for a specific function. For multiple examples for the same entity (like same function), you can add a suffix like ExampleFoo_suffix1, ExampleFoo_suffix2.
You can document an example's output, by adding an output comment at its end. The output comment must begin with "Output:", as shown below:
func ExampleExamples_output() { fmt.Println("Hello") // Output: Hello }
Notice that the tricks shown here (titles, code blocks, links, etc.) don't work in example documentation.
For full documentation of examples, see:
This function is named ExampleExamples(), this way godoc knows to associate it with the Examples type.
Code:
fmt.Println("Hello")
This function is named ExampleExamples_other(), it is associated with Examples type and has the label "Other".
Code:
fmt.Println("Hello")
This is how godoc parsed ExampleExamples_output() that was shown above.
Code:
fmt.Println("Hello")
Output:
Hello
Go code that you upload to public repositories on github appears automatically on the godoc website. Just like this tutorial! Just check in your code and watch as it appears. Use this page's URL as reference.
You can see your godoc rendered as HTML by running a local godoc server. This is great for previewing your godoc before committing changes. To do that, Make sure your code is in GOPATH and run:
godoc -http ":8080"
Go to and you should see your packages on the list.
If you want the raw HTML, you can run:
godoc -url=/pkg/your_package > your_page.html
Web addresses will automatically generate actual links in the HTML output, like this:
Methods are functions with receivers. Godoc associates methods with their receivers and attaches their documentation. See below.
Functions that construct an instance of a type (or a pointer to it) are associated with the returned type.
Methods are attached to their receiver type in the godoc, regardless of their physical location in the code.
Pointer receivers are also associated in the same way.
While there are no built-in enums in Go, you can use types and constants to mock them (documentation-wise). Take this Mock_enums type for example: if you have a constant clause where all the constants are of the same type, it will be attached to that type's godoc. See below.
const ( A Mock_enums = 1 B Mock_enums = 2 )
To start a new paragraph, add an empty line in the comment between the 2 paragraphs.
For example:
// Paragraph 1. // Still paragraph 1. // // Paragraph 2. // Still Paragraph 2.
will yield:
Paragraph 1. Still paragraph 1.
Paragraph 2. Still Paragraph 2.
You can make titles in your godoc. A title is a line that is separated from its following line by an empty line, begins with a capital letter and doesn't end with punctuation.
For example, the code:
// Sentence 1 // // Sentence 2
will yield "Sentence 1" rendered as a title, followed by the paragraph:

Sentence 2
While this code:
// Sentence 1. // // Sentence 2.
will yield:
Sentence 1.
Sentence 2.
See documentation here:
How to: Use Wizards with Project Templates
The new home for Visual Studio documentation is Visual Studio 2017 Documentation on docs.microsoft.com.
Visual Studio provides the IWizard interface that, when implemented, enables you to run custom code when a user creates a project from a template.
Project template customization can be used to display custom UI that collects user input to customize the template, add additional files to the template, or perform any other allowed action.
You start creating a custom template with the project template project, which is part of the Visual Studio SDK. In this procedure we will use a C# project template project, but there is also a Visual Basic project template project. Then you add a VSIX project to the solution that contains the project template project.
Create a C# project template project (in Visual Studio, File / New / Project / Visual C# / Extensibility / C# Project Template). Name it MyProjectTemplate.
Add a new VSIX project (File / New / Project / Visual C# / Extensibility / VSIX Project) in the same solution as the project template project (in the Solution Explorer, select the solution node, right-click, and select Add / New Project). Name it MyProjectWizard.
Set the VSIX project as the startup project. In the Solution Explorer, select the solution node, right-click, and select Set as Startup Project.
Add the template project as an asset of the VSIX project. In the Solution Explorer, under the VSIX project node, find the source.extension.vsixmanifest file. Double-click it to open it in the manifest editor.
In the manifest editor, select the Assets tab on the left side of the window.
In the Assets tab, select New. In the Add New Asset window, for the Type field, select Microsoft.VisualStudio.ProjectTemplate. In the Source field, select A project in current solution. In the Project field, select MyProjectTemplate. Then click OK.
Build the solution and start debugging. A second instance of Visual Studio appears. (This may take a few minutes.)
In the second instance of Visual Studio, try to create a new project with your new template. (File / New / Project / Visual C# / MyProject Template). The new project should appear with a class named Class1. You have now created a custom project template! Stop debugging now.
This topic shows how to create a custom wizard that opens a Windows Form before the project is created. The form allows users to add a custom parameter value that is added to the source code during project creation.
Set up the VSIX project to allow it to create an assembly.
In the Solution Explorer, select the VSIX project node. Below the Solution Explorer, you should see the Properties window. If you do not, select View / Properties Window, or press F4. In the Properties window, set the following fields to true:
IncludeAssemblyInVSIXContainer
IncludeDebugSymbolsInVSIXContainer
IncludeDebugSymbolsInLocalVSIXDeployment
Add the assembly as an asset to the VSIX project. Open the source.extension.vsixmanifest file and select the Assets tab. In the Add New Asset window, for Type select Microsoft.VisualStudio.Assembly, for Source select A project in current solution, and for Project select MyTemplateWizard.
Add the following references to the VSIX project. (In the Solution Explorer, under the VSIX project node select References, right-click, and select Add Reference.) In the Add Reference dialog, in the Framework tab, find the System.Windows Forms assembly and select it. Now select the Extensions tab. find the EnvDTE assembly and select it. Also find the Microsoft.VisualStudio.TemplateWizardInterface assembly and select it. Click OK.
Add a class for the wizard implementation to the VSIX project. (In the Solution Explorer, right-click the VSIX project node and select Add, then New Item, then Class.) Name the class WizardImplementation.
Replace the code in the WizardImplementation.cs file with the following code:
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TemplateWizard;
using System.Windows.Forms;
using EnvDTE;

namespace MyProjectWizard
{
    public class WizardImplementation : IWizard
    {
        private UserInputForm inputForm;
        private string customMessage;

        // This method is called before opening any item that
        // has the OpenInEditor attribute.
        public void BeforeOpeningFile(ProjectItem projectItem)
        {
        }

        public void ProjectFinishedGenerating(Project project)
        {
        }

        // This method is only called for item templates,
        // not for project templates.
        public void ProjectItemFinishedGenerating(ProjectItem projectItem)
        {
        }

        // This method is called after the project is created.
        public void RunFinished()
        {
        }

        public void RunStarted(object automationObject,
            Dictionary<string, string> replacementsDictionary,
            WizardRunKind runKind, object[] customParams)
        {
            try
            {
                // Display a form to the user. The form collects
                // input for the custom message.
                inputForm = new UserInputForm();
                inputForm.ShowDialog();

                customMessage = UserInputForm.CustomMessage;

                // Add custom parameters.
                replacementsDictionary.Add("$custommessage$", customMessage);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.ToString());
            }
        }

        // This method is only called for item templates,
        // not for project templates.
        public bool ShouldAddProjectItem(string filePath)
        {
            return true;
        }
    }
}
The UserInputForm referenced in this code will be implemented later.
The WizardImplementation class implements the IWizard interface. You must add the following assemblies to your project:
Now create the UserInputForm. In the WizardImplementation.cs file, add the following code after the end of the WizardImplementation class.
public partial class UserInputForm : Form
{
    private static string customMessage;
    private TextBox textBox1;
    private Button button1;

    public UserInputForm()
    {
        this.Size = new System.Drawing.Size(155, 265);

        button1 = new Button();
        button1.Location = new System.Drawing.Point(90, 25);
        button1.Size = new System.Drawing.Size(50, 25);
        button1.Click += button1_Click;
        this.Controls.Add(button1);

        textBox1 = new TextBox();
        textBox1.Location = new System.Drawing.Point(10, 25);
        textBox1.Size = new System.Drawing.Size(70, 20);
        this.Controls.Add(textBox1);
    }

    public static string CustomMessage
    {
        get { return customMessage; }
        set { customMessage = value; }
    }

    private void button1_Click(object sender, EventArgs e)
    {
        customMessage = textBox1.Text;
    }
}
The user input form provides a simple form for entering a custom parameter. The form contains a text box named
textBox1and a button named
button1. When the button is clicked, the text from the text box is stored in the
customMessageparameter.
In order for your custom project template to use your custom wizard, you need to sign the wizard assembly and add some lines to your custom project template to let it know where to find the wizard implementation when a new project is created.
Sign the assembly. In the Solution Explorer, select the VSIX project, right-click, and select Project Properties.
In the Project Properties window, select the Signing tab. In the Signing tab, check Sign the assembly. In the Choose a strong name key file field, select <New>. In the Create Strong Name Key window, in the Key file name field, type key.snk. Uncheck the Protect my key file with a password field.
In the Solution Explorer, select the VSIX project and find the Properties window.
Set the Copy Build Output to Output Directory field to true. This allows the assembly to be copied into the output directory when the solution is rebuilt. It is still contained in the .vsix file. You need to see the assembly in order to find out its signing key.
Rebuild the solution.
You can now find the key.snk file in the MyProjectWizard project directory (<your disk location>\MyProjectTemplate\MyProjectWizard\key.snk). Copy the key.snk file.
Go to the output directory and find the assembly (<your disk location>\MyProjectTemplate\MyProjectWizard\bin\Debug\MyProjectWizard.dll). Paste the key.snk file here. (This isn't absolutely necessary, but it will make the following steps easier.)
Open a command window, and change to the directory in which the assembly has been created.
Find the sn.exe signing tool. For example, on a Windows 10 64-bit operating system, a typical path would be the following:
C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools
If you can't find the tool, try running where /R . sn.exe in the command window. Make a note of the path.
Extract the public key from the key.snk file. In the command window, type
<location of sn.exe>\sn.exe -p key.snk outfile.key
Don't forget to surround the path of sn.exe with quotation marks if there are spaces in the directory names!
Get the public key token from the outfile:
<location of sn.exe>\sn.exe -t outfile.key
Again, don't forget the quotation marks. You should see a line in the output like this
Public key token is
Make a note of this value.
Add the reference to the custom wizard to the .vstemplate file of the project template. In the Solution Explorer, find the file named MyProjectTemplate.vstemplate, and open it. After the end of the <TemplateContent> section, add the following section:
<WizardExtension>
  <Assembly>MyProjectWizard, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=token</Assembly>
  <FullClassName>MyProjectWizard.WizardImplementation</FullClassName>
</WizardExtension>

Where MyProjectWizard is the name of the assembly, and token is the token you copied in the previous step.
Save all the files in the project and rebuild.
In this example, the project used as the template displays the message specified in the user input form of the custom wizard.
In the Solution Explorer, go to the MyProjectTemplate project and open Class1.cs.
In the Main method of the application, add the following line of code:

Console.WriteLine("$custommessage$");

The parameter $custommessage$ is replaced with the text entered in the user input form when a project is created from the template.
Here is the full code file before it has been exported to a template.
Now you can create a project from your template and use the custom wizard.
Rebuild the solution and start debugging. A second instance of Visual Studio should appear.
Create a new MyProjectTemplate project. (File / New / Project / Visual C# / MyProjectTemplate).
See also:
- IWizard
- Customizing Templates
- WizardExtension Element (Visual Studio Templates)
I created this code snippet, which helps us to mail-disable public folders in Exchange Server 2003 SP2 (Native mode). In the code below, I try to mail-disable the public folder named "publicfolder1". I used Visual Studio .NET 2008, C#.NET, and CDOEXM (Collaboration Data Objects for Exchange Management 2003) to do this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using CDO;
using CDOEXM;
using System.Collections;

namespace MailDisableCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                CDO.Folder objFolder = new CDO.Folder();
                CDOEXM.IMailRecipient objRecip;
                string fullurl;

                fullurl = "";

                objFolder.DataSource.Open(
                    fullurl,
                    null,
                    ADODB.ConnectModeEnum.adModeReadWrite,
                    ADODB.RecordCreateOptionsEnum.adFailIfNotExists,
                    ADODB.RecordOpenOptionsEnum.adOpenExecuteCommand,
                    "Administrator",
                    "Password");

                objRecip = (CDOEXM.IMailRecipient)objFolder;
                objRecip.MailDisable();
                objFolder.DataSource.Save();

                Console.Write("Success");
            }
            catch (Exception e1)
            {
                Console.WriteLine(e1.Message);
            }
        }
    }
}
Note:
+ To execute this code, you may try with VS.Net 2005 or VS.Net 2008, C#.Net.
+ Make sure you need to use the following references: CDOEXM – Microsoft CDO for Exchange Management 2000 & CDOEX - Microsoft CDO for Exchange 2000.
+ This code helps us to mail-disable the public folder "publicfolder1".
+ Also, make sure to pass valid credentials (username Administrator and password Password), along with a valid fullURL pointing to the public folder to be mail-disabled and the Exchange domain.
Hope this helps. Happy programming!!
Hey, i found this webcast, which can help us to use the Outlook 2007 categories and flags.
During this Support WebCast we will provide information about Microsoft Office Outlook 2007, specifically the use of categories and flags. To view the slides associated with this WebCast, please use the PowerPoint Viewer (1,911 KB).
Read the transcript from this event.
Happy learning!!
Reference info: Windows Vista Inside Out by Ed Bott, Carl Siechert, and Craig Stinson (© 2007 Microsoft Corporation. To learn more about this book, visit the Microsoft Learning website.)
ASP error code | Description
ASP 0100 | Out of memory
ASP 0101 | Unexpected error
ASP 0102 | Expecting string input
ASP 0103 | Expecting numeric input
ASP 0104 | Operation not Allowed
ASP 0105 | Index out of range
ASP 0106 | Type Mismatch
ASP 0107 | Stack Overflow
ASP 0108 | Create object failed
ASP 0109 | Member not found
ASP 0110 | Unknown name
ASP 0111 | Unknown interface
ASP 0112 | Missing parameter
ASP 0113 | Script timed out
ASP 0114 | Object not free threaded
ASP 0115
ASP 0116 | Missing close of script delimiter
ASP 0117 | Missing close of script tag
ASP 0118 | Missing close of object tag
ASP 0119 | Missing Classid or Progid attribute
ASP 0120 | Invalid Runat attribute
ASP 0121 | Invalid Scope in object tag
ASP 0122
ASP 0123 | Missing Id attribute
ASP 0124 | Missing Language attribute
ASP 0125 | Missing close of attribute
ASP 0126 | Include file not found
ASP 0127 | Missing close of HTML comment
ASP 0128 | Missing File or Virtual attribute
ASP 0129 | Unknown scripting language
ASP 0130 | Invalid File attribute
ASP 0131 | Disallowed Parent Path
ASP 0132 | Compilation Error
ASP 0133 | Invalid ClassID attribute
ASP 0134 | Invalid ProgID attribute
ASP 0135 | Cyclic Include
ASP 0136 | Invalid object instance name
ASP 0137 | Invalid Global Script
ASP 0138 | Nested Script Block
ASP 0139 | Nested Object
ASP 0140 | Page Command Out Of Order
ASP 0141 | Page Command Repeated
ASP 0142 | Thread token error
ASP 0143 | Invalid Application Name
ASP 0144 | Initialization Error
ASP 0145 | New Application Failed
ASP 0146 | New Session Failed
ASP 0147 | 500 Server Error
ASP 0148 | Server Too Busy
ASP 0149 | Application Restarting
ASP 0150 | Application Directory Error
ASP 0151 | Change Notification Error
ASP 0152 | Security Error
ASP 0153 | Thread Error
ASP 0154 | Write HTTP Header Error
ASP 0155 | Write Page Content Error
ASP 0156 | Header Error
ASP 0157 | Buffering On
ASP 0158 | Missing URL
ASP 0159 | Buffering Off
ASP 0160 | Logging Failure
ASP 0161 | Data Type Error
ASP 0162 | Cannot Modify Cookie
ASP 0163 | Invalid Comma Use
ASP 0164 | Invalid TimeOut Value
ASP 0165 | SessionID Error
ASP 0166 | Uninitialized Object
ASP 0167 | Session Initialization Error
ASP 0168 | Disallowed object use
ASP 0169 | Missing object information
ASP 0170 | Delete Session Error
ASP 0171 | Missing Path
ASP 0172 | Invalid Path
ASP 0173 | Invalid Path Character
ASP 0174 | Invalid Path Character(s)
ASP 0175 | Disallowed Path Characters
ASP 0176 | Path Not Found
ASP 0177 | Server.CreateObject Failed
ASP 0178 | Server.CreateObject Access Error
ASP 0179 | Application Initialization Error
ASP 0180
ASP 0181 | Invalid threading model
ASP 0182
ASP 0183 | Empty Cookie Key
ASP 0184 | Missing Cookie Name
ASP 0185 | Missing Default Property
ASP 0186 | Error parsing certificate
ASP 0187 | Object addition conflict
ASP 0188
ASP 0189
ASP 0190
ASP 0191
ASP 0192
ASP 0193 | OnStartPage Failed
ASP 0194 | OnEndPage Failed
ASP 0195 | Invalid Server Method Call
ASP 0196 | Cannot launch out of process component
ASP 0197
ASP 0198 | Server shutting down
ASP 0199
ASP 0200 | Out of Range 'Expires' attribute
ASP 0201 | Invalid Default Script Language
ASP 0202 | Missing Code Page
ASP 0203 | Invalid Code Page
ASP 0204 | Invalid CodePage Value
ASP 0205 | Change Notification
ASP 0206 | Cannot call BinaryRead
ASP 0207 | Cannot use Request.Form
ASP 0208 | Cannot use generic Request collection
ASP 0209 | Illegal value for TRANSACTION property
ASP 0210 | Method not implemented
ASP 0211 | Object out of scope
ASP 0212 | Cannot Clear Buffer
ASP 0214 | Invalid Path parameter
ASP 0215 | Illegal value for ENABLESESSIONSTATE property
ASP 0216 | MSDTC Service not running
ASP 0217
ASP 0218 | Missing LCID
ASP 0219 | Invalid LCID
ASP 0220 | Requests for GLOBAL.ASA Not Allowed
ASP 0221 | Invalid @ Command directive
ASP 0222 | Invalid TypeLib Specification
ASP 0223 | TypeLib Not Found
ASP 0224 | Cannot load TypeLib
ASP 0225 | Cannot wrap TypeLibs
ASP 0226 | Cannot modify StaticObjects
ASP 0227 | Server.Execute Failed
ASP 0228 | Server.Execute Error
ASP 0229 | Server.Transfer Failed
ASP 0230 | Server.Transfer Error
ASP 0231
ASP 0232 | Invalid Cookie Specification
ASP 0233 | Cannot load cookie script source
ASP 0234 | Invalid include directive
ASP 0235
ASP 0236
ASP 0237
ASP 0238 | Missing attribute value
ASP 0239 | Cannot process file
ASP 0240 | Script Engine Exception
ASP 0241 | CreateObject Exception
ASP 0242 | Query OnStartPage Interface Exception
ASP 0243 | Invalid METADATA tag in Global.asa
ASP 0244 | Cannot Enable Session State
ASP 0245 | Mixed usage of Code Page values
ASP 0246 | Too many concurrent users. Please try again later.
ASP 0247 | Bad Argument to BinaryRead.
ASP 0248 | Script isn't transacted. This ASP file must be transacted in order to use the ObjectContext object.
ASP 0249 | Cannot use IStream on Request. Cannot use IStream on Request object after using Request.Form collection or Request.BinaryRead.
ASP 0250 | Invalid Default Code Page. The default code page specified for this application is invalid.
ASP 0251 | Response Buffer Limit Exceeded. Execution of the ASP page caused the Response Buffer to exceed its configured limit.
The Exchange Server 2007 SP1 Help can help you in the day-to-day administration of Exchange. Use this information to guide you through Exchange Server 2007 SP1 features, tasks, and administration procedures. This download contains a standalone version of Microsoft Exchange Server 2007 SP1 Help.
If you’re a Windows Mobile developer, then this is for you.
Both Standard and Professional emulator images are available. Developers will still need to install Visual Studio and the Windows Mobile 6 SDK prior to running the tool kit installer.
The Windows Mobile 6.5 Developer Tool Kit adds documentation, sample code, header and library files, emulator images and tools to Visual Studio that let you build applications for Windows Mobile 6.5. This document contains important information about this package. The Windows Mobile 6 SDK must also be installed in order to use any of the Windows Mobile 6.5 Gesture API or samples. Windows Mobile 6.5 Developer Tool Kit comes with the following Emulator Images:
Available locales:
- 0804 CHS Chinese Simplified
- 0409 USA English
- 0407 GER German
- 040c FRA French
- 0410 ITA Italian
- 0c0a ESN Spanish
It is used to set error state flags. The current value of the flags is overwritten: all bits are replaced by those in state; if state is goodbit (which is zero), all error flags are cleared.
In the case that no stream buffer is associated with the stream when this function is called, the badbit flag is automatically set (no matter the value for that bit passed in argument state).
Following is the declaration for ios::clear function.
void clear (iostate state = goodbit);
state − An object of type ios_base::iostate that can take as its value any combination of the following state flag member constants:
- goodbit: no error (zero value)
- eofbit: end-of-file reached on an input operation
- failbit: the last input operation failed for a logical reason (for example, a failed format conversion)
- badbit: a serious error, such as a read/write failure on the associated buffer
Return value: none.
The example below demonstrates the ios::clear function.
#include <iostream>
#include <fstream>

int main () {
   char buffer [80];
   std::fstream myfile;

   myfile.open ("test.txt", std::fstream::in);
   myfile << "test";
   if (myfile.fail()) {
      std::cout << "Error writing to test.txt\n";
      myfile.clear();
   }

   myfile.getline (buffer, 80);
   std::cout << buffer << " successfully read from file.\n";

   return 0;
}
The java.lang.StringBuffer class comes with many helper methods to manipulate strings. One of them is the overloaded lastIndexOf() method, used to search for the last occurrence of a string in the buffer.

What the Java API says about lastIndexOf() of StringBuffer:
- public int lastIndexOf(String searchString): Returns the index of the last occurrence of searchString in the buffer. Returns -1 if searchString is not found.
- public synchronized int lastIndexOf(String searchString, int searchIndex): Returns the index of the last occurrence of searchString in the buffer, searching backward starting at searchIndex. Returns -1 if searchString is not found.
Example on lastIndexOf() of StringBuffer using both overloaded methods:
public class Demo {
    public static void main(String args[]) {
        // "hello" occurs in 3 places
        StringBuffer sb1 = new StringBuffer("123hello456hello789hello");

        // find the last occurrence of "hello"
        int x = sb1.lastIndexOf("hello");
        System.out.println("Last Occurrence of hello: " + x);

        // find the last occurrence of "hello" at or before index 9
        x = sb1.lastIndexOf("hello", 9);
        System.out.println("Occurrence of hello from index 9: " + x);

        x = sb1.lastIndexOf("world"); // "world" does not exist
        System.out.println("Occurrence of world which does not exist: " + x);
    }
}
Output of the lastIndexOf() StringBuffer example:

Last Occurrence of hello: 19
Occurrence of hello from index 9: 3
Occurrence of world which does not exist: -1
Also see the overloaded indexOf() method, which searches forward from a given starting index in the string.
Perhaps surprisingly, when you run this program, it will ask you to enter your name, but then skip waiting for you to enter it!
Hi!
I wrote the following main part of my code:
But when I try to run it, the compiler gives me the error "no match for call ….", pointing at the string line.
Then I changed the name of the string parameter:
With these names, the code is working perfectly, but the question is why?
I mean, why can’t we use the same name "choice" there, since the string choice is a variable, and choice() is a function?
If I’m not wrong, this kind of naming is valid for int and other types.
I tried “string choice = choice(number);” with both string and int and got the same result in Visual Studio 2017 -- the compiler complained that choice wasn’t a pointer to a function, which means that it’s interpreting the rightmost choice as the variable choice, not the function choice. Changing string to int didn’t make a difference for me.
The short answer is: you have a naming conflict, and the compiler can’t or won’t disambiguate. Changing the name of either the variable or the function resolves the ambiguity.
cout << "How many students do you want to enter?" << endl;
int numberOfStudents;
cin >> numberOfStudents;
string *names = new string[numberOfStudents];
for (int x = 0; x < numberOfStudents; ++x)
{
cout << "Enter the name of student number " << x + 1 << "." << endl;
getline(cin, names[x]);
}
When I run this code (it compiles fine) it prints:
Enter the name of student number 1
Enter the name of student number 2
I don’t get a chance to enter a student name for the first iteration… However, each iteration after that is fine.
When you enter the number of students and hit enter, the number of students gets extracted to the numberOfStudents variable, but the newline is still queued for extraction. Then when you go to get the first student name, it just extracts the newline.
You need to add the following line after extracting numberOfStudents:

std::cin.ignore(32767, '\n');

This will get rid of the pesky newline.
See for more information.
Can you tell me which lesson covers static_cast so that it will output a float instead of an integer?
I’m not clear on what you’re wanting to output as a float instead of an integer.
"To use strings in C++, we first need to #include the <string> header to bring in the declarations for std::string."
I noticed that this is not needed for GCC 6.3.0 (C++11), the code below compiles, links, and runs without including the <string> header.
main.cpp
Enter your full name: Peter Griffin
Enter your age: 42
Your name is Peter Griffin and your age is 42
Just a note, the code compiles & links with
g++ -Wall -std=c++98 -pedantic-errors main.cpp
i.e. after the C++98 standard as well…
Your compiler may be including the string header for you, or iostream may be including it. Regardless, it is proper form to include it yourself, as you should always explicitly include all dependencies.
From what I tested, uninitialized strings don’t carry garbage data like other variables.
Is this correct? If so, any simple explanation why?
Generally speaking, fundamental types in C++ are not initialized by default. However, non-fundamental types generally _are_ initialized by default if you do not provide an initializer. std::string is a non-fundamental type, so it gets default initialized.
Well my code was
But how do we count the letters only? With .length(), the space in the name is also counted, and if that happens our program calculates a wrong value.
For example, my name: Jack Richards. The letter count of that name is 12,
but the program counts the space between Jack and Richards and gets length 13.
So how do we get the length without the spaces?
If you wanted to remove spaces, you’d need to identify how many spaces were in the string, and subtract those from the string length. This is possible to do like this:
When I made my program for the quiz I initialized the ‘age’ variable as a double, and I did not static cast when making the division happen. Is it better practice to initialize my numerical variables as int and static cast them later than to initialize variables simply how you’d use them?
It’s better to use whatever data type best represents your data. If you’re storing only integer ages, then you should use an integer (and static_cast to a floating point if you need to do floating point division). If you need to store fractional ages (e.g. 40.5) then you should use a floating point number.
"Here are the results from a sample run of this program:
Enter your full name: John Doe
Enter your age: Your name is John and your age is Doe"
I think this should say:
Enter your full name: John Doe
Enter your age: 23
Your name is John and your age is Doe
May you please explain more about how this works: "std::cin.ignore(32767, '\n');"? What does it mean: "ignore up to 32767 characters until a '\n' is removed"?
No, the sample program is correct -- the program does not work as expected.
std::cin will remember any characters that were input that could not be extracted to a variable, so they can be subsequently extracted. If we want to get rid of those, we can use std::cin.ignore to do so. The 32767 tells the ignore function to ignore up to 32767 characters (more than we’re likely to ever need to ignore), and the ‘\n’ tells ignore to stop ignoring characters if it reaches a ‘\n’ character. Since all user input should be ‘\n’ terminated, this essentially clears out everything.
OK, I tested the code myself and I see it runs as you say. Confusing, but I think I understand. Ray gets stored in the next cin for age, so it skips reading the input from the user, and goes straight to printing the output.
OK, that explanation about ignore makes sense, I guessed that was what it meant but it was good to have that clarification.
Hi Alex,
Thank you for all your helpful tutorial!
However, I want to ask how to put and return std::string in the formula.
And I know char can be used to store a character, but not characters. Do we have anything to hold characters?
Thank you!
> However, I want to ask how to put and return std::string in the formula.
I’m not sure I understand what you’re asking here.
char is used to store a character. std::string can be used to store a collection of sequential characters.
I accidently deleted my first comment
Here is the code:
I don’t understand what is wrong with it.
when I compile it I get these errors
|11|error: no matching function for call to ‘getline(std::istream&, std::string (&)())’|
|19|error: request for member ‘length’ in ‘name’, which is of non-class type ‘std::string() {aka std::basic_string<char>()}’|
i am using codeblocks for this.
I tried your code in the solution and it worked perfectly but yours and mine seem pretty much the same to me
so what is wrong with it??
You’ve defined variable name incorrectly, and this is confusing the compiler. Instead of:

std::string name();

Do:

std::string name;
That did it! Thanks a lot!
but i still don’t understand the difference between name() and name.
name is just an identifier that we can give to a variable or function. name() means execute the function named name.
So string name() doesn’t make any sense, because we’re telling the compiler “define a new string variable and give it the identifier of the function call of the function name”. What?! Similarly, std::cout << name doesn’t make sense if name is actually a function, because what does it mean to print the value of a function’s identifier, rather than the value returned by the function call?
oh, I got it now. Thanks a lot again Alex!
Your tutorials are great.
I have a question: when mixing std::cin and std::getline, you have to remove the newline from the stream after using std::cin, by making use of std::cin.ignore. But why not avoid the issue altogether and use std::getline exclusively? Would there be a disadvantage to using only std::getline instead of std::cin?
You should use the right tool for the job at hand. If you want to read up to the end of a line (including whitespace), use std::getline. If you want to read up until the first whitespace, use std::cin and manually remove any ‘\n’ you don’t want via std::cin.ignore().
Seeing that std::cin causes all sorts of problems surely it is not the ‘right’ tool to use. On the other hand, getline() seems to be able to avoid the problems of cin, so shouldn’t it be used as pointed out by OP? Unless there’s disadvantages which I don’t know with my limited C++ knowledge.
Hi. I kind of dont get the cin.ignore() function. I have this code and when I use cin.ignore() right after entering name I have to press Enter again to evaluate to next line. Why is that?
cin.ignore has two parameters: the first is the maximum number of characters to discard from the input, and the second is the special character (called a delimiter) that will cause the function to stop early.
When you do a getline to a std::string, all of the input you type (ending with a ‘\n’) is extracted to the string. getline() also discards the ‘\n’, which is key to the next step.
Then you do cin.ignore(999, ‘\n’). This tells the program, “start discarding characters from std::cin until you hit 999 characters or a ‘\n’, whichever comes first”. If there were characters in std::cin waiting to be extracted, it would process through those first. But because getline() extracted everything, there’s no input waiting.
Because there’s no input in std::cin already, std::cin will ask the user for input. It will continue to do this until the user types 999 characters, or until you hit enter.
So, in short, cin.ignore() makes you hit ‘\n’ again because getline() discarded the ‘\n’ from the previous string.
If it had not done so, the ‘\n’ would have been left inside std::cin. Then when cin.ignore() ran, it would have extracted ‘\n’ from std::cin and discarded it. Since that matches the second parameter of ignore(), it would have stopped (not asking the user for any input).
So, in other words, cin.ignore() is useful if you know that some particular input you want to get rid of is inside std::cin, or if you want to ensure the user has to hit enter.
Thank you Alex. This totally make sense now 🙂
In the following snippet it complains about converting from unsigned int to int[uniform initialisation using the length of string function ] but it works fine with direct initialisation and copy initialisation. I was wondering why this happens??
Error 1 error C2397: conversion from ‘unsigned int’ to ‘int’ requires a narrowing conversion
I am using VSExpress 2013 and have used uniform initialisation before too.
The length() function returns an unsigned integer, and you’re using that value to initialize a signed integer. As long as the length of your string falls within the range of a signed integer, this will be fine. But if it ever does not, your code will break (as length will end up negative).
Thank you for your reply. But i was wondering how a string could have negative length. Could you give an example?
Also Thank You very much for creating/maintaining this site. It is very helpful to learn a language from home. I have recommended this site to my friends too.
Also why is it not a problem with direct initialisation and copy initialisation? how do they defer from uniform initialisation?
It should be a problem with all three. It’s just that uniform initialization is less permissive about what kind of implicit conversions it will do (which is a good thing).
A string can never have a (valid) negative length. Let’s consider a simple example where an integer is only 8 bits. In this case, an unsigned integer would have range 0 to 255, and a signed integer from -128 to 127. If the length function returned an unsigned integer between the range 128 to 255 (which is a valid string length), then when you converted that number to a signed integer, it falls outside the range of your signed integer. Your signed integer will interpret the unsigned integer as a signed integer, and the result will be negative (due to the way unsigned and signed integers are stored in binary).
Thank you Alex. This explains things
Alex please help
Not sure. It seems weird to me to have the same statement in the while condition and in the subsequent if statement condition. I think that will cause your program to skip characters.
Hi Alex
Is it possible that you slipped pointers in on us quietly without warning? The answer to my question might then be that the long long int returned by the sizeof function refers to an array(?) of pointers, and that sufficient memory for 32767 bytes would be allocated when the program was compiled, triggered by the keyword "string". Guessing that the C++ ignore() function in this case corresponds to the C free() function.
Kind of. 🙂 The address of the dynamic memory allocated by the std::string is held in a pointer. But you don’t need to worry about that at all to use a std::string. It’s just an internal detail of how std::string works.
The ignore() function doesn’t have anything to do with strings -- it has to do with ignoring one or more characters that have been buffered for input but not extracted yet. The analog in C++ to C’s free() function is the delete keyword. We’ll also cover this in chapter 6.
Hi Alex- To get a better idea of how strings work I wrote the following program:
#include <iostream>
#include <string>
int main()
{
std::cout << " Enter a string of letters: ";
std::string name;
std:: getline(std::cin, name);
std::cout << " Your string is: " << name << "\n";
std::cout << " Your string has " << name.length() << " letters. " << "\n";
std::cout << " Your string occupied " << sizeof(name) << " bytes of storage space. " << "\n";
return 0;
}
I found that no matter how many characters (up to a thousand or so) I entered, the sizeof function always returned a size of 8 bytes. I get it that the max string can be 32767 chars long. All this is well and good, but I am confused, because if each char occupies 1 byte, does the max string occupy 32767 bytes?
How is the string stored?
Garry
It’s hard to answer this question at this point, but I’ll try. There are two ways to allocate memory -- statically (which is what we’ve been doing so far), and dynamically (which we’ll cover in chapter 6). std::string uses dynamic memory to hold the chars.
sizeof() has a limitation in that it will only return the size of statically allocated memory. For this reason, a std::string will always return the same size no matter how much dynamic memory it’s allocated to hold the chars.
Alex please help
#include "stdafx.h"
#include <iostream>
#include <cctype>
#include <fstream>
using namespace std;
const int siz = 2000;
int main()
{
char ch[siz];
ifstream infile;
cout << "Enter name of data file :\n";
cin.getline(ch, siz);
infile.open(ch);
if (!infile.is_open())
{
cout << "progam terminating \n";
exit(EXIT_FAILURE);
}
char co[siz];
infile.getline(co, siz);
int count(0);
for (int x(0); siz > x; ++x)
{
if ( isalnum(co[x]))
{
++count;
}
}
cout << "number of char = " << count;
// this was supposed to count the number of chars in the text file but it cant open the text file why ??
return 0;
}
Not sure. It opened a file fine for me, and then began counting chars before it crashed. It’s probably looking for the text file in a different place than you think it is. Try either putting the text file in the same directory as the executable file, or typing in a full pathname to the file rather than a relative path.
Thanks Alex
Alex please help.
It’s crashing because you didn’t allocate an array of spr, you only allocated one.
Instead of:
It should be:
#include <string>
#include <iostream>
int main()
{
std::cout << "Enter your full name.";
std::string name;
std::cout << "Enter your age.";
int age;
int letters = name.length();
int finale = static_cast<double>(age) / letters;
std::cout << "Your name is " << name;
std::cout << "Your age is " << age;
std::cout << "You’ve lived " << finale << "years per letter.";
return 0;
}
So I tried doing this quiz without looking too much at the answer, just for the format of static_cast and name.length(). Obviously, it didn’t go well. What exactly did I do wrong and how can it be improved to do what I intended? My error list says that age is an uninitialized local variable.
You’re not too far off. The main thing is you forgot to use std::cin to allow the user to input values for name and age. This means name and age have no values (which is what the compiler is complaining about).
Another error I see is that you’re using static_cast to convert age to a double so you can do a double division, but then you’re assigning the result back to an integer variable (losing any decimal points you just worked to preserve).
#include <string>
#include <iostream>
int main()
{
std::cout << "Pick 1 or 2: ";
int choice;
std::cin >> choice;
std::cout << "Now enter your name: ";
std::string name;
std::getline(std::cin, name);
std::cout << "Hello, " << name << ", you picked " << choice << '\n';
return 0;
}
I did not understand why it will not ask us to enter the name during compilation!
Not sure what you mean. Compilation just turns your code into something you can execute. It doesn’t actually execute anything. You have to run your program after compiling it for it to actually produce results.
Note on the solution code: even though std::getline() isn’t used afterward, to keep the program extensible it’s still usually advisable to add std::cin.ignore(xxbignumberxx, '\n') at the end.
This way, when in let’s say 12 years you want to reuse the snippet in a program that uses std::getline again later, you don’t have to deal with surprises.
For the rest my version of it does the exact same thing so let’s move on to 4.4c 🙂
Hi Alex, I see that you said std::cin.ignore(32767, '\n'); is meant to ignore up to 32767 characters until a '\n' is removed. What is the significance of the number 32767? How do you know to use that number? Could you use some other number instead? Thanks.
32767 is the largest 2-byte signed integer. The first parameter to std::cin.ignore() is an object of type std::streamsize, which is an implementation-defined signed integer type. Given that it’s implementation defined, we don’t know how big it is, but it’s reasonable to assume it’s at least 2 bytes.
You could use some other number, but if you pick a number any larger you risk overflowing the integer on some machines.
It’s actually best/technically correct to #include <limits> and use std::numeric_limits<std::streamsize>::max() instead of 32767, but that’s such a ridiculously unmemorable expression. 32767 is easier.
Hi Alex ,
std::cout << "Pick 1 or 2: ";
int choice;
std::cin >> choice;
std::cout << "Now enter your name: ";
std::string u;
cin >> u;
std::cout << "Hello, " << u << ", you picked " << choice << '\n';
-----------------------------------------------
output :
Pick 1 or 2: 3
Now enter your name: asdd
Hello, asdd, you picked 3
-----------------------------------------------
Ques: Why did "cin" not take the newline character '\n' after '3'?
operator>> breaks on whitespace, so it stops extracting characters to the string when it reaches the ‘\n’.
Can I simply use "cin.ignore() after cin >>; ? I’ve used it and it seems to clear the input completely. If I am going to use cin.ignore(32767, ‘\n’), can I substitute the ‘\n’ with endl?
No, you shouldn’t just use cin.ignore() with no parameters. By default it only ignores one character. You can’t substitute ‘\n’ with endl.
“Reading inputs with both std::cin and std::getline may cause some unexpected behavior. Consider the following:
Perhaps surprisingly, when you run this program, it will ask you to pick 1 or 2, but then skip asking for your name! What happened? "
This simple inaccuracy got me stuck for a few minutes. I just couldn’t figure out why it would not ask for the name; there are no conditionals or anything like that. I even had to run it myself to make sure that’s not the case. So I believe what you actually meant here was "the program skips reading your name" or something like that?
Just thought this might help improve the article.
Otherwise, these lessons are great, thank you kindly for the good work!
Yes, thank you. I can see how this would be confusing. I’ve updated the wording accordingly.
Thanks again for these tutorials and your swift replies to some of my questions so far.
My question here is in regards to the following - "std::getline() takes two parameters: the first is std::cin, and the second is your string variable".
I would like to understand this better just for my own understanding, but at the moment am struggling. Not sure the best way to phrase this question, but could you please explain exactly what std::getline() does with the two parameters.
Thanks!
I don’t know how std::getline() is implemented internally. But the end result is that std::cin is used to read characters from the user, which are placed in the string variable passed in as the second parameter. Unlike the extraction operator (>>), which stops extracting at any whitespace character, std::getline() only breaks on newlines, allowing you to enter text with spaces or tabs in it.
Little typo: "Why happened?" was supposed to be "What happened?" after the code example of "Mixing std::cin and std::getline()"
Hah, thanks. Fixed.
While reading 7.12, I realized that I had this question since std::array: you said they’re slower than array[] (not std::array) because of the way they’re interpreted under the hood. So a similar question comes up: is std::string slower than C-style strings? Even if it’s the preferred way? If it is so, should it still be the preferred way only because it’s native C++ (did I get this right?)?
Actually, that result is shown after
straight from the console, but codeblocks shows the same result.
Hello Alex
I tried running your program where you say std::cin yields a surprise (the result was "Your name is John and your age is Doe"), and, after I enter the two names, then enter, the program displays "Enter your age: Your name is bla and bla bla", then exits. It seems to print the asking age line, but it doesn’t wait for any input, it jumps straight to the end.
I thought maybe it’s some typo from my part so I copy-pasted your example and I get the same results. I have codeblocks on archlinux x64. Is this some compiler problem? (it shows success, no warnings/errors).
std::cin breaks on whitespace, so if you enter “John Doe”, it will only extract “John” and leave “Doe” in the input stream waiting for a future extraction. If you want to get a full line of text with spaces in it, you can use std::getline() instead.
Yes, but then how did you achieve your result? That’s what I should have gotten, too, no? Just to clarify, this is the code:
Your result is this:
And here’s mine (code is copy-pasted):
I really hope it’s a quirk of codeblocks, my confidence feels threatened. 🙂
No, you are quite correct. I just made a mistake in my output. Thanks for noticing and pointing it out. I’ve updated the lesson.
Your confidence may resume. 🙂

‘and’ and ‘or’ with strings
take for instance when I try to do this ‘the’.
std::cout << "Hello, you picked " << choice << '\n';
Thank you so much for posting this tutorial online, it is amazing and I appreciate your time effort to help coding noobs like me.
My question is about the statement std::cin.ignore(32767, '\n');
what does the "32767" refer to?
Thanks, Shane
Hi, Shane M
32767 means the maximum number of characters to be discarded from the ‘cin’ stream
until the delimiter character ('\n') is found.

‘<iostream>’ and ‘<string>’ are missing in your reply to Mr. Ank above.