So basically I am storing a bunch of calendar events in a MySQL table. I store the day of the week they are supposed to happen, as well as the time. Right now the time is stored as the number of seconds from midnight, calculated in GMT. But when people log on and check the calendar from a timezone other than GMT, I'm having problems calculating which events fall within the day in their timezone. Any suggestions?
When you are dealing with timezones, you **do not** want to do offset calculations yourself, especially if you deal with units of time smaller than one day. One simple reason: daylight-saving time. Both MySQL and PHP have a multitude of date/time functions for manipulating dates. They know about daylight-saving time and other quirks of moving dates and times around, and they get it right more often than you will if you try to hand-roll a solution. At the very least, use (or create) a date object that includes the timezone. The PHP functions can be a bit tricky and are not as well documented, unfortunately, but all the capability is there. You can still use Unix timestamps as your 'canonical' format, but all code that works with them has to know or understand what timezone they are 'in'. In a past job, I had to deal with a school timetable and converting that format (term, week, day, period...) to and from real dates. We built an object that did any necessary conversions on demand and cached the results. It was an effort, but having the PHP date functions do *all* conversions and adjustments saved a lot of problems with timezones and daylight-saving time. Another caveat with calendars: you may have to distinguish between the timezones of *events* and the timezones of *users*, and decide when an event's timezone stays fixed and when it moves with the user's. Failure to handle all the nuances of this is why the calendar in Exchange/Outlook still has problems with daylight-saving changes. :-)
You can convert the current date in their timezone to GMT before doing the comparison (not necessarily within MySQL, but you can probably use [CONVERT\_TZ](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_convert-tz) for this). As for finding their timezone, the bullet-proof method is probably to ask the user and save their choice.
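To make the "store in GMT, convert only at the edges" idea concrete, here is a small language-agnostic sketch using Python's `zoneinfo` (the event time and zone are made up for illustration); the same logic applies with PHP's `DateTime`/`DateTimeZone` or MySQL's `CONVERT_TZ`:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Store the event in UTC (the 'canonical' form), convert only at the edges.
event_utc = datetime(2024, 3, 10, 7, 30, tzinfo=timezone.utc)

# Convert for display in the user's timezone; the library applies DST rules.
user_zone = ZoneInfo("America/New_York")
event_local = event_utc.astimezone(user_zone)

# 07:30 UTC lands *after* the US spring-forward jump on 2024-03-10,
# so the local wall-clock time is 03:30 EDT (UTC-4), not 02:30 EST --
# exactly the kind of adjustment you don't want to hand-roll.
print(event_local.isoformat())  # 2024-03-10T03:30:00-04:00
```

Deciding "which events fall within the user's day" then becomes: compute the user's local midnight-to-midnight window, convert its endpoints to UTC, and query your stored UTC values against that range.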
Dealing with dates
[ "", "php", "mysql", "date", "" ]
I work on two related web applications, developed in PHP, that started independently. Over time, we have started to more tightly integrate their functionality for business reasons. To accomplish this, we set up a directory for common code in a separate SVN repository so that we can deploy different versions of this code when we deploy each application separately. A text file in either application repository indicates which revision of the common code should be deployed. It has worked fairly well so far, but there are some issues and considerations that have arisen:

* All of the code that is used by either application in this common interface needs to be 'visible' to both applications. For example, if an item in application #1 is saved through application #2, then all of the supporting code, ORM classes, supporting classes, etc., need to be in the common area.
* If application #1 is making method calls to application #2's code, then it is bypassing all of application #2's startup code - authentication, framework, etc. This could lead to undesirable results and unexpected dependencies.

At this point, we prefer to keep the applications separate. What would be considered the best way for these two applications to communicate with each other? We could use HTTP communication that ensures a reliable interface and that each application handles requests through its own application processes. We worry about incurring overhead from HTTP, but the trade-off of a more loosely coupled system would probably outweigh that concern. At this time, both applications are running on the same set of servers and, as mentioned, using PHP almost exclusively.

**EDIT:** The communication will be private and authenticated within the apps, meaning no plans for a public API.
It's a tradeoff you're making; a shared code base is - all considered - simpler and more efficient, at least in the short run. An XML-RPC or plain HTTP-based API can work well, and is good for separating responsibilities, since you get explicit boundaries. For the sake of completeness, SOAP is also an option, but one I wouldn't pick unless forced to.
In my opinion, use an HTTP-based API. I've had to deal with this previously with a set of applications and common-code ended up coming back to bite me. Decouple them now if you can. A good implementation of caching data, using pub/sub instead of constant polling, etc should keep the overhead to a minimum.
How should I integrate two PHP applications - via common code, REST, RPC?
[ "", "php", "web-applications", "" ]
It's been a long running issue that I've come across in many-a-hot-and-steamy coding sessions. One person codes this way, another codes that way. So after much push and pull I'm curious... Is there any correct way of phrasing a PHP 'IF ELSE' statement? Personally I use the:

```
if ($variable == 'setvalue') {
    $variable = executefunctiononvariable($variable);
}
else {
    $variable = executedifferentfunctiononvariable($variable);
}
```

After many arguments though I've been presented with other options such as:

```
if ($variable == 'setvalue')
{
    $variable = executefunctiononvariable($variable);
}
else
{
    $variable = executedifferentfunctiononvariable($variable);
}
```

OR

```
if ($variable == 'setvalue')
    $variable = executefunctiononvariable($variable);
else
    $variable = executedifferentfunctiononvariable($variable);
```

OR

```
if ($variable == 'setvalue') {
    $variable = executefunctiononvariable($variable);
} else {
    $variable = executedifferentfunctiononvariable($variable);
}
```
I personally format my if/else like the last one:

```
if ($variable == 'setvalue') {
    $variable = executefunctiononvariable($variable);
} else {
    $variable = executedifferentfunctiononvariable($variable);
}
```

Your version is kind of a mixture of 1 and 3, in my mind. I have also worked with coders that do all of them and have never heard of a standard one. The PHP website uses the last one: <https://www.php.net/manual/en/control-structures.elseif.php> I also use the second example in some cases when the if statement will always be very short. If there's ever a possibility of it getting longer (more than one line each) I'll do #1. I try to avoid #2 when possible because it's hard to add the {} later.
I use the last one:

```
if ($variable == 'setvalue') {
    $variable = executefunctiononvariable($variable);
} else {
    $variable = executedifferentfunctiononvariable($variable);
}
```

That being said, it is pretty unimportant which one you go with; just make sure you are ***consistent***.
How to correctly format PHP 'IF ELSE' statements?
[ "", "php", "code-formatting", "readability", "if-statement", "semantics", "" ]
How can I get the HTML source for a given web address in C#?
You can download files with the [`WebClient` class](https://learn.microsoft.com/en-us/dotnet/api/system.net.webclient):

```
using System.Net;

using (WebClient client = new WebClient()) // WebClient implements IDisposable
{
    client.DownloadFile("http://yoursite.com/page.html", @"C:\localfile.html");

    // Or you can get the file content without saving it
    string htmlCode = client.DownloadString("http://yoursite.com/page.html");
}
```
Basically:

```
using System.IO;
using System.Net;

WebRequest req = WebRequest.Create("http://google.com");
req.Method = "GET";

string source;
using (StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream()))
{
    source = reader.ReadToEnd();
}

Console.WriteLine(source);
```
How can I download HTML source in C#
[ "", "c#", "" ]
I am starting a WinForms application [.NET 3.5, C#], wherein the main form of the application starts at a particular specified location. I am calling the following code in the constructor for this:

```
private void SetFormPosition()
{
    this.StartPosition = FormStartPosition.Manual;
    this.Left = Screen.PrimaryScreen.WorkingArea.Right - this.Width;
    this.Top = Screen.PrimaryScreen.WorkingArea.Bottom - this.Height;
}
```

After the application starts, I would like to keep the location of the form fixed throughout the application lifetime. Perhaps I could 'tap' the LocationChanged event, but I am not sure that would be very elegant. Please suggest. Thanks.
I agree with others that you probably shouldn't be doing this, but if you must, read on. You can override the SetBoundsCore method and prevent any movement. We use this to prevent vertical resizing on some UserControl implementations (such as those that contain a ComboBox or other fixed-height control), but it is also responsible for the location changing. The following should get you started:

```
protected override void SetBoundsCore(int x, int y, int width, int height, BoundsSpecified specified)
{
    x = this.Location.X;
    y = this.Location.Y;
    //...etc...
    base.SetBoundsCore(x, y, width, height, specified);
}
```
You could set the FormBorderStyle to None. This has the added benefit of removing the bar at the top of the window that would give users a false sense that they should be able to move the window.
Fixing the position of a form
[ "", "c#", ".net", "winforms", "" ]
I just started making a PHP/AJAX site and I was wondering where I should add the functionality to **"only include the content"**. Should I have if statements in my header and footer scripts that check for a **"ContentOnly"** GET parameter? And then append that parameter to the query strings in my AJAX calls? Or should I use a filter on the AJAX side to only grab the main content div, like in jQuery:

```
$(this).load("/myPhpPage.php #mainContentDiv")
```

The first method would be faster, right? But the second method would keep the AJAX functionality on the presentation side, and out of my PHP. Or is there a third method? I want to keep my PHP scripts intact, so that if you go to them directly, the header and footer are there. Thanks in advance!
Please don't use AJAX for the sake of using AJAX. That said, most AJAX libraries pass a `X-Requested-With` HTTP header indicating that it's an AJAX request. You can use that header to decide whether to include your header/footer etc.
Adding to what's been said: your application shouldn't be aware of the rendered view. If you are making a call from JavaScript, JavaScript should know the context in which the call was made. What this means is that the return handlers should know what they are handling during the callback. The best method I've found for these types of transactions is to package objects in JSON that describe what's being called and who called it. Then when things are returned you can append some of these properties to the returned object. For instance, if the same callback handler is used for everything, you can simply pass the reference of the context back in this returned object. Again though, don't use AJAX unless it serves a real purpose.
Should I add AJAX logic to my PHP classes/scripts?
[ "", "php", "ajax", "jquery", "html", "" ]
Here's my attempt at it:

```
$query = $database->prepare('SELECT * FROM table WHERE column LIKE "?%"');
$query->execute(array('value'));

while ($results = $query->fetch())
{
    echo $results['column'];
}
```
Figured it out right after I posted:

```
$query = $database->prepare('SELECT * FROM table WHERE column LIKE ?');
$query->execute(array('value%'));

while ($results = $query->fetch())
{
    echo $results['column'];
}
```
For those using named parameters, here's how to use `LIKE` with `%` partial matching for **MySQL databases**: ``` WHERE column_name LIKE CONCAT('%', :dangerousstring, '%') ``` where the named parameter is `:dangerousstring`. In other words, use explicitly unescaped `%` signs in your own query that are separated and definitely not the user input. **Edit:** Concatenation syntax for **Oracle databases** uses the concatenation operator: `||`, so it'll simply become: ``` WHERE column_name LIKE '%' || :dangerousstring || '%' ``` However there are caveats as @bobince mentions [here](https://stackoverflow.com/questions/4015198/how-to-do-like-search-with-pdo) that: > The > [difficulty](https://stackoverflow.com/questions/2106207/escape-sql-like-value-for-postgres-with-psycopg2/2106443#2106443) > comes when you want to allow a literal `%` or `_` character in the > search string, without having it act as a wildcard. So that's something else to watch out for when combining like and parameterization.
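The principle is the same in any parameterized API: the wildcard belongs in the bound value (or is concatenated server-side), never spliced into the SQL string alongside user input. A language-agnostic sketch using Python's `sqlite3` (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t (col) VALUES (?)",
                 [("value1",), ("other",), ("50% off",)])

# Prefix match: the % lives in the bound value, not in the SQL text.
rows = conn.execute("SELECT col FROM t WHERE col LIKE ?",
                    ("value%",)).fetchall()
print(rows)  # [('value1',)]

# If the user's input may itself contain % or _, escape them and
# declare the escape character so they match literally.
needle = "50%"
escaped = needle.replace("%", r"\%").replace("_", r"\_")
rows2 = conn.execute("SELECT col FROM t WHERE col LIKE ? ESCAPE '\\'",
                     ("%" + escaped + "%",)).fetchall()
print(rows2)  # [('50% off',)]
```

The `ESCAPE` clause is the standard answer to the `%`/`_` caveat quoted above: user-supplied wildcards are neutralized while your own wildcards still work.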
How do I create a PDO parameterized query with a LIKE statement?
[ "", "php", "pdo", "" ]
I'm new to LINQ. I've used LINQ to SQL to link to two tables, and it does return data, which is cool. What I'm trying to understand is what datatype is being returned and how do I work with this datatype? I'm used to dealing with datatables. Are we throwing out datatables (and all the other ADO.NET objects like rows, datasets, etc.) now if using LINQ? If so, what are we replacing them with, and how can I use it to do everything I did before with datatables? Also--does it make sense to replace datatables? Was there a deficiency with them? Here is some code:

```
protected IEnumerable<string> GetMarketCodes()
{
    LINQOmniDataContext db = new LINQOmniDataContext();

    var mcodes = from p in db.lkpMarketCodes
                 orderby 0
                 select p;

    return (IEnumerable<string>) mcodes;
}
```

This code does currently return data (I can see it in debug), but errors at the "return" line, because apparently my datatype is not IEnumerable, which was my best guess. So, one thing I'd like to understand as well is what datatype my data is being put into and how to return it to the calling function.
It is returning an `IQueryable<lkpMarketCode>`, assuming that that `lkpMarketCode` is the type of data in `db.lkpMarketCodes`. If you want the strings, you need to `select p.SomeProperty;`, not just `select p;`. You shouldn't need to cast (since `IQueryable<T>` implements `IEnumerable<T>`); it should also tell you this if you hover on `mcodes`.
I find it more convenient to return `List<>`s so I know what I'm dealing with. So your code would be:

```
protected List<string> GetMarketCodes()
{
    LINQOmniDataContext db = new LINQOmniDataContext();

    var mcodes = from p in db.lkpMarketCodes
                 orderby 0
                 select p.SomeProperty;

    return mcodes.ToList();
}
```

Having said that, I've hardly used LINQ to SQL, so there are probably better ways around.
How to deal with datatypes returned by LINQ
[ "", "c#", "asp.net", "linq", "ado.net", "" ]
I would like to call a method from a DLL, but I have neither the source nor the header file. I tried using dumpbin /exports to see the name of the method, but I couldn't find the method's signature. Is there any way to call this method? Thanks,
It is possible to figure out a C function signature by analysing the beginning of its disassembly. The function arguments will be on the stack, and the function will do some "pops" to read them in reverse order. You will not find the argument names, but you should be able to find out their number and types. Things may get more difficult with the return value - it may come back via the 'eax' register or via a special pointer passed to the function as the last pseudo-argument (on the top of the stack).
If the function is a C++ one, you may be able to derive the function signature from the mangled name. [Dependency Walker](http://www.dependencywalker.com/) is one tool that will do this for you. However, if the DLL was created with C linkage (Dependency Walker will tell you this), then you are out of luck.
Call function in c++ dll without header
[ "", "c++", "dll", "signature", "method-signature", "dumpbin", "" ]
I have a URL like <http://www.example.com/blah/th.html>. I need a JavaScript function to give me the 'th' value from that. All my URLs have the same format (2-letter filenames, with .html extension). I want it to be a safe function, so if someone passes in an empty URL it doesn't break. I know how to check for length, but I should be checking for null too, right?
```
var filename = url.split('/').pop()
```
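If you want the null/empty safety the question asks about, you can wrap that one-liner in a guard. A small sketch (the function name and the extension-stripping are just for illustration):

```javascript
function filenameFromUrl(url) {
  // Return '' for null/undefined/non-string/empty input instead of throwing
  if (typeof url !== "string" || url.length === 0) {
    return "";
  }
  // Strip any fragment and query string, then take the last path segment
  const last = url.split("#")[0].split("?")[0].split("/").pop();
  // Drop the ".html" (or any) extension if one is present
  const dot = last.lastIndexOf(".");
  return dot > 0 ? last.slice(0, dot) : last;
}

console.log(filenameFromUrl("http://www.example.com/blah/th.html")); // "th"
console.log(filenameFromUrl(""));                                    // ""
```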
Why so difficult?

```
= url.split('#')[0].split('?')[0].split('/').pop();
```

The RegEx versions below would normally give the same result as the above, but will return an empty string if the URL is significantly malformed.

```
= (url.match(/^\w+:(\/+([^\/#?\s]+)){2,}(#|\?|$)/)||[])[2]||''; // Returns empty string for relative URLs unlike the original approach

= (url.match(/^\w+:(\/+([^\/#?\s]+)){2,}/)||[])[2]||''; // Ignores trailing slash (e.g., ".../posts/?a#b" results in "posts")
```

---

All three of them would return `file.name` from:

* `my://host/path=&%dir///file.name?q=1/2&3=&#a/?b//?`

The third one would also return `dir.name` from:

* `my://host/path=&%dir///dir.name/?not/these/#/ones//?`

---

**First Edit** (Sep 2023)**:**

* Changed the processing order because RFC 3986 explicitly allows fragments to have unencoded slashes. Now it handles things properly even when the query ("search" in JS) includes unencoded slashes (which obviously should have been encoded).
* This means it should properly handle any form of URL that most browsers can recognize.
* A RegEx method was added which includes a very basic validity test. Similar concept to the original one but using a different approach.

---

Try to test **your** URL by entering it below. The third one ignores the trailing slash as described in the code above. Please comment if the RegEx ones don't properly handle any valid URL.

```
<style>*{margin:0;padding:0;box-sizing:border-box;border:none;}label{font-size:90%}input{display:block;margin:.2em auto 0;padding:.8em 1.2em;width:calc(100% - 5px);background:#f8f8f8;border-radius:1em;}p{margin-top:1.4em;padding-left:.5em;font-size:95%;color:#0008}p::after{content:attr(title);margin-left:.5em;padding:.4em .2em;background:#fe8;border-radius:.5em;font-weight:800;color:#000;}</style>
<label for="input">Enter your URL here:</label>
<input value="my://host/path=&%dir///file.name?q=1/2&3=&#a/?b//?" id="input">
<p>A</p>
<p>B</p>
<p>C</p>
<script>
((input = document.querySelector('input'), output = document.querySelectorAll('p')) => {
  const getFilename = _ => {
    output[0].title = input.value.split('#')[0].split('?')[0].split('/').pop();
    output[1].title = (input.value.match(/^\w+:(\/+([^\/#?\s]+)){2,}(#|\?|$)/)||[])[2]||'';
    output[2].title = (input.value.match(/^\w+:(\/+([^\/#?\s]+)){2,}/)||[])[2]||'';
  };
  getFilename();
  input.addEventListener('input', getFilename);
  input.select();
})();
</script>
```
js function to get filename from url
[ "", "javascript", "function", "" ]
I'm a student of webdevelopment and I've got a question about the business (after school, finding a job and such). Now, my college has taught me several things and I've learned several things on my own. We constantly use XHTML 1.1, CSS 2.1, Javascript (mostly formchecking) and PHP, all our code must be W3C compliant (the online validators), or we don't even get a grade. On my own I've messed around with jQuery a bit and next year we'll be getting ASP.NET and LINQ. I've also seen MySQL to a decent degree, but I wouldn't consider myself a specialist. The thing is, while I love coding websites/applications and finding out new stuff and ways to do so, my skills at designing/"making it look pretty" are beyond bad. What I'm wondering is, in the business, how does this affect job-chances? I mean, give me a PSD file and the necessary images and I'll give you your working site, no problem. I'll document my code and everything. But to come up with the look&feel on my own... I find it so damn hard to decide which color to use for something, should the hover effect look like this or that, should I have a little arrow appear when I do this, etc etc etc... Is it expected of a developer to be a designer at the same time? Or do some companies separate it while others don't? The thing is, I have no experience with companies whatsoever. I have no idea how they implement the creation of their websites. Because as students, we don't get a team of designers who tell us what to make, we have to do everything ourselves. Luckily the dean has already said design won't influence grades.
Don't worry about design. It comes next. What you do is actually more important; functionality always supersedes visual presence. I don't believe you will have any trouble finding a job due to lack of design skills. They'll hire somebody else to whip up the graphics. I also believe it's common knowledge that the vast majority of good programmers are bad designers. Hell, just look at the design of [Jon Skeet's web site](http://www.yoda.arachsys.com/csharp/)! What more evidence do you need?! Best regards...
I would describe you as a Web Developer. Site layout falls into the graphic design realm. This isn't always the case of course. There are some talented people who are good at both, but they are rare. I would argue these types of people could be masters at either, but not both. It sounds like you are talented at Web Development. Stick with it. Pick up what you can when it comes to graphic design and site layout. It will help you out when you need to work with a graphic designer. During an interview for a Software Engineer, if I was asked about graphic design it would raise a red flag.
Separating webdevelopment from webdesign
[ "", "php", "" ]
I'm trying to open a new form from within an event handler in the main form in a C# program. The event handler gets called properly and it opens the new form, but the new form is frozen and doesn't even initially populate. I can create a button on the main form and have the new form created after the button is clicked, but it is not working properly when done from the event handler. The event handler doesn't need to know the results of anything done on the form it creates - it just needs to create it and get out of the way. What do I need to do? The new form needs to operate independently of the main form. Here's where I define the event handler:

```
ibclient.RealTimeBar += new EventHandler<RealTimeBarEventArgs>(ibclient_RealTimeBar);
```

Here's the event handler code:

```
void ibclient_RealTimeBar(object sender, RealTimeBarEventArgs e)
{
    FancyForm a_fancy_form = new FancyForm();
    a_fancy_form.Show();
}
```

Creating a new form via a button click works fine:

```
private void button7_Click(object sender, EventArgs e)
{
    FancyForm a_fancy_form = new FancyForm();
    a_fancy_form.Show();
}
```
Can you post the event handler code? Also, is the event being raised in a separate thread than the main UI? Edit: Not sure what the real-time bar does, but try checking for InvokeRequired on your form so you can create the secondary form on the same thread as the main UI:

```
void ibclient_RealTimeBar(object sender, RealTimeBarEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.Invoke((Action)(() => ShowFancyForm()));
        return;
    }
    ShowFancyForm();
}

FancyForm a_fancy_form;

private void ShowFancyForm()
{
    if (null != a_fancy_form) return;
    a_fancy_form = new FancyForm();
    a_fancy_form.Show();
}
```

Of course this uses some dirty shortcuts and assumes 3.5, but you can modify it to your needs. Also, I moved the FancyForm declaration outside of the scope of the method; again, adjust to your needs.
I had exactly the same problem; I used Quintin's code and it works fine now. I made some changes in order to work with Framework 2.0. Here is what I did: First, I created a delegate pointing at the method that opens the form:

```
public delegate void pantallazo();

pantallazo obj = new pantallazo(this.ShowFancyForm);
```

The method that opens the form is the same one provided by Quintin:

```
smatiCliente a_fancy_form; // smatiCliente is the name of my new form class...

private void ShowFancyForm()
{
    if (null != a_fancy_form) return;
    a_fancy_form = new smatiCliente();
    this.Hide();
    a_fancy_form.Show();
}
```

And inside my program's event handler, I made some simple changes:

```
if (this.InvokeRequired)
{
    this.Invoke(obj);
    return;
}
ShowFancyForm();
```

And that's it, it works great now. The Invoke method executes the appropriate delegate, so the form is now created under the main UI. Hope it works, and thanks a lot Quintin!
Opening New Form From Event Handler in Main Form Hangs App
[ "", "c#", "winforms", "" ]
I'm trying to write a stored procedure that will return two calculated values for each record according to the rules below, but I haven't figured out how to structure the SQL to make it happen. I'm using SQL Server 2008. First, the relevant tables, and the fields that matter to the problem.

ProductionRuns

```
RunID (key, and RunID is given to the stored proc as its parameter)
ContainerName
ProductName
TemplateID
```

TemplateMeasurements

```
MeasurementTypeID
TemplateID
```

SimpleBounds

```
MeasurementTypeID
TemplateID
UpperBound
LowerBound
```

ContainerBounds

```
MeasurementTypeID
TemplateID
UpperBound
LowerBound
ContainerName
```

ProductBounds

```
MeasurementTypeID
TemplateID
UpperBound
LowerBound
ProductName
```

And this is what I'm trying to return. I want to return a calculated upper bound and lower bound value for each TemplateMeasurements record that has a matching TemplateID with the ProductionRuns record that has the supplied RunID. The calculated upper and lower bounds basically get the tightest bound that can be obtained as a result of the simple, container and product bounds, if they qualify. If a SimpleBounds record exists with the correct MeasurementTypeID and TemplateID, then that becomes one of the qualifying bounds for a particular MeasurementTypeID and record of TemplateMeasurements. For a ContainerBounds record to qualify, the TemplateID and MeasurementTypeID must match, but also the ContainerName must match the value for ContainerName in the ProductionRuns record. The same is true for ProductBounds, but with ProductName. For a particular MeasurementTypeID, take all the qualifying bounds and find the least upper bound, and that will be the calculated upper bound that is to be returned. Find the greatest lower bound of the qualifiers and that will be the returned lower bound. I have no idea how to put together SQL to do this, however.
Also, if none of the three bound tables qualify for a particular MeasurementTypeID, then null could be returned. My thought would be some kind of left outer join, but I'm not sure how to extend that to three tables that could all have null in the results. Thanks for the help.
I don't have time to test this right now, but hopefully this will get you pretty close:

```
SELECT
    PR.RunID,
    PR.TemplateID,
    CASE
        WHEN MAX(SB.LowerBound) > MAX(CB.LowerBound)
         AND MAX(SB.LowerBound) > MAX(PB.LowerBound) THEN MAX(SB.LowerBound)
        WHEN MAX(CB.LowerBound) > MAX(PB.LowerBound) THEN MAX(CB.LowerBound)
        ELSE MAX(PB.LowerBound)
    END AS LowerBound,
    CASE
        WHEN MIN(SB.UpperBound) < MIN(CB.UpperBound)
         AND MIN(SB.UpperBound) < MIN(PB.UpperBound) THEN MIN(SB.UpperBound)
        WHEN MIN(CB.UpperBound) < MIN(PB.UpperBound) THEN MIN(CB.UpperBound)
        ELSE MIN(PB.UpperBound)
    END AS UpperBound
FROM ProductionRuns PR
INNER JOIN TemplateMeasurements TM ON TM.TemplateID = PR.TemplateID
LEFT OUTER JOIN SimpleBounds SB
    ON SB.TemplateID = PR.TemplateID
    AND SB.MeasurementTypeID = TM.MeasurementTypeID
LEFT OUTER JOIN ContainerBounds CB
    ON CB.TemplateID = PR.TemplateID
    AND CB.MeasurementTypeID = TM.MeasurementTypeID
    AND CB.ContainerName = PR.ContainerName
LEFT OUTER JOIN ProductBounds PB
    ON PB.TemplateID = PR.TemplateID
    AND PB.MeasurementTypeID = TM.MeasurementTypeID
    AND PB.ProductName = PR.ProductName
GROUP BY PR.RunID, PR.TemplateID
```
Not to take away from Tom H.'s answer, but you might also consider approaching this problem with unions instead of joins to help split up the different upper/lower rules. It depends on how you think the queries will need to change (if at all) in the future. The query ends up looking cleaner, especially without all the CASE rules, but it might not be as useful in cases when TemplateMeasurements rows don't exist.

```
SELECT RunID, TemplateID, MIN(UpperBound), MAX(LowerBound)
FROM (
    SELECT PR.RunID, SB.TemplateID, SB.UpperBound, SB.LowerBound
    FROM SimpleBounds SB
    INNER JOIN TemplateMeasurements TM
        ON SB.TemplateID = TM.TemplateID
        AND SB.MeasurementTypeID = TM.MeasurementTypeID
    INNER JOIN ProductionRuns PR ON TM.TemplateID = PR.TemplateID

    UNION

    SELECT PR.RunID, CB.TemplateID, CB.UpperBound, CB.LowerBound
    FROM ContainerBounds CB
    INNER JOIN TemplateMeasurements TM
        ON CB.TemplateID = TM.TemplateID
        AND CB.MeasurementTypeID = TM.MeasurementTypeID
    INNER JOIN ProductionRuns PR
        ON TM.TemplateID = PR.TemplateID
        AND CB.ContainerName = PR.ContainerName

    UNION

    SELECT PR.RunID, PB.TemplateID, PB.UpperBound, PB.LowerBound
    FROM ProductBounds PB
    INNER JOIN TemplateMeasurements TM
        ON PB.TemplateID = TM.TemplateID
        AND PB.MeasurementTypeID = TM.MeasurementTypeID
    INNER JOIN ProductionRuns PR
        ON TM.TemplateID = PR.TemplateID
        AND PB.ProductName = PR.ProductName
) AS AllBounds
GROUP BY RunID, TemplateID
```
Finding the T-SQL to return these values
[ "", "sql", "sql-server", "t-sql", "join", "" ]
What is the correct (modern) method for tapping into the window resize event that works in Firefox, [WebKit](http://en.wikipedia.org/wiki/WebKit), and Internet Explorer? And can you turn both scrollbars on/off?
jQuery has a [built-in method](http://api.jquery.com/resize/) for this:

```
$(window).resize(function () { /* do something */ });
```

For the sake of UI responsiveness, you might consider using a setTimeout to call your code only after some number of milliseconds, as shown in the following example, inspired by [this](http://snipplr.com/view/6284/jquery--window-on-resize-event/):

```
function doSomething() {
    alert("I'm done resizing for the moment");
}

var resizeTimer;
$(window).resize(function () {
    clearTimeout(resizeTimer);
    resizeTimer = setTimeout(doSomething, 100);
});
```
```
$(window).bind('resize', function () {
    alert('resize');
});
```
Cross-browser window resize event - JavaScript / jQuery
[ "", "javascript", "jquery", "cross-browser", "resize", "" ]
I don't really get it: what's the purpose of the newline character? If I do this:

```
<?php
echo "This is a test. \n";
echo "This is another test.";
?>
```

the code results in both sentences being on the same line. Why doesn't the \n cause the second sentence to be on a second line? The sentences are each on their own line if I do:

```
<?php
echo "This is a test. <br>";
echo "This is another test.";
?>
```

But I have also seen people do this:

```
<?php
echo "This is a test. <br>\n";
echo "This is another test.";
?>
```

which essentially results in the same output as the second code snippet. Someone care to explain this?
The [HTML standard](http://www.w3.org/TR/html401/struct/text.html#h-9.1) treats a line break as just another white space character, which is why the `<br>` tag exists. Note however a line break will work within a `<pre>` tag, or an element with the [white-space:pre](https://developer.mozilla.org/en/CSS/white-space) CSS style. The third example is just to make "pretty" HTML: it makes it easier to "view source" and check it by eye. Otherwise, you have a big long string of HTML.
As you have observed, there are different ways to create a new line.

## **`<br />`**

This is not a newline character; it is an **XHTML** tag, which means it works in XHTML. Strictly speaking it is not a newline character, but the tag makes sure one is inserted - it forces a line break. The closing slash is mandatory. [XHTML specs](http://www.w3.org/TR/xhtml1/#diffs)

---

## **`<br>`**

This is an HTML tag which forces a line break. A closing tag is prohibited. [HTML 4.1 specs](http://www.w3.org/TR/1999/REC-html401-19991224/struct/text.html#h-9.3.2)

---

## **`\n`**

This is an escape sequence for the ASCII newline character LF. A common problem is the use of '\n' when communicating using an Internet protocol that mandates the use of ASCII CR+LF for ending lines. Writing '\n' to a text mode stream works correctly on Windows systems, but produces only LF on Unix, and something completely different on more exotic systems. Using "\r\n" in binary mode is slightly better, as it works on many ASCII-compatible systems, but still fails in the general case. One approach is to use binary mode and specify the numeric values of the control sequence directly, "\x0D\x0A". [read more](http://en.wikipedia.org/wiki/Newline)

---

## **PHP\_EOL**

This is a PHP constant which is replaced by the correct system-dependent newline.

So the message is: use everything in its right place.
What good is new line character?
[ "", "php", "html", "" ]
I am fairly new to using separate layers for the business logic (Domain) and database access logic, but in the course of working things out I've come across a problem to which I still feel I haven't found a great solution. **Clarification** My existing solution uses Data Mappers to deal with the database interactions directly. However, as I've further investigated this issue many people have suggested that the Domain layer should not directly communicate with nor contain the Data Mappers that actually perform the database interaction. This is why I placed the Repository objects between the Domain and the necessary Data Mappers but this doesn't feel quite natural or correct. So the real question is what layer naturally exists to handle communication between the Domain and the Data Mappers? Any examples of how to structure it would be appreciated. For example: * How do I properly handle retrieving a collection of domain objects within the context of another domain object? * How do I force the insertion of a single domain object or collection of objects based on an action performed against another object. The case I'm facing currently is that when a Person is attached to a Campaign, then I need to insert all of the Events that need to be executed for that Person for that Campaign.
Gabriel, this is called the "[impedance matching problem](http://visualwikipedia.com/en/Object-relational_impedance_mismatch)." There are many solutions around, from heavyweight ones like J2EE entity beans to Ruby ActiveRecord to simply coding a hand connection.

### Update

Okay, well, it's hard to see exactly how to attack this without a lot more information, but here's the basic approach. Any of these sorts of architectural issues are driven by non-functional requirements like performance; in addition, there is a correctness issue here, in that you want to make sure updates are done in the correct order. So, you're going to need to think about the *workload*, which is to say the pattern of usage in the real-world application. With that in mind, you basically have a couple of issues: first, the base data types in your application may not map correctly to the database (e.g., what's a VARCHAR property represented as in your code?), and second, your domain model may not map cleanly to your database model. What you would like is to have the database and the domain model work out so that one instance of a domain object is exactly a row of a table in your database model; in large-scale applications you can rarely do this, because of either performance constraints or constraints imposed by a pre-existing database model. Now, if you completely control your database model, it simplifies things somewhat, because then you can make your database model more closely resemble the domain. This might mean the database model is somewhat denormalized, but if so, you can (depending on your database) handle that with views, or just not have a completely normalized database. Normalization is a useful theoretical construct, but that doesn't mean you can't relax it in a real system. If you *don't* completely control your database model, then you need a layer of objects that make the mapping.
You've got a bunch of options to choose from in implementing that: you can build views or denormalized tables in the database, you can build intermediate objects, or you can do some of both, or even have several steps of both (i.e., an intermediate object that accesses a denormalized table). At that point, though, you run into issues with "don't repeat yourself" and "do the simplest thing that will possibly work." Think about what is most likely to change. Your domain model? If you've got a strong domain model, that's less likely; the business changes relatively rarely. The exact representation of data in the database? A little more common. Or, most commonly, the exact patterns of use (like discovering a need to handle concurrent updates). So, when you think about that, what do you need to do to make it as easy as possible to deal with the most common changes? I realize this isn't giving you very precise instructions, but I don't think we can offer precise instructions without knowing a whole lot about your application. But then I also kind of get the impression you're wondering about what the "right" way of handling this would be, while you are already working with something that more or less does the job. So, I'd end up by asking "what are you unhappy with now?" and "how would you like to solve that?"
There is a distinction between a domain model and the implementation of it. Just because your model shows a relationship `Person ---> Campaign ---> Event` does not mean that you have to implement it in this way. IOW, your model shows your analysis and design in an object-oriented way, yet you implement that model in OOP, which is limited in how well it can replicate that model in code. Consider the following. A `Person` is not defined by its ownership of a `Campaign`, so campaign can be left out of its knowledge responsibilities. On the other hand, a `Campaign` is defined by the `Event`s that occur as part of its execution, so it is fair to have a collection of events within a campaign. The point that I am making is that each class should have just enough behaviour and knowledge to make it whole. As for communication between the domain and the persistence layers, consider them as two very distinct systems, neither concerned with the other. All each of them knows is what its responsibilities are and what announcements it makes. For example, the persistence layer knows how to persist data passed to it and to announce that data have been saved. However, the persistence layer does not necessarily need to understand the domain objects. Similarly, the domain layer understands `Person`, `Campaign`, and `Event` but knows nothing about persistence. The implication of the above is that the domain layer needs to be a whole by itself and should not be dependent on the persistence layer for its data. However, it still needs to be supplied with data to perform its responsibilities. That data can come from either the user interface or the database and is passed to it via a third party that knows about both domain and persistence layers. So, in code (pseudo-C#)...
```
namespace DomainLayer
{
    interface IDomainListener
    {
        void PersonCreated(Person person);
    }

    class Person
    {
        private string name;
        public Person(string name) { this.name = name; }
        public string Name { get { return name; } }
    }

    class Domain
    {
        private IDomainListener listener;
        public Domain(IDomainListener listener) { this.listener = listener; }

        public void CreatePerson(string name)
        {
            Person person = new Person(name);
            listener.PersonCreated(person);
        }
    }
}

namespace PersistenceLayer
{
    interface IPersistenceListener
    {
        void DataSaved(int id, object data);
    }

    class Persistence
    {
        private IPersistenceListener listener;
        public Persistence(IPersistenceListener listener) { this.listener = listener; }

        public void SaveData(object data)
        {
            int id = ...; // save data and return identifier
            listener.DataSaved(id, data);
        }
    }
}

namespace MyApplication
{
    class MyController : IDomainListener, IPersistenceListener
    {
        public void CreatePersonButton_Clicked()
        {
            Domain domain = new Domain(this);
            domain.CreatePerson(NameTextbox.Text);
        }

        public void PersonCreated(Person person)
        {
            Persistence persistence = new Persistence(this);
            persistence.SaveData(person.Name);
        }

        public void DataSaved(int id, object data)
        {
            // display data on UI
        }
    }
}
```

As you can see, the namespaces represent the different tiers. The `XYZListener` interfaces define the announcements that are made by the `XYZ` tier. Any other tiers that are interested in these announcements and will respond to them need to implement these interfaces, as does our `MyApplication` tier. When the "create button" is clicked, the controller creates the `Domain` facade object for the domain layer and registers itself as a listener. It then calls the `CreatePerson` method, which instantiates a `Person` and then announces that this has been done, passing the new instance. The controller responds to this announcement in the `PersonCreated` implementation, where it spawns a facade of the persistence layer and registers itself as the listener again.
It then calls the `SaveData` method, which announces `DataSaved` when completed. The implementation of that method then displays the data on the UI. As you can see, the domain layer and the persistence layer are each aware of only themselves and are not concerned with the responsibilities of the other. It is the application logic, manifested here as the controller, that wires the two together. Back to your specific problem: you could have a method `FindPerson` on the persistence layer, which would announce `PersonFound(int id)`. The response by the controller would be to call the persistence layer to retrieve data about the campaign and events, then call the domain layer with that data to build the `Person`. Sorry for the long answer...
How To Handle Communication Between the Domain and Database Layers?
[ "", "php", "oop", "dns", "database-abstraction", "" ]
Is there a way in Eclipse to view all recent changes of a project? Like in SVN, you can just go "view log" on a right-click menu of a folder. In CVS/Eclipse, I can only view history on a certain file. I am not able to see what files were changed in the last commit. >.<'' Thank you!
I don't think it's supported through the standard Eclipse plugin. There was a [sourceforge project](http://cvschangelog.sourceforge.net/) that tried to augment this functionality, not sure if it's still actively being maintained.
You could do diff based on time. See what changes have been committed during last couple days, for example. It's rather awkward in Eclipse, but you can go into CVS Repositories view, click "Dates", enter the "from" date, and then navigate to project you want to find changes, choose "Compare With" and choose "to" date. And then you will see changes that happened during that time.
How to view recent CVS project changes in Eclipse?
[ "", "java", "eclipse", "svn", "version-control", "cvs", "" ]
I've been programming for a while but It's been mostly Java and C#. I've never actually had to manage memory on my own. I recently began programming in C++ and I'm a little confused as to when I should store things on the stack and when to store them on the heap. My understanding is that variables which are accessed very frequently should be stored on the stack and objects, rarely used variables, and large data structures should all be stored on the heap. Is this correct or am I incorrect?
No, the difference between stack and heap isn't performance. It's lifespan: any local variable inside a function (anything you do not malloc() or new) lives on the stack. It goes away when you return from the function. If you want something to live longer than the function that declared it, you must allocate it on the heap. ``` class Thingy; Thingy* foo( ) { int a; // this int lives on the stack Thingy B; // this thingy lives on the stack and will be deleted when we return from foo Thingy *pointerToB = &B; // this points to an address on the stack Thingy *pointerToC = new Thingy(); // this makes a Thingy on the heap. // pointerToC contains its address. // this is safe: C lives on the heap and outlives foo(). // Whoever you pass this to must remember to delete it! return pointerToC; // this is NOT SAFE: B lives on the stack and will be deleted when foo() returns. // whoever uses this returned pointer will probably cause a crash! return pointerToB; } ``` For a clearer understanding of what the stack is, come at it from the other end -- rather than try to understand what the stack does in terms of a high level language, look up "call stack" and "calling convention" and see what the machine really does when you call a function. Computer memory is just a series of addresses; "heap" and "stack" are inventions of the compiler.
I would say: Store it on the stack, if you CAN. Store it on the heap, if you NEED TO. Therefore, prefer the stack to the heap. Some possible reasons that you can't store something on the stack are: * It's too big - on multithreaded programs on 32-bit OS, the stack has a small and fixed (at thread-creation time at least) size (typically just a few megs. This is so that you can create lots of threads without exhausting address space. For 64-bit programs, or single threaded (Linux anyway) programs, this is not a major issue. Under 32-bit Linux, single threaded programs usually use dynamic stacks which can keep growing until they reach the top of the heap. * You need to access it outside the scope of the original stack frame - this is really the main reason. It is possible, with sensible compilers, to allocate non-fixed size objects on the heap (usually arrays whose size is not known at compile time).
Proper stack and heap usage in C++?
[ "", "c++", "memory-management", "stack", "heap-memory", "c++-faq", "" ]
I have `MyClass`, which stores two integers, and I have a `Vector<MyClass>` called `myVector`. Now, I do this:

```
...
myVector.add(new MyClass(1,1));
for(MyClass m : myVector)
    System.out.println(m);
System.out.println(myVector.size());
myVector.remove(new MyClass(1,1));
for(MyClass m : myVector)
    System.out.println(m);
System.out.println(myVector.size());
...
```

The problem here is that the object isn't being removed, as I see when I print the Vector and its size. How could I fix that? **Edit:** I can see that it isn't finding the object, and I've tested with `contains()` to be sure. What I need is to compare it by value. If I could overload the `==` operator I could do it, but I have no idea how to fix this. **Edit 2:** Okay, `equals()` will do what I want it to. But I'm not sure what to put in the hashCode method. **Edit 3:** I can find it with `contains()`, but `remove()` doesn't remove it. I can use this to remove it, though:

```
int position = myVector.indexOf(new MyClass(1,1));
myVector.remove(position);
```

which is the same as `remove(new MyClass(1,1))`, except the above code works and `remove()` doesn't. Any thoughts?
There are a few errors in [`grammar31`'s answer;](https://stackoverflow.com/questions/588503/removing-a-custom-object-from-a-java-vector/588522#588522) fixed up here:

```
@Override
public boolean equals(Object o) {
    if ((o == null) || (o.getClass() != MyClass.class)) {
        return false;
    }
    MyClass other = (MyClass) o;
    return (other.a == this.a) && (other.b == this.b);
}

@Override
public int hashCode() {
    return a * 31 + b;
}
```

This assumes `MyClass` has `int` members `a` and `b`.
You have two choices:

* implement `equals()` for MyClass, or
* remove the same object instance you added

Your code modified to remove the same instance that you added:

```
MyClass myClass = new MyClass(1, 1);
myVector.add(myClass);
for (MyClass m : myVector)
    System.out.println(m);
System.out.println(myVector.size());
myVector.remove(myClass);
for (MyClass m : myVector)
    System.out.println(m);
System.out.println(myVector.size());
```

I see that others gave a sample implementation for `equals()` -- but you have to decide: is that what you want? It very well may be. You get to pick which of the above two choices is right for this object. As others said, if you implement `equals()` then you **must** implement `hashCode()`. Eclipse has a nice right-click option to implement both `equals()` and `hashCode()`. I have seen code problems where someone implemented only `equals()`, so although this seems like a theoretical issue, it's really not.
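Pulling the pieces of this thread together, here is a self-contained sketch (class and file names are illustrative) showing that `Vector.remove(Object)` starts working as soon as `equals()`/`hashCode()` are overridden, because `remove(Object)` locates the element via `equals()`:

```java
import java.util.Vector;

public class RemoveDemo {
    static class MyClass {
        final int a, b;

        MyClass(int a, int b) { this.a = a; this.b = b; }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof MyClass)) return false;
            MyClass other = (MyClass) o;
            return other.a == a && other.b == b;
        }

        @Override
        public int hashCode() { return 31 * a + b; }
    }

    public static void main(String[] args) {
        Vector<MyClass> myVector = new Vector<MyClass>();
        myVector.add(new MyClass(1, 1));

        // Vector.remove(Object) calls equals(), so a value-equal
        // instance is now enough to find and remove the element.
        boolean removed = myVector.remove(new MyClass(1, 1));
        System.out.println(removed + " " + myVector.size()); // true 0
    }
}
```

Without the `equals()` override, `remove(new MyClass(1, 1))` would fall back to reference equality and return `false`, which is exactly the behaviour described in the question.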
Java: Removing a custom object from a Vector
[ "", "java", "collections", "vector", "" ]
I have a loop written in C++ which is executed for each element of a big integer array. Inside the loop, I mask some bits of the integer and then find the min and max values. I heard that if I use SSE instructions for these operations it will run much faster compared to a normal loop written using bitwise AND , and if-else conditions. My question is should I go for these SSE instructions? Also, what happens if my code runs on a different processor? Will it still work or these instructions are processor specific?
1. SSE instructions are processor specific. You can look up which processor supports which SSE version on Wikipedia.
2. Whether SSE code will be faster depends on many factors. The first is of course whether the problem is memory-bound or CPU-bound. If the memory bus is the bottleneck, SSE will not help much. Try simplifying your integer calculations; if that makes the code faster, it's probably CPU-bound, and you have a good chance of speeding it up.
3. Be aware that writing SIMD code is a lot harder than writing C++ code, and that the resulting code is much harder to change. Always keep the C++ code up to date; you'll want it as a comment and to check the correctness of your assembler code.
4. Think about using a library like the IPP, which implements common low-level SIMD operations optimized for various processors.
SIMD, of which SSE is an example, allows you to do the same operation on multiple chunks of data. So, you won't get any advantage from using SSE as a straight replacement for the integer operations; you will only get advantages if you can do the operations on multiple data items at once. This involves loading some data values that are contiguous in memory, doing the required processing, and then stepping to the next set of values in the array. Problems:

1. If the code path is dependent on the data being processed, SIMD becomes much harder to implement. For example:

```
a = array [index];
a &= mask;
a >>= shift;
if (a < somevalue)
{
  a += 2;
  array [index] = a;
}
++index;
```

is not easy to do as SIMD:

```
a1 = array [index]
a2 = array [index+1]
a3 = array [index+2]
a4 = array [index+3]
a1 &= mask
a2 &= mask
a3 &= mask
a4 &= mask
a1 >>= shift
a2 >>= shift
a3 >>= shift
a4 >>= shift
if (a1<somevalue)
if (a2<somevalue)
if (a3<somevalue)
if (a4<somevalue)
  // help! can't conditionally perform this on each column, all columns must do the same thing
index += 4
```

2. If the data is not contiguous, then loading the data into the SIMD instructions is cumbersome.
3. The code is processor specific. SSE is only on IA32 (Intel/AMD), and not all IA32 CPUs support SSE.

You need to analyse the algorithm and the data to see if it can be SSE'd, and that requires knowing how SSE works. There's plenty of documentation on Intel's website.
Using SSE instructions
[ "", "c++", "optimization", "assembly", "processor", "sse", "" ]
I'm currently involved in developing a product (developed in C#) that'll be available for downloading and installing for free but in a very limited version. To get access to all the features the user has to pay a license fee and receive a key. That key will then be entered into the application to "unlock" the full version. As using a license key like that is kind of usual I'm wondering : 1. How's that usually solved? 2. How can I generate the key and how can it be validated by the application? 3. How can I also avoid having a key getting published on the Internet and used by others that haven't paid the license (a key that basically isn't "theirs"). I guess I should also tie the key to the version of application somehow so it'll be possible to charge for new keys in feature versions. Anything else I should think about in this scenario?
Caveat: you can't prevent users from pirating, but only make it easier for honest users to do the right thing. Assuming you don't want to do a special build for each user, then:

* Generate yourself a secret key for the product
* Take the user's name
* Concatenate the user's name and the secret key and hash with (for example) SHA1
* Unpack the SHA1 hash as an alphanumeric string. This is the individual user's "Product Key"
* Within the program, do the same hash, and compare with the product key. If equal, OK.

But, I repeat: **this won't prevent piracy**

---

I have recently read that this approach is not cryptographically very sound. But this solution is already weak (**as the software itself has to include the secret key somewhere**), so I don't think this discovery invalidates the solution as far as it goes. Just thought I really ought to mention this, though; if you're planning to derive something else from this, beware.
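A rough sketch of the bullet-point scheme above in Python (the names, secret, and alphabet are illustrative; HMAC is used in place of plain concatenate-and-hash, since it is the sturdier construction, and the easily confused characters 0/O and 1/I/l are left out of the alphabet to ease phone dictation):

```python
import hashlib
import hmac

SECRET = b"example-vendor-secret"  # illustrative; ships inside the program
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # no 0/O, 1/I/l

def make_key(user_name: str, length: int = 20) -> str:
    """Derive a product key from the user's name and the vendor secret."""
    digest = hmac.new(SECRET, user_name.encode("utf-8"), hashlib.sha1).digest()
    chars = [ALPHABET[b % len(ALPHABET)] for b in digest[:length]]
    # Group into blocks of five for readability: XXXXX-XXXXX-XXXXX-XXXXX
    return "-".join("".join(chars[i:i + 5]) for i in range(0, length, 5))

def check_key(user_name: str, key: str) -> bool:
    """The product recomputes the key and compares in constant time."""
    return hmac.compare_digest(make_key(user_name), key)

key = make_key("Alice Example")
assert check_key("Alice Example", key)
assert not check_key("Someone Else", key)
```

As the answer stresses, the secret has to ship inside the program, so this only deters casual sharing; it does not prevent piracy.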
There are many ways to generate license keys, but very few of those ways are truly secure. And it's a pity, because for companies, license keys have almost the same value as real cash. Ideally, you would want your license keys to have the following properties:

1. Only your company should be able to generate license keys for your products, even if someone completely reverse engineers your products (which WILL happen, I speak from experience). Obfuscating the algorithm or hiding an encryption key within your software is really out of the question if you are serious about controlling licensing. If your product is successful, someone will make a key generator in a matter of days from release.
2. A license key should be usable on only one computer (or at least you should be able to control this very tightly).
3. A license key should be short and easy to type or dictate over the phone. You don't want every customer calling technical support because they don't understand whether the key contains an "l" or a "1". Your support department would thank you for this, and you will have lower costs in this area.

So how do you solve these challenges?

1. The answer is simple but technically challenging: digital signatures using public key cryptography. Your license keys should in fact be signed "documents", containing some useful data, signed with your company's private key. The signatures should be part of the license key. The product should validate the license keys with the corresponding public key. This way, even if someone has full access to your product's logic, they cannot generate license keys because they don't have the private key. A license key would look like this: BASE32(CONCAT(DATA, PRIVATE\_KEY\_ENCRYPTED(HASH(DATA)))) The biggest challenge here is that the classical public key algorithms have large signature sizes. A 1024-bit RSA key produces a 1024-bit signature. You don't want your license keys to have hundreds of characters.
One of the most powerful approaches is to use elliptic curve cryptography (with careful implementations to avoid the existing patents). ECC keys are something like 6 times shorter than RSA keys, for the same strength. You can further reduce the signature sizes using algorithms like the Schnorr digital signature algorithm (patent expired in 2008 - good :) )
2. This is achievable by product activation (Windows is a good example). Basically, for a customer with a valid license key, you need to generate some "activation data" which is a signed message embedding the computer's hardware id as the signed data. This is usually done over the internet, but only ONCE: the product sends the license key and the computer hardware id to an activation server, and the activation server sends back the signed message (which can also be made short and easy to dictate over the phone). From that moment on, the product does not check the license key at startup, but the activation data, which needs the computer to be the same in order to validate (otherwise, the DATA would be different and the digital signature would not validate). Note that checking the activation data does not require verification over the Internet: it is sufficient to verify the digital signature of the activation data with the public key already embedded in the product.
3. Well, just eliminate redundant characters like "1", "l", "0", "o" from your keys. Split the license key string into groups of characters.
How to generate and validate a software license key?
[ "", "c#", "license-key", "" ]
I'm running [JSLint](http://www.jslint.com)'s [Rhino version](http://www.jslint.com/rhino/index.html) from the Ubuntu command line like so: ``` $ rhino jslint.js myScript.js ``` While the web interface offers various options, I couldn't figure out how to invoke those via the command line. Am I overlooking anything in the documentation?
Yes! You did miss it. You can specify the options for jslint at the top of your .js file. See [the doc page](http://www.jslint.com/lint.html) and read the options section for an example.
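For instance, an options comment of this shape at the top of the file (the exact option set here is illustrative; check the doc page linked above for the options your JSLint version supports):

```javascript
/*jslint browser: true, white: true, undef: true */
/*global jQuery */
```

JSLint reads these special comments before linting, so the same options you would tick in the web interface apply when running the Rhino version from the command line.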
My [jslint4java](https://github.com/happygiraffe/jslint4java) project includes a command line tool that allows you to add in flags like `--undef`. This may be more useful than running it by hand.
JSLint (CLI): options?
[ "", "javascript", "jslint", "lint", "" ]
I am trying to calculate the time a function takes to complete in my program. My approach is to get the system time when I call the function and again when the function returns a value; subtracting the two values gives the time it took to complete. If anyone can tell me a better approach, or just how to get the system time at an instant, it would be quite a help.
Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. For finding out how long the system has been running, you'd want to access one of the "high resolution performance counters". If you don't use a performance counter, you are usually limited to microsecond accuracy (or worse), which is almost useless in profiling the speed of a function. In Windows, you can access the counter via the `QueryPerformanceCounter()` function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call `QueryPerformanceFrequency()`. If you're coding under a platform other than Windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.

***Edit (clarification)***

This is C++; just include windows.h and import "Kernel32.lib" (the site seems to have removed my hyperlink; check out the documentation at: <http://msdn.microsoft.com/en-us/library/ms644904.aspx>). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
The approach I use when timing my code is the time() function. It returns a single numeric value representing the [epoch](http://en.wikipedia.org/wiki/Unix_time), which makes the subtraction part easier for calculation. Relevant code:

```
#include <ctime>
#include <iostream>

int main (int argc, char *argv[])
{
    time_t startTime, endTime, totalTime;

    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);

    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";

    return 0;
}
```

Keep in mind this is wall-clock time with a resolution of whole seconds, so it is only useful for fairly long-running code. For CPU time, see Ben's reply.
How to get system time in C++?
[ "", "c++", "" ]
I'm trying to build a project in Visual Studio 2008, and I'm getting a bunch of linker errors that are really bothering me. My application is a Win32 console application using only native ANSI C++. The errors all follow the same pattern: they relate to every single private static data member of the classes I have defined in my own header files. I'm guessing this is probably a simple fact of C++ I'm not already aware of? Example: I refer to the members of SingleDelay within the definitions of SingleDelay's member functions in a file Delays.cpp, i.e.:

```
void SingleDelay::tick(void *output, void *input, int nBufferFrames)
{
    //.. code here
    x = dry * castInput + wet * castInput;
}
```

Error 38 error LNK2001: unresolved external symbol "private: static double SingleDelay::dry" (?dry@SingleDelay@@0NA) Delays.obj testall

Definition of SingleDelay in Delays.h:

```
class SingleDelay {
private:
    static double dry;              //% of dry signal
    static double wet;              //% of wet signal
    static unsigned int delay;      //Delay in milliseconds
    static int delayCell;           //Index in the delayBuffer of the delay to add
    static double *delayBuffer;     //Delay buffer is 1 second long at sample rate SAMPLE_RATE
    static unsigned int bufferCell; //Pointer to the current delay buffer cell

public:
    //Tick function
    static void tick(void *output, void *input, int nBufferFrames);

    //Set and Get functions
    static void setSingleDelay(double tDry, double tWet, unsigned int tDelay);
    static void setSingleDelay(void);
    static void setDry(double tDry);
    static void setWet(double tWet);
    static void setDelay(unsigned int tDelay);
    static double getDry(){ return dry; }
    static double getWet(){ return wet; }
    static unsigned int getDelay(){ return delay; }

    static void initializeDelayBuffer(){
        destroyDelayBuffer();
        delayBuffer = new double[bufferLength];
    }
    static void destroyDelayBuffer(){
        delete[] delayBuffer;
    }
};
```
1. Maybe you haven't added the library and include paths of the library you use to the project definitions?
2. C++ errors are always fun to look at. Or not. In any case, do you initialize your static variables anywhere? You need to do this in a .cpp file somewhere. And remember to use static variables with care. They are actually global variables in disguise, and can make future changes, such as multi-threading, more difficult.
> They are all linker errors of the same pattern. Linker errors are related to every single private static data member of classes I have defined in my own header files. **All** static data members **must** have a definition in a .cpp file *somewhere*. > Error 38 error LNK2001: unresolved external symbol "private: static double SingleDelay::dry" (?dry@SingleDelay@@0NA) Delays.obj testall The linker is telling you that there is no defined storage for that variable. This line **must** appear somewhere in exactly one .cpp file: ``` double SingleDelay::dry = 0.0; ```
Linker errors with private members of class in header file
[ "", "c++", "include", "header", "linker", "" ]
I'm spending a lot of time learning how OOP is implemented in JavaScript as in ECMA-262 of 1999. Now I want to know whether the new JS 2.0 will arrive soon and whether I am studying uselessly, because in the new version OOP may be implemented in the classical way (as in Java etc.), with interfaces, generics, and other features of classical languages. So should I stop and wait?
Javascript is a dynamically typed script language that uses prototype based inheritance. It is exactly these attributes that differentiate it from Java, C# and make it so applicable for web development in particular. Why would anyone want or need to turn it into another Java or C#, they already do that job quite well. So no, learning Javascript now is very worthwhile. Learning Javascript more deeply has actually helped me to better understand dynamic languages in general (coming from C#,C++) and even has some [Functional aspects](https://stackoverflow.com/questions/145053/javascript-as-a-functional-language) to get to grips with.
JavaScript 2 is dead. There'll be ECMAScript 3.1, which will feature mostly clarifications, security enhancements and library updates, and ECMAScript Harmony, the replacement for ECMAScript 4 (aka JavaScript 2). A lot of the things planned for ES4 are no longer under discussion for Harmony, though.
javascript 2.0 specs
[ "", "javascript", "" ]
I am going crazy. I have tried everything (urlencode, HTML encode), but urlrewriting.net is reading the Arabic query string as ?? characters. I totally appreciate the help.
After long, tedious trials, my advice to anyone who is using URL rewriting with IIS 7 in integrated mode: stop wasting your time, and either use MVC routing or forget about Arabic strings in URLs.
If you have .NET 3.5 SP1 installed, I'd recommend you to switch to `System.Web.Routing` instead of the UrlRewriting.net library. I've recently switched from UrlRewriting.net to `System.Web.Routing` myself and routing is so much better thought out and better integrated with the ASP.NET pipeline. If you have the opportunity to switch, I'd recommend it. If not, please post your UrlRewriting.net configuration file so we can tweak the regular expressions that apparently aren't working. Please post some examples of URIs that don't work too. Also, what's the values defined in the `web.config` section: ``` <system.web> <globalization requestEncoding="utf-8" responseEncoding="utf-8" fileEncoding="utf-8" /> </system.web> ``` Like in the above example, use UTF-8. Also be sure to encode the characters as UTF-8 in the URI.
UrlRewriting.net not accepting Arabic query string
[ "", "c#", "asp.net", "arabic", "" ]
I'm having trouble getting this `<div>` to automatically update using jQuery. It's essentially a chat feature that would have the page refresh every two seconds. I've tried several variances of this with no luck, so completely rewriting it differently is more than welcome. **jQuery code**: ``` function updateChat() { $.get("chat_data.php", function(data) { $("div#chattable").html(data); }); window.setTimeout("updateChat();", 2000); } $(document).ready(function() { updateChat(); }); ``` Here's the `<div>` code, which gets the data from `chat_data.php`. This should be updating: ``` <div id="chattable"> <?php include("js/chat_data.php"); ?> </div> ``` **chat\_data.php**: ``` <?php foreach($query->result() as $row): echo "<div class='chatrow'>"; echo "<div class='chattime'>".date("[M.d] g:ia",strtotime($row->time))."</div>"; echo "<div class='chatnamematch'>[".$row->name."]</div>"; echo "<div class='chatbody'>".$row->body."</div>"; echo "</div>"; endforeach; ?> ```
The error that you get, `Call to a member function result() on a non-object in /var/www/js/chat_data.php on line 1`, means that the variable `$query` is not being set correctly. You said you were using CodeIgniter; if so, which folder is the file `ajax_data.php` located in? Application/controllers?
Everything looks like it should work. Does the URL in $.get need to be "js/chat\_data.php" instead of "chat\_data.php"? Also I would take the php include out of the div because it's just going to load via AJAX anyway.
jQuery: Auto-Refreshing a `<div>`
[ "", "php", "jquery", "codeigniter", "" ]
I've got a problem with my terrain engine (using DirectX). I'm using a vector to hold the vertices of a detail block. When the block increases in detail, so does the vector. BUT when the block decreases in detail, the vector doesn't shrink in size. So, my question: is there a way to shrink the size of a vector? I did try this: ``` vertexvector.reserve(16); ```
The usual trick is to swap with an empty vector: ``` vector<vertex>(vertexvector.begin(), vertexvector.end()).swap(vertexvector); ```
If you pop elements from a vector, it does not free memory (because that would invalidate iterators into the container elements). You can copy the vector to a new vector, and then swap that with the original. That will then make it not waste space. The Swap has constant time complexity, because a swap must not invalidate iterators to elements of the vectors swapped: So it has to just exchange the internal buffer pointers. ``` vector<vertex>(a).swap(a); ``` It is known as the "Shrink-to-fit" idiom. Incidentally, the next C++ version includes a "shrink\_to\_fit()" member function for std::vector.
Shrinking a vector
[ "", "c++", "stl", "vector", "" ]
I was wondering, why can't I overload '=' in C#? Can I get a better explanation?
Memory managed languages usually work with references rather than objects. When you define a class and its members you are defining the object behavior, but when you create a variable you are working with references to those objects. Now, the operator = is applied to references, not objects. When you assign a reference to another you are actually making the receiving reference point to the same object that the other reference is. ``` Type var1 = new Type(); Type var2 = new Type(); var2 = var1; ``` In the code above, two objects are created on the heap, one referred by var1 and the other by var2. Now the last statement makes the var2 reference point to the same object that var1 is referring. After that line, the garbage collector can free the second object and there is only one object in memory. In the whole process, no operation is applied to the objects themselves. Going back to why = cannot be overloaded, the system implementation is the only sensible thing you can do with references. You can overload operations that are applied to the objects, but not to references.
If you overloaded '=', you would never be able to change an object reference after it has been created. Think about it: any call to `theObjectWithOverloadedOperator = something` inside the overloaded operator would result in another call to the overloaded operator... so what would the overloaded operator really be doing? Maybe setting some other properties, or setting the value to a new object (immutability)? Generally not what '=' implies. You can, however, override the implicit and explicit cast operators: <http://www.blackwasp.co.uk/CSharpConversionOverload.aspx>
Why can '=' not be overloaded in C#?
[ "", "c#", "operator-overloading", "" ]
I've just started learning C#, and in the introduction to arrays they showed how to establish a variable as an array, but it seems that one must specify the length of the array at assignment. So what if I don't know the length of the array?
Arrays must be assigned a length. To allow for any number of elements, use the [`List`](http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx) class. For example: ``` List<int> myInts = new List<int>(); myInts.Add(5); myInts.Add(10); myInts.Add(11); myInts.Count // = 3 ```
Use `List<>` to build up an 'array' of unknown length. Use `List<>.ToArray()` to return a real array, and not a `List`. ``` var list = new List<int>(); list.Add(1); list.Add(2); list.Add(3); var array = list.ToArray(); ```
Array of an unknown length in C#
[ "", "c#", "arrays", "" ]
I have a file: Base.h ``` class Base; class DerivedA : public Base; class DerivedB : public Base; /*etc...*/ ``` and another file: BaseFactory.h ``` #include "Base.h" class BaseFactory { public: BaseFactory(const string &sClassName){msClassName = sClassName;}; Base * Create() { if(msClassName == "DerivedA") { return new DerivedA(); } else if(msClassName == "DerivedB") { return new DerivedB(); } else if(/*etc...*/) { /*etc...*/ } }; private: string msClassName; }; /*etc.*/ ``` Is there a way to somehow convert this string to an actual type (class), so that BaseFactory wouldn't have to know all the possible Derived classes, and have if() for each one of them? Can I produce a class from this string? I think this can be done in C# through Reflection. Is there something similar in C++?
Nope, there is none, unless you do the mapping yourself. C++ has no mechanism to create objects whose types are determined at runtime. You can use a map to do that mapping yourself, though: ``` template<typename T> Base * createInstance() { return new T; } typedef std::map<std::string, Base*(*)()> map_type; map_type map; map["DerivedA"] = &createInstance<DerivedA>; map["DerivedB"] = &createInstance<DerivedB>; ``` And then you can do ``` return map[some_string](); ``` Getting a new instance. Another idea is to have the types register themself: ``` // in base.hpp: template<typename T> Base * createT() { return new T; } struct BaseFactory { typedef std::map<std::string, Base*(*)()> map_type; static Base * createInstance(std::string const& s) { map_type::iterator it = getMap()->find(s); if(it == getMap()->end()) return 0; return it->second(); } protected: static map_type * getMap() { // never delete'ed. (exist until program termination) // because we can't guarantee correct destruction order if(!map) { map = new map_type; } return map; } private: static map_type * map; }; template<typename T> struct DerivedRegister : BaseFactory { DerivedRegister(std::string const& s) { getMap()->insert(std::make_pair(s, &createT<T>)); } }; // in derivedb.hpp class DerivedB { ...; private: static DerivedRegister<DerivedB> reg; }; // in derivedb.cpp: DerivedRegister<DerivedB> DerivedB::reg("DerivedB"); ``` You could decide to create a macro for the registration ``` #define REGISTER_DEC_TYPE(NAME) \ static DerivedRegister<NAME> reg #define REGISTER_DEF_TYPE(NAME) \ DerivedRegister<NAME> NAME::reg(#NAME) ``` I'm sure there are better names for those two though. Another thing which probably makes sense to use here is `shared_ptr`. If you have a set of unrelated types that have no common base-class, you can give the function pointer a return type of `boost::variant<A, B, C, D, ...>` instead. 
For example, if you have classes Foo, Bar and Baz, it looks like this: ``` typedef boost::variant<Foo, Bar, Baz> variant_type; template<typename T> variant_type createInstance() { return variant_type(T()); } typedef std::map<std::string, variant_type (*)()> map_type; ``` A `boost::variant` is like a union. It knows which type is stored in it by looking at which object was used for initializing or assigning to it. Have a look at its documentation [here](http://www.boost.org/doc/libs/1_38_0/doc/html/variant.html). Finally, the use of a raw function pointer is also a bit oldish. Modern C++ code should be decoupled from specific functions / types. You may want to look into [`Boost.Function`](http://www.boost.org/doc/libs/1_38_0/doc/html/function.html) for a better way. It would look like this then (the map): ``` typedef std::map<std::string, boost::function<variant_type()> > map_type; ``` `std::function` will be available in the next version of C++ too, including `std::shared_ptr`.
No there isn't. My preferred solution to this problem is to create a dictionary which maps name to creation method. Classes that want to be created like this then register a creation method with the dictionary. This is discussed in some detail in the [GoF patterns book](http://en.wikipedia.org/wiki/Design_Patterns_(book)).
Is there a way to instantiate objects from a string holding their class name?
[ "", "c++", "inheritance", "factory", "instantiation", "" ]
[Project Euler](http://en.wikipedia.org/wiki/Project_Euler) and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With Python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`. What is a good way to profile how long a Python program takes to run?
**Python includes a profiler called [`cProfile`](https://docs.python.org/3/library/profile.html#module-cProfile).** It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations. You can call it from within your code, or from the interpreter, like this: ``` import cProfile cProfile.run('foo()') ``` Even more usefully, you can invoke cProfile when running a script: ``` python -m cProfile myscript.py ``` Or when running a module: ``` python -m cProfile -m mymodule ``` To make it even easier, I made a little batch file called 'profile.bat': ``` python -m cProfile %1 ``` So all I have to do is run: ``` profile euler048.py ``` And I get this: ``` 1007 function calls in 0.061 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.061 0.061 <string>:1(<module>) 1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>) 1 0.005 0.005 0.061 0.061 euler048.py:2(<module>) 1 0.000 0.000 0.061 0.061 {execfile} 1 0.002 0.002 0.053 0.053 {map} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler objects} 1 0.000 0.000 0.000 0.000 {range} 1 0.003 0.003 0.003 0.003 {sum} ``` EDIT: Updated link to a good video resource from PyCon 2013 titled [***Python Profiling***](https://web.archive.org/web/20170318204046/http://lanyrd.com/2013/pycon/scdywg/) [Also via YouTube](https://www.youtube.com/watch?v=QJwVYlDzAXs).
A while ago I made [`pycallgraph`](https://pycallgraph.readthedocs.io/en/master/) which generates a visualisation from your Python code. **Edit:** I've updated the example to work with 3.3, the latest release as of this writing. After a `pip install pycallgraph` and installing [GraphViz](http://www.graphviz.org/) you can run it from the command line: ``` pycallgraph graphviz -- ./mypythonscript.py ``` Or, you can profile particular parts of your code: ``` from pycallgraph import PyCallGraph from pycallgraph.output import GraphvizOutput with PyCallGraph(output=GraphvizOutput()): code_to_profile() ``` Either of these will generate a `pycallgraph.png` file similar to the image below: ![enter image description here](https://i.stack.imgur.com/aiNEA.png)
How do I profile a Python script?
[ "", "python", "performance", "optimization", "time-complexity", "profiling", "" ]
Question is as stated in the title: What are the performance implications of marking methods / properties as virtual? Note - I'm assuming the virtual methods will *not* be overridden in the common case; I'll usually be working with the base class here.
Virtual functions only have a very small performance overhead compared to direct calls. At a low level, you're basically looking at an array lookup to get a function pointer, and then a call via a function pointer. Modern CPUs can even predict indirect function calls reasonably well in their branch predictors, so they generally won't hurt modern CPU pipelines too badly. At the assembly level, a virtual function call translates to something like the following, where `I` is an arbitrary immediate value. ``` MOV EAX, [EBP + I] ; Move pointer to class instance into register MOV EBX, [EAX] ; Move vtbl pointer into register. CALL [EBX + I] ; Call function ``` Vs. the following for a direct function call: ``` CALL I ; Call function directly ``` The real overhead comes in that virtual functions can't be inlined, for the most part. (They can be in JIT languages if the VM realizes they're always going to the same address anyhow.) Besides the speedup you get from inlining itself, inlining enables several other optimizations such as constant folding, because the caller can know how the callee works internally. For functions that are large enough not to be inlined anyhow, the performance hit will likely be negligible. For very small functions that might be inlined, that's when you need to be careful about virtual functions. Another thing to keep in mind is that all programs require flow control, and this is never free. What would replace your virtual function? A switch statement? A series of if statements? These are still branches that may be unpredictable. Furthermore, given an N-way branch, a series of if statements will find the proper path in O(N), while a virtual function will find it in O(1). The switch statement may be O(N) or O(1) depending on whether it is optimized to a jump table.
Rico Mariani outlines issues regarding performance in his [Performance Tidbits blog](http://blogs.msdn.com/ricom/archive/2004/08/24/219751.aspx), where he stated: > **Virtual Methods:** Are you using > virtual methods when direct calls > would do? Many times people go with > virtual methods to allow for future > extensibility. Extensibility is a > good thing but it does come at a price > – make sure your full extensibility > story is worked out and that your use > of virtual functions is actually going > to get you to where you need to be. > For instance, sometimes people think > through the call site issues but then > don’t consider how the “extended” > objects are going to be created. > Later they realize that (most of) the > virtual functions didn’t help at all > and they needed an entirely different > model to get the “extended” objects > into the system. > > **Sealing:** Sealing can be a way of > limiting the polymorphism of your > class to just those sites where > polymorphism is needed. If you will > fully control the type then sealing > can be a great thing for performance > as it enables direct calls and > inlining. Basically the argument against virtual methods is that it disallows the code to be a candidate of in-lining, as opposed to direct calls. In the MSDN article [Improving .NET Application Performance and Scalability](http://msdn.microsoft.com/en-us/library/ms998547.aspx), this is further expounded: > **Consider the Tradeoffs of Virtual Members** > > Use virtual members to provide extensibility. If you do not need to extend your class > design, avoid virtual members because they are more expensive to call due to a virtual > table lookup and they defeat certain run-time performance optimizations. For example, virtual members cannot be inlined by the compiler. 
Additionally, when you allow subtyping, you actually present a very complex contract to consumers and you inevitably end up with versioning problems when you attempt to upgrade your class in the future. A criticism of the above, however, comes from the TDD/BDD camp (who wants methods defaulting to virtual) arguing that the performance impact is negligible anyway, especially as we get access to much faster machines.
What are the performance implications of marking methods / properties as virtual?
[ "", "c#", "performance", "virtual", "" ]
I have a formview with an insertion template. In this template there is a drop down list with a number of items I want users to be able to select from. Beside the drop down list there is a button which I am using to add the selected item from the drop down list to a gridview which also exists in the insertion template. My problem is that when I click the button to add the selected item from the drop down list, the selected item, index or value from the drop down list are not available. I am using an OnClick event handler to catch the event from the button click, but I suspect there is some kind of refresh of the template going on here which I am not understanding, as nothing appears to be accessible from the button event handler. I don't believe a postback is occurring, as I have disabled the CausesValidation property for my button.
So it turns it out it was all my fault. The formview control I have is contained in a panel which did not have view state enabled. This was preventing the drop down list from remembering the item I had selected it seems. Thanks all for your comments and suggestions.
It seems like you are binding your DDL on postbacks as well. If the DDL data isn't hardcoded and you have the call to your DDL databind function in Page\_Load, you need to call the function like this to ensure it is not bound on postback: ``` if(!IsPostBack) { BindDDL(); } ``` Otherwise, we need more information to help you; please post your code.
Obtaining selected item, value or index from drop down list in formview after button press
[ "", "c#", "asp.net", "" ]
I have a project where I'm trying to use weakrefs with callbacks, and I don't understand what I'm doing wrong. I have created a simplified test that shows the exact behavior I'm confused by. Why is it that in this test test\_a works as expected, but the weakref for self.MyCallbackB disappears between the class initialization and calling test\_b? I thought that as long as the instance (a) exists, the reference to self.MyCallbackB should exist, but it doesn't. ``` import weakref class A(object): def __init__(self): def MyCallbackA(): print 'MyCallbackA' self.MyCallbackA = MyCallbackA self._testA = weakref.proxy(self.MyCallbackA) self._testB = weakref.proxy(self.MyCallbackB) def MyCallbackB(self): print 'MyCallbackB' def test_a(self): self._testA() def test_b(self): self._testB() if __name__ == '__main__': a = A() a.test_a() a.test_b() ```
You want a [WeakMethod](http://code.activestate.com/recipes/81253/). An explanation why your solution doesn't work can be found in the discussion of the recipe: > Normal weakref.refs to bound methods don't quite work the way one expects, because bound methods are first-class objects; **weakrefs to bound methods are dead-on-arrival** unless some other strong reference to the same bound method exists.
According to the documentation for the weakref module: > In the following, the term referent means the object which is referred to > by a weak reference. > > A weak reference to an object is not > enough to keep the object alive: when > the only remaining references to a > referent are weak references, garbage > collection is free to destroy the > referent and reuse its memory for > something else. What's happening with MyCallbackA is that you are holding a reference to it in the instances of A, thanks to: ``` self.MyCallbackA = MyCallbackA ``` Now, there is no reference to the bound method MyCallbackB in your code. It is held only in a.\_\_class\_\_.\_\_dict\_\_ as an unbound method. Basically, a bound method is created (and returned to you) when you do self.methodName. (AFAIK, a bound method works like a property, using a (read-only) descriptor, at least for new-style classes. I am sure something similar, i.e. without descriptors, happens for old-style classes. I'll leave it to someone more experienced to verify the claim about old-style classes.) So, self.MyCallbackB dies as soon as the weakref is created, because there is no strong reference to it! My conclusions are based on: ``` import weakref #Trace is called when the object is deleted! - see weakref docs. def trace(x): print "Del MycallbackB" class A(object): def __init__(self): def MyCallbackA(): print 'MyCallbackA' self.MyCallbackA = MyCallbackA self._testA = weakref.proxy(self.MyCallbackA) print "Create MyCallbackB" # To fix it, do - # self.MyCallbackB = self.MyCallBackB # The name on the LHS could be anything, even foo!
self._testB = weakref.proxy(self.MyCallbackB, trace) print "Done playing with MyCallbackB" def MyCallbackB(self): print 'MyCallbackB' def test_a(self): self._testA() def test_b(self): self._testB() if __name__ == '__main__': a = A() #print a.__class__.__dict__["MyCallbackB"] a.test_a() ``` **Output** > Create MyCallbackB > Del MycallbackB > Done playing with MyCallbackB > MyCallbackA **Note:** I tried verifying this for old-style classes. It turned out that "print a.test\_a.\_\_get\_\_" outputs `<method-wrapper '__get__' of instancemethod object at 0xb7d7ffcc>` for both new- and old-style classes. So it may not really be a descriptor, just something descriptor-like. In any case, the point is that a bound-method object is created when you access an instance method through self, and unless you maintain a strong reference to it, it will be deleted.
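The bound-method behavior described above can be shown directly. A sketch in modern Python 3 (where the stdlib ships `weakref.WeakMethod` for exactly this use case; immediate collection here relies on CPython's reference counting):

```python
import weakref

class A:
    def cb(self):
        return "called"

a = A()

# Each attribute access builds a *new* bound-method object; a plain
# weakref to it is dead on arrival once that temporary is collected.
dead = weakref.ref(a.cb)
assert dead() is None

# Keeping a strong reference to one particular bound method works...
strong = a.cb
alive = weakref.ref(strong)
assert alive() is not None and alive()() == "called"

# ...and weakref.WeakMethod handles the case without the extra reference:
# it stays alive as long as the instance does, and no longer.
wm = weakref.WeakMethod(a.cb)
assert wm()() == "called"
del a, strong
assert wm() is None
```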
Why doesn't the weakref work on this bound method?
[ "", "python", "weak-references", "" ]
In all the examples I've seen of the #if compiler directive, they use "DEBUG". Can I use "RELEASE" in the same way to exclude code that I don't want to run when compiled in debug mode? The code I want to surround with this block sends out a bunch of emails, and I don't want to accidentally send those out when testing.
No, it won't, unless you do some work. The important part here is what DEBUG really is, and it's a kind of constant defined that the compiler can check against. If you check the project properties, under the Build tab, you'll find three things: * A text box labelled "Conditional compilation symbols" * A check box labelled "Define DEBUG constant" * A check box labelled "Define TRACE constant" There is no such checkbox, nor constant/symbol pre-defined that has the name RELEASE. However, you can easily add that name to the text box labelled Conditional compilation symbols, but make sure you set the project configuration to Release-mode before doing so, as these settings are per configuration. So basically, unless you add that to the text box, `#if RELEASE` won't produce any code under any configuration.
`RELEASE` is not defined, but you can use ``` #if (!DEBUG) ... #endif ```
Will #if RELEASE work like #if DEBUG does in C#?
[ "", "c#", ".net", "debugging", "" ]
I am building a simple client/server application using Java sockets and experimenting with ObjectOutputStream etc. I have been following the tutorial at this URL [http://java.sun.com/developer/technicalArticles/ALT/sockets](http://java.sun.com/developer/technicalArticles/ALT/sockets/), starting halfway down where it talks about transporting objects over sockets. See my code for the client: [http://pastebin.com/m37e4c577](http://pastebin.com/m37e4c577 "http://pastebin.com/m37e4c577") However, this doesn't seem to work and I cannot figure out what's not working. The code commented out at the bottom is directly copied out of the tutorial - and this works when I just use that instead of creating the client object. Can anyone see anything I am doing wrong?
The problem is the order you are creating the streams: In the server from the article (which I assume is what you are using), when a new connection is opened, the server opens first an input stream, and then an output stream: ``` public Connect(Socket clientSocket) { client = clientSocket; try { ois = new ObjectInputStream(client.getInputStream()); oos = new ObjectOutputStream(client.getOutputStream()); } catch(Exception e1) { // ... } this.start(); } ``` The commented example code uses the reverse order, first establishing the output stream, then the input stream: ``` // open a socket connection socket = new Socket("localhost", 2000); // open I/O streams for objects oos = new ObjectOutputStream(socket.getOutputStream()); ois = new ObjectInputStream(socket.getInputStream()); ``` But your code does it the other way around: ``` server = new Socket(host, port); in = new ObjectInputStream(server.getInputStream()); out = new ObjectOutputStream(server.getOutputStream()); ``` Establishing an output stream/input stream pair will stall until they have exchanged their handshaking information, so you must match the order of creation. You can do this just by swapping lines 34 and 35 in your example code.
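The stall comes from the serialization header: an `ObjectOutputStream` writes it in its constructor, and an `ObjectInputStream`'s constructor blocks until it has read one from the peer. A small sketch using in-memory streams (no sockets, names are illustrative) that makes the header visible:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class HeaderDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();

        // Constructing the ObjectOutputStream already emits the stream header,
        // before any object is written.
        ObjectOutputStream oos = new ObjectOutputStream(buf);
        assert buf.size() > 0 : "header written on construction";

        oos.writeObject("hello");
        oos.flush();

        // The ObjectInputStream constructor consumes that header; over a
        // socket it would block right here until the peer's header arrives,
        // which is why mismatched creation order deadlocks both sides.
        ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
        assert "hello".equals(ois.readObject());
    }
}
```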
You are not writing the object anywhere. See that link again; somewhere you have to write: ``` oos.writeObject( new Date() ); ``` In your code you only have ``` ois.readObject(); ``` That's why it doesn't work.
Java Socket Programming
[ "", "java", "sockets", "network-programming", "objectinputstream", "" ]
Win32's [`CreateFile`](http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx) has `FILE_FLAG_DELETE_ON_CLOSE`, but I'm on Linux. I want to open a temporary file which will always be deleted upon program termination. I could understand that in the case of a program crash it may not be practical to guarantee this, but in any other case I'd like it to work. I know about RAII. I know about signals. I know about `atexit(3)`. I know I can open the file and delete it immediately and the file will remain accessible until the file descriptor is closed (which even handles a crash). None of these seem like a complete and straightforward solution: 1. RAII: been there, done that: I have an object whose destructor deletes the file, but the destructor is not called if the program is terminated by a signal. 2. signals: I'm writing a low-level library which makes registering a signal handler a tricky proposition. For example, what if the application uses signals itself? I don't want to step on any toes. I might consider some clever use of `sigaction(2)` to cope...but haven't put enough thought into this possibility yet. 3. `atexit(3)`: apparently useless, since it isn't called during abnormal termination (e.g. via a signal). 4. preemptive `unlink(2)`: this is pretty good except that I need the file to remain visible in the filesystem (otherwise the system is harder to monitor/troubleshoot). What would you do here? **Further Explanation** I elided one detail in my original post which I now realize I should have included. The "file" in this case is not strictly a normal file, but rather is a POSIX Message Queue. I create it via `mq_open()`. It can be closed via `mq_close()` or `close()` (the former is an alias for the latter on my system). It can be removed from the system via `mq_unlink()`. All of this makes it analogous to a regular file, *except* that I cannot choose the directory in which the file resides. 
This makes the current most popular answer (placing the file in `/tmp`) unworkable, because the "file" is created by the system in a virtual filesystem with very limited capacity. (I've mounted the virtual filesystem in `/dev/mqueue`, following the example in `man mq_overview`) . This also explains why I need the name to remain visible (making the immediate-unlink approach unworkable): the "file" must be shared between two or more processes.
The requirement that the name remains visible while the process is running makes this hard to achieve. Can you revisit that requirement? If not, then there probably isn't a perfect solution. I would consider combining a signal handling strategy with what Kamil Kisiel suggests. You could keep track of the signal handlers installed before you install your signal handlers. If the default handler is SIG\_IGN, you wouldn't normally install your own handler; if it is SIG\_DFL, you would remember that; if it is something else - a user-defined signal handler - you would remember that pointer, and install your own. When your handler was called, you'd do whatever you need to do, and then call the remembered handler, thus chaining the handlers. You would also install an atexit() handler. You would also document that you do this, and the signals for which you do it. Note that signal handling is an imperfect strategy; SIGKILL cannot be caught, and the atexit() handler won't be called, and the file will be left around. David Segond's suggestion - a temporary file name daemon - is interesting. For simple processes, it is sufficient; if the process requesting the temporary file forks and expects the child to own the file thereafter (and exits) then the daemon has a problem detecting when the last process using it dies - because it doesn't automatically know the processes that have it open.
If you're just making a temporary file, just create it in `/tmp` or a subdirectory thereof. Then make a best effort to remove it when done through `atexit(3)` or similar. As long as you use unique names picked through `mkstemp(3)` or similar even if it fails to be deleted because of a program crash, you don't risk reading it again on subsequent runs or other such conditions. At that point it's just a system-level problem of keeping `/tmp` clean. Most distros wipe it on boot or shutdown, or run a regular cronjob to delete old files.
Guaranteed file deletion upon program termination (C/C++)
[ "", "c++", "c", "file", "termination", "unlink", "" ]
Well, as the title says, how can I set an image or anything else as the mouse cursor? I know you can use the built-in ones that are part of the Cursors class, but I was wondering if there's a way to use your own image (preferably without P/Invoke, but either way is fine.) EDIT: I guess I didn't explain what I wanted clearly enough. I don't have any .cur files (are they easy to create?), I want to know if there's a way to take an image from disk (or wherever) and use that as the cursor. Ultimately what I'd like would be to do something like this: ``` myForm.Cursor = new Cursor(Image.FromFile("foo.jpg")); ``` Possible?
At the simplest, you just use: ``` form.Cursor = new Cursor(path); ``` But there are overloads to load from other sources (an unmanaged pointer, a raw stream, or a resx).
If you want some more information on how to create your own cursor resources then there is a good tutorial [here](http://www.functionx.com/vcnet/resources/cursors.htm). You should create your cursor file and embed it as a resource in your executable - easy in Visual Studio. This is tidier and more efficient than loading it from a separate file. You can then load it directly using the Cursor constructor that takes a resouce name.
How do you use a custom mouse Cursor in a .NET application?
[ "", "c#", ".net", "mouse-cursor", "" ]
We have the following example table (actually taken from another example here on Stack Overflow...) ``` CREATE TABLE example ( id integer primary key, name char(200), parentid integer, value integer); ``` And given a specific child we want to get the top parent. I know of the tablefunc connectby function, but that is for getting a parent's children. I'm interested in the other direction: given a child, what is its top parent? What type of query would I try and use? Any friendly advice is appreciated.
You could write a PL/PgSQL function to perform the recursion: ``` CREATE LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION get_top_parent( child integer ) RETURNS integer as $$ DECLARE parent integer; last_parent integer; BEGIN last_parent := child; SELECT INTO parent parentid FROM example WHERE id = child; IF parent is NOT NULL THEN parent := get_top_parent(parent); ELSE parent := last_parent; END IF; RETURN parent; END $$ LANGUAGE plpgsql; ``` This function can definitely be optimized. It will likely be slow if depth is very high and the tables are large, so like Jegern mentioned it might be worth caching the hierarchy, possibly using triggers and such.
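On newer PostgreSQL versions (8.4 and later), the same walk up the tree can be written declaratively with a recursive CTE, with no PL/pgSQL at all. A sketch against the `example` table above (the starting id `42` is illustrative):

```sql
WITH RECURSIVE ancestors AS (
    SELECT id, parentid FROM example WHERE id = 42   -- the starting child
  UNION ALL
    SELECT e.id, e.parentid
    FROM example e
    JOIN ancestors a ON e.id = a.parentid            -- step to each parent
)
SELECT id FROM ancestors WHERE parentid IS NULL;     -- the top parent
```

This assumes roots are marked by a NULL `parentid`; if roots instead point to themselves, the final filter would need adjusting.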
Look into Joe Celko's books, [SQL for Smarties](https://rads.stackoverflow.com/amzn/click/com/0123693799) and his book on [Trees and Hierarchies](https://rads.stackoverflow.com/amzn/click/com/1558609202). He has a section or two in SQL for Smarties on trees and hierarchies, or if you want to really get into it then you can get the other book. SQL for Smarties will also touch on a lot of other database design and querying info. Some really good stuff in there. He presents alternative ways of modeling trees which can work much better than the adjacency list model that you're using. In one of his models the question of "who is the top most parent" becomes very trivial.
Climbing a Parent/Child Database Relationship in Postgres
[ "", "sql", "postgresql", "recursion", "" ]
I've a working logger class, which outputs some text into a richtextbox (Win32, C++). Problem is, I always end up using it like this: ``` stringstream ss; ss << someInt << someString; debugLogger.log(ss.str()); ``` Instead, it would be much more convenient to use it like a stream, as in: ``` debugLogger << someInt << someString; ``` Is there a better way than forwarding everything to an internal stringstream instance? If I'd do this, when would I need to flush?
You need to implement `operator <<` appropriately for your class. The general pattern looks like this: ``` template <typename T> logger& operator <<(logger& log, T const& value) { log.your_stringstream << value; return log; } ``` Notice that this deals with (non-`const`) references since the operation modifies your logger. Also notice that you need to return the `log` parameter in order for chaining to work: ``` log << 1 << 2 << endl; // is the same as: ((log << 1) << 2) << endl; ``` If the innermost operation didn't return the current `log` instance, all other operations would either fail at compile-time (wrong method signature) or would be swallowed at run-time.
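To make the pattern concrete, here is a minimal, self-contained sketch of such a logger (the names `logger` and `sink` are illustrative, not from the question's code). It buffers into an internal `ostringstream` and only pushes text onward on an explicit `flush()` - which also answers the flushing question: flush whenever buffered text should actually reach the control.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative logger: buffers into an internal ostringstream and hands
// the accumulated text to a "sink" (standing in for the richtextbox,
// modelled here as a plain string) when flush() is called.
class logger {
public:
    std::string sink;                  // pretend output target

    template <typename T>
    logger& operator<<(T const& value) {
        buffer << value;               // forward to the internal stream
        return *this;                  // returning *this enables chaining
    }

    void flush() {                     // push buffered text to the sink
        sink += buffer.str();
        buffer.str("");                // reset the buffer
    }

private:
    std::ostringstream buffer;
};
```

A member `operator<<` is used here for brevity; the free-function form in the answer above works the same way and is what you need if the logger class cannot be modified.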
Overloading the insertion operator<< is not the way to go. You will have to add overloads for all the endl or any other user defined functions. The way to go is to define your own streambuf, and to bind it into a stream. Then, you just have to use the stream. Here are a few simple examples: * [Logging In C++](http://www.ddj.com/cpp/201804215) by Petru Marginean, DDJ Sept 05th 2007 * [Rutger E.W. van Beusekom's logstream class](http://lilypond.org/~janneke/vc/stream.cc/), check also the .hpp alongside with this file
How to use my logging class like a std C++ stream?
[ "", "c++", "logging", "stream", "" ]
What is the best way to localize enumeration descriptions in .net? (See [Adding descriptions to enumeration constants](http://dotnet.mvps.org/dotnet/faqs/?id=enumdescription&lang=en) for enum description example) Ideally I would like something that uses ResourceManager and resource files so it fits in with how other areas of the app are localized.
This is what I ended up going with. I didn't see the value in adding a custom attribute class to hold a resource key and then looking it up in the resource files - why not just use the enum's type name + value as the resource key? ``` using System; using System.Resources; using System.Reflection; public class MyClass { enum SomeEnum {Small,Large}; private ResourceManager _resources = new ResourceManager("MyClass.myResources", System.Reflection.Assembly.GetExecutingAssembly()); public string EnumDescription(Enum enumerator) { string rk = String.Format("{0}.{1}",enumerator.GetType(),enumerator); string localizedDescription = _resources.GetString(rk); if (localizedDescription == null) { // A localized string was not found, so you can either just return // the enum's value - most likely readable and a good fallback. return enumerator.ToString(); // Or you can return the full resource key, which will be helpful // when editing the resource files (e.g. MyClass+SomeEnum.Small) // return rk; } else return localizedDescription; } void SomeRoutine() { // Looks in the resource file for a string matching the key // "MyClass+SomeEnum.Large" string s1 = EnumDescription(SomeEnum.Large); } } ```
My solution, using the native description attribute: ``` public class LocalizedEnumAttribute : DescriptionAttribute { private PropertyInfo _nameProperty; private Type _resourceType; public LocalizedEnumAttribute(string displayNameKey) : base(displayNameKey) { } public Type NameResourceType { get { return _resourceType; } set { _resourceType = value; _nameProperty = _resourceType.GetProperty(this.Description, BindingFlags.Static | BindingFlags.Public); } } public override string Description { get { // Check if nameProperty is null and return the original display name value if (_nameProperty == null) { return base.Description; } return (string)_nameProperty.GetValue(_nameProperty.DeclaringType, null); } } } public static class EnumExtender { public static string GetLocalizedDescription(this Enum @enum) { if (@enum == null) return null; string description = @enum.ToString(); FieldInfo fieldInfo = @enum.GetType().GetField(description); DescriptionAttribute[] attributes = (DescriptionAttribute[])fieldInfo.GetCustomAttributes(typeof(DescriptionAttribute), false); if (attributes.Any()) return attributes[0].Description; return description; } } ``` The Enum declaration ``` public enum MyEnum { [LocalizedEnum("ResourceName", NameResourceType = typeof(ResourceType))] Test = 0 } ``` Then call `MyEnumInstance.GetLocalizedDescription()`
Localizing enum descriptions attributes
[ "", "c#", ".net", "localization", "enums", "" ]
Time and time again I find myself having to write thread-safe versions of BindingList and ObservableCollection because, when bound to UI, these controls cannot be changed from multiple threads. What I'm trying to understand is **why** this is the case - is it a design fault or is this behavior intentional?
The problem is that designing a thread-safe collection is not simple. Sure, it's simple enough to design a collection which can be modified/read from multiple threads without corrupting state. But it's much more difficult to design a collection that is usable given that it's updated from multiple threads. Take the following code as an example. ``` if ( myCollection.Count > 0 ) { var x = myCollection[0]; } ``` Assume that myCollection is a thread safe collection where adds and updates are guaranteed not to corrupt state. This code is not thread safe and is a race condition. Why? Even though myCollection is safe, there is no guarantee that a change does not occur between the two method calls to myCollection: namely Count and the indexer. Another thread can come in and remove all elements between these calls. This type of problem makes using a collection of this type quite frankly a nightmare. You can't ever let the return value of one call influence a subsequent call on the collection. **EDIT** I expanded this discussion on a recent blog post: <http://blogs.msdn.com/jaredpar/archive/2009/02/11/why-are-thread-safe-collections-so-hard.aspx>
To add a little to Jared's excellent answer: thread safety does not come for free. Many (most?) collections are only used within a single thread. Why should those collections have performance or functionality penalties to cope with the multi-threaded case?
Why aren't classes like BindingList or ObservableCollection thread-safe?
[ "", "c#", ".net", "thread-safety", "observablecollection", "bindinglist", "" ]
Does anyone know how much memory is taken up when you create a reference type variable? `String s = "123";` How much memory would 's' take up as a reference, not the data it points to?
This is broken down in the following fashion: ``` String s = "123"; ``` The variable s: this will consume the native pointer size on the current architecture (which is considered 32-bit if the OS is 32-bit or the process is executing under WoW64), so 32 bits or 64 bits accordingly. In this case s is either on the stack, or en-registered. Were you to place the string reference into an array then that space would be consumed on the heap. The fact that string is an object: 8 bytes of overhead, split as 4 bytes for the method table (which doubles as the indication of what actual type an object is) plus 4 bytes for some housekeeping bits and the syncblock that allows it to be used as the target of a lock statement. The string is always terminated by the null character (though this is an implementation detail, not part of the contract of the runtime) so that it can be used directly with C-style string APIs. Characters are UTF-16, so two bytes per character in the sense .Net uses "character" (the details of why are complicated and require a segue into Unicode that I shall omit). Strings further contain the following: ### Versions of .Net prior to 4.0 * an int for the length of the string in characters * an int for the length of the underlying array holding the characters * a character which is the first character in the string (subsequent characters are directly after it) or the null character for an empty string The string **may** consume up to twice the amount of memory required to actually hold the character array, owing to the way StringBuilders work. Thus the string itself will consume between 16 + (2\*n) + 2 and 16 + (4\*n) + 2 bytes on the heap depending on how it was created.
### Versions of .Net from 4.0 onwards * an int for the length of the string in characters * a character which is the first character in the string (subsequent characters are directly after it) or the null character for an empty string The string itself will consume at least 12 + (2\*n) + 2 bytes on the heap. --- Note that in both cases the string may take up slightly more actual space than it uses depending on what alignment the runtime enforces, this is likely to be no more than the IntPtr.Size. This may be further complicated by string interning (where two separate instances end up pointing to the same string since it is immutable) since you should in theory divide the heap overhead (plus the intern overhead) by the number of 'independent' references to the string. for more discussion of this take a look at [this article](http://www.codeproject.com/KB/dotnet/strings.aspx). Note however that this article is out of date for the changes in 4.0.
The size of the reference itself will depend on your processor architecture - 4 bytes on 32-bit, 8 bytes on 64-bit.
c# reference variable mem allocation
[ "", "c#", "memory", "variables", "reference", "" ]
I have the following data structure and data: ``` CREATE TABLE `parent` ( `id` int(11) NOT NULL auto_increment, `name` varchar(10) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; INSERT INTO `parent` VALUES(1, 'parent 1'); INSERT INTO `parent` VALUES(2, 'parent 2'); CREATE TABLE `other` ( `id` int(11) NOT NULL auto_increment, `name` varchar(10) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; INSERT INTO `other` VALUES(1, 'other 1'); INSERT INTO `other` VALUES(2, 'other 2'); CREATE TABLE `relationship` ( `id` int(11) NOT NULL auto_increment, `parent_id` int(11) NOT NULL, `other_id` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; INSERT INTO `relationship` VALUES(1, 1, 1); INSERT INTO `relationship` VALUES(2, 1, 2); INSERT INTO `relationship` VALUES(3, 2, 1); ``` I want to find the parent records related to both others 1 & 2. This is what I've figured out, but I'm wondering if there is a better way: ``` SELECT p.id, p.name FROM parent AS p LEFT JOIN relationship AS r1 ON (r1.parent_id = p.id) LEFT JOIN relationship AS r2 ON (r2.parent_id = p.id) WHERE r1.other_id = 1 AND r2.other_id = 2; ``` The result is 1, "parent 1" which is correct. The problem is that once you get a list of 5+ joins, it gets messy and as the relationship table grows, it gets slow. Is there a better way? I'm using MySQL and PHP, but this is probably pretty generic.
Ok, I tested this. The queries from best to worst were: **Query 1: Joins (0.016s; basically *instant*)** ``` SELECT p.id, name FROM parent p JOIN relationship r1 ON p.id = r1.parent_id AND r1.other_id = 100 JOIN relationship r2 ON p.id = r2.parent_id AND r2.other_id = 101 JOIN relationship r3 ON p.id = r3.parent_id AND r3.other_id = 102 JOIN relationship r4 ON p.id = r4.parent_id AND r4.other_id = 103 ``` **Query 2: EXISTS (0.625s)** ``` SELECT id, name FROM parent p WHERE EXISTS (SELECT 1 FROM relationship WHERE parent_id = p.id AND other_id = 100) AND EXISTS (SELECT 1 FROM relationship WHERE parent_id = p.id AND other_id = 101) AND EXISTS (SELECT 1 FROM relationship WHERE parent_id = p.id AND other_id = 102) AND EXISTS (SELECT 1 FROM relationship WHERE parent_id = p.id AND other_id = 103) ``` **Query 3: Aggregate (1.016s)** ``` SELECT p.id, p.name FROM parent p WHERE (SELECT COUNT(*) FROM relationship WHERE parent_id = p.id AND other_id IN (100,101,102,103)) = 4 ``` **Query 4: UNION Aggregate (2.39s)** ``` SELECT id, name FROM ( SELECT p1.id, p1.name FROM parent AS p1 LEFT JOIN relationship as r1 ON(r1.parent_id=p1.id) WHERE r1.other_id = 100 UNION ALL SELECT p2.id, p2.name FROM parent AS p2 LEFT JOIN relationship as r2 ON(r2.parent_id=p2.id) WHERE r2.other_id = 101 UNION ALL SELECT p3.id, p3.name FROM parent AS p3 LEFT JOIN relationship as r3 ON(r3.parent_id=p3.id) WHERE r3.other_id = 102 UNION ALL SELECT p4.id, p4.name FROM parent AS p4 LEFT JOIN relationship as r4 ON(r4.parent_id=p4.id) WHERE r4.other_id = 103 ) a GROUP BY id, name HAVING count(*) = 4 ``` Actually the above was producing the wrong data so it's either wrong or I did something wrong with it. Whatever the case, the above is just a bad idea. If that's not fast then you need to look at the explain plan for the query. You're probably just lacking appropriate indices. Try it with: ``` CREATE INDEX idx1 ON relationship (parent_id, other_id) ``` Before you go down the route of aggregation (SELECT COUNT(*) FROM ...) 
you should read [SQL Statement - “Join” Vs “Group By and Having”](https://stackoverflow.com/questions/477006/sql-statement-join-vs-group-by-and-having/477013#477013). **Note:** The above timings are based on: ``` CREATE TABLE parent ( id INT PRIMARY KEY, name VARCHAR(50) ); CREATE TABLE other ( id INT PRIMARY KEY, name VARCHAR(50) ); CREATE TABLE relationship ( id INT PRIMARY KEY, parent_id INT, other_id INT ); CREATE INDEX idx1 ON relationship (parent_id, other_id); CREATE INDEX idx2 ON relationship (other_id, parent_id); ``` and nearly 800,000 records created with: ``` <?php ini_set('max_execution_time', 600); $start = microtime(true); echo "<pre>\n"; mysql_connect('localhost', 'scratch', 'scratch'); if (mysql_error()) { echo "Connect error: " . mysql_error() . "\n"; } mysql_select_db('scratch'); if (mysql_error()) { echo "Selct DB error: " . mysql_error() . "\n"; } define('PARENTS', 100000); define('CHILDREN', 100000); define('MAX_CHILDREN', 10); define('SCATTER', 10); $rel = 0; for ($i=1; $i<=PARENTS; $i++) { query("INSERT INTO parent VALUES ($i, 'Parent $i')"); $potential = range(max(1, $i - SCATTER), min(CHILDREN, $i + SCATTER)); $elements = sizeof($potential); $other = rand(1, min(MAX_CHILDREN, $elements - 4)); $j = 0; while ($j < $other) { $index = rand(0, $elements - 1); if (isset($potential[$index])) { $c = $potential[$index]; $rel++; query("INSERT INTO relationship VALUES ($rel, $i, $c)"); unset($potential[$index]); $j++; } } } for ($i=1; $i<=CHILDREN; $i++) { query("INSERT INTO other VALUES ($i, 'Other $i')"); } $count = PARENTS + CHILDREN + $rel; $stop = microtime(true); $duration = $stop - $start; $insert = $duration / $count; echo "$count records added.\n"; echo "Program ran for $duration seconds.\n"; echo "Insert time $insert seconds.\n"; echo "</pre>\n"; function query($str) { mysql_query($str); if (mysql_error()) { echo "$str: " . mysql_error() . "\n"; } } ?> ``` So once again joins carry the day.
Given that parent table contains unique key on (parent\_id, other\_id) you can do this: ``` select p.id, p.name from parent as p where (select count(*) from relationship as r where r.parent_id = p.id and r.other_id in (1,2) ) >= 2 ```
How do you perform an AND with a join?
[ "", "sql", "mysql", "join", "left-join", "" ]
And thanks for any assistance. How would I return an unknown generic List type from a method? ``` public void Main() { List<A> a= GetData("A"); } public List<T> GetData(string listType) { if(listType == "A") { List<A> a= new List<A>() ... return a; } else { List<B> b = new List<B>() return b; } } ``` In the example above I receive an error similar to: Can't Convert `List<A>` to `List<T>`. Is this possible? The error occurs on the 'return a;' line of code. Also, what will I need to do to make sure an error does not occur on the line: ``` List<A> a= GetData("A"); ``` Thanks, Steven
Use `IList` instead of `List<T>`.
An alternative to being limited to returning a list of objects would be to either ensure that A and B derive from a common base type or implement a common interface, then return a list of that base type or interface. Include a constraint on the generic method to that effect: ``` List<ICommon> GetData<T>() where T: ICommon { } ```
return unknown Generic List<T>
[ "", "c#", "generics", "types", "" ]
I need a solution for the page refresh and the back button when using AJAX. I'm using simple javascript for the AJAX implementation. If anybody could send a code snippet I would be very grateful.
Essentially, you need to use & monitor the hash portion of the URL... ``` http://.../path?parms#hashpart ``` When you change the hash (IIRC via `window.location.hash`), it won't reload the page, but your AJAX code can monitor it and respond to it.
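As a sketch of that idea (the helper names below are made up for illustration, not from any library): keep the page state in the hash, decode it with a pure helper, and poll for changes - scripts of this era could not rely on a `hashchange` event. The polling part is guarded so it only runs inside a browser.

```javascript
// Decode a hash like "#page=2&tab=info" into { page: "2", tab: "info" }.
function parseHash(hash) {
  var state = {};
  hash.replace(/^#/, "").split("&").forEach(function (pair) {
    if (!pair) return;
    var kv = pair.split("=");
    state[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
  });
  return state;
}

// Browser-only: watch the hash and react when it changes, e.g. when the
// user presses the back button after an AJAX navigation step.
if (typeof window !== "undefined") {
  var lastHash = window.location.hash;
  setInterval(function () {
    if (window.location.hash !== lastHash) {
      lastHash = window.location.hash;
      // re-issue the AJAX request for the state in parseHash(lastHash)
    }
  }, 100);
}
```

Each AJAX navigation then writes its state back into `window.location.hash`, which creates the history entries the back button needs.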
If you're using jQuery, there's [the history plugin](http://www.mikage.to/jquery/jquery_history.html).
Back Button and Refresh with AJAX
[ "", "javascript", "ajax", "history", "" ]
When using os.listdir method I need to tell which item in the resulting list is a directory or just a file. I've faced a problem when I had to go through all the directories in this list, and then add a file in every single directory. Is there a way to go through this list and remove all files from it? If it isn't possible to do with os.listdir, what method should I use instead? Thanks.
Use `os.path.isdir` to filter out the directories. Note that `os.listdir` returns bare names, so join them with the parent path before testing. Possibly something like ``` path = '/path' dirs = [d for d in os.listdir(path) if os.path.isdir(os.path.join(path, d))] for dir in dirs: # add your file ```
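Extending that filtering to the asker's actual goal - dropping a file into every subdirectory - might look like this sketch (the `placeholder.txt` name is just an assumption for illustration). `os.path.join` keeps the `isdir` check correct regardless of the current working directory.

```python
import os

def add_file_to_dirs(path, filename="placeholder.txt"):
    """Create an empty `filename` in every immediate subdirectory of `path`."""
    dirs = [d for d in os.listdir(path)
            if os.path.isdir(os.path.join(path, d))]
    for d in dirs:
        # touch an empty file inside each directory
        open(os.path.join(path, d, filename), "w").close()
    return sorted(dirs)
```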
This might be faster: ``` current, dirs, files = os.walk('/path').next() ``` The list of directories will be in the `dirs` variable.
How do I tell a file from directory in Python?
[ "", "python", "file", "directory", "" ]
In C# what is the advantage of ``` public class blah { public event EventHandler Blahevent; } ``` versus ``` public class blah { private event EventHandler blahevent; public event EventHandler Blahevent { add { blahevent+=value; } remove { blahevent-=value; } } } ``` or vice versa? Does the first one open you up to `blahinstance.Blahevent = null`, or `blahinstance.Blahevent(obj, event)`?
There is no advantage to explicit implementation of the add/remove methods unless you want to do something different. Possible reasons: * Perhaps take control of the event backing code yourself (to directly link to some other event rather than going through a pointless cascade, for example) * Do something else in addition on add or remove * Change security demands on the add or remove * Expose the underlying delegate What the default implementation does is maintain a private hidden delegate field which is replaced each time a delegate is added or removed. For most cases there is no need to do any of the above, but the flexibility is there.
The second one has the option of controlling exactly what happens when the specified event is subscribed to or unsubscribed from if there is specific logic that needs to run in addition to adding or removing the pointer.
C# Event Subscription
[ "", "c#", "events", "language-features", "" ]
In a KeyDown event, I have the KeyEventArgs to work with. It has (among other things) these three properties: * `e.KeyCode` * `e.KeyData` * `e.KeyValue` Which one should I use for what?
**Edit:** Somehow I misread your question to include checking a valid character. Did you modify it? I've added a description of each. * **KeyCode** is the [Keys](http://msdn.microsoft.com/en-us/library/system.windows.forms.keys.aspx) enumeration value for the key that is down * **KeyData** is the same as KeyCode, but combined with any SHIFT/CTRL/ALT keys * **KeyValue** is simply an integer representation of KeyCode If you *just* need the character, I'd probably recommend using the KeyPress event and using the [KeyPressEventArgs](http://msdn.microsoft.com/en-us/library/system.windows.forms.keypresseventargs.aspx).[KeyChar](http://msdn.microsoft.com/en-us/library/system.windows.forms.keypresseventargs.keychar.aspx) property. You can then use [Char.IsLetterOrDigit()](http://msdn.microsoft.com/en-us/library/system.char.aspx) to find out if it's a valid character. Alternatively, you might be able to cast [KeyEventArgs.KeyCode](http://msdn.microsoft.com/en-us/library/system.windows.forms.keyeventargs.keycode.aspx) to a char and then use Char.IsLetterOrDigit on that.
I would suggest using the `KeyCode` property to check against the `Keys` enumeration for most operations. However some of the basic differences below might help you to better decide which one you need for your situation. Differences: * `KeyCode` - Represents the `Keys` enumeration value that represents the key that is currently in Down state. * `KeyData` - Same as `KeyCode`, except that it has additional information in the form of modifiers - Shift/Ctrl/Alt etc. * `KeyValue` - The numeric value of the `KeyCode`.
C#: In a KeyDown event, what should I use to check what key is down?
[ "", "c#", "keyboard", "" ]
I'm getting this awkward error any time I try and create a dialog from Greasemonkey... I believe it has to do with the limitations of XPCNativeWrapper <https://developer.mozilla.org/en/XPCNativeWrapper#Limitations_of_XPCNativeWrapper> , though I am not 100% sure. None of the core jQuery methods that I've used have caused errors (append, css, submit, keydown, each, ...). It is possible that this could be an error in Greasemonkey or due to the interaction between Greasemonkey and jquery ui, but I am really interested in figuring out how to get them to work together. ``` // ==UserScript== // @name Dialog Test // @namespace http://strd6.com // @description jquery-ui-1.6rc6 Dialog Test // @include * // // @require http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js // @require http://strd6.com/stuff/jqui/jquery-ui-personalized-1.6rc6.min.js // ==/UserScript== $(document).ready(function() { $('<div title="Test">SomeText</div>').dialog(); }); ``` Error: [Exception... "Component is not available" nsresult: "0x80040111 (NS\_ERROR\_NOT\_AVAILABLE)" location: "JS frame :: file:///home/daniel/.mozilla/firefox/.../components/greasemonkey.js :: anonymous :: line 347" data: no] [Break on this error] if (line) { Firefox version: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) Gecko/2009020911 Ubuntu/8.04 (hardy) Firefox/3.0.6 Update: The focus() method from the standard jQuery library also throws the same error: ``` $('body').focus(); ``` Maybe the UI is calling the focus method at some point? Any help will be greatly appreciated!
This thread is pretty old, but the way to use Greasemonkey with jQuery to focus() is to add a [0] to the jQuery object to turn it back into a DOM element. ``` //Example: $('#obj').focus(); //Does not work document.getElementById('obj').focus(); //Works //Hybrid: $('#obj')[0].focus(); //Workaround ```
Here is one workaround, but there are still other less dramatic problems involved. ``` // ==UserScript== // @name Dialog Test // @namespace http://strd6.com // @description jquery-ui-1.6rc6 Dialog Test // @include * // // @resource jQuery http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js // @resource jQueryUI http://strd6.com/stuff/jqui/jquery-ui-personalized-1.6rc6.min.js // ==/UserScript== // Inject jQuery into page... gross hack... for now... (function() { var head = document.getElementsByTagName('head')[0]; var script = document.createElement('script'); script.type = 'text/javascript'; var jQuery = GM_getResourceText('jQuery'); var jQueryUI = GM_getResourceText('jQueryUI'); script.innerHTML = jQuery + jQueryUI; head.appendChild(script); $ = unsafeWindow.$; })(); $(document).ready(function() { $('<div title="Test">SomeText</div>').dialog(); }); ``` The problems having now stem from $ being in the unsafeWindow context, so certain GM methods cannot be called from the unsafe context (like GM\_getValue when inside $.each). There's got to be a way to get to the root of this and have jQueryUI work from within Greasemonkey. I'm 90% certain that it's an XPCNativeWrapper issue, so there should be an simple workaround by changing some code in the dialog plugin.
jQuery UI Dialog Throw Errors When Invoked from Greasemonkey
[ "", "javascript", "jquery", "greasemonkey", "" ]
I want to do build a small app that creates MIDI sounds. I've never dealt with sound in programming so I'd like to start with something that's basic and has good documentation. I want to stick with Python since I'm the most comfortable with it and don't want to overwhelm myself, initially. My time is split about 50/50 between Windows and Ubuntu so something that "just works" on both platforms would be really helpful. Any suggestions?
The MIDIUtil Library (hosted [here](http://code.google.com/p/midiutil/) at Google Code) does what you want: write MIDI files from a pure Python library. One nice thing about it (and full disclosure: I'm the author) is that you don't have to keep track of lower-level MIDI events such as note-on and note-off: it handles them for you. As an example, to write a note you would do something like: ``` MyMIDI = MIDIFile(1) track = 0 channel = 0 pitch = 60 time = 0 duration = 1 volume = 100 MyMIDI.addNote(track,channel,pitch,time,duration,volume) ``` Hope this helps
I was looking for a pure-Python library to generate a MIDI file, [mxm's Python MIDI library](http://www.mxm.dk/products/public/pythonmidi) is exactly that. From [this dzone snippet](http://snippets.dzone.com/posts/show/572), there is a single-file version of the above library, [smidi.py](http://larndham.net/service/pys60/smidi.py) ([gist'd here](http://gist.github.com/202595) for posterity) Usage is quite simple: ``` >>> import smidi >>> m = smidi.MidiOutFile('out.mid') >>> m.header() >>> m.start_of_track() >>> m.update_time(0) >>> m.note_on(note=0x40) # single note >>> m.update_time(192) >>> m.note_off(note=0x40) # stop it after 192 >>> m.update_time(0) >>> m.end_of_track() >>> m.eof() ``` Presumably works on Windows (as the original example uses `C:\out.mid` as the output filename), and I've tested it on OS X
Simple, Cross Platform MIDI Library for Python
[ "", "python", "audio", "midi", "" ]
What kinds of options do I have when a dropdown menu is faced with text that is so wide that extending the menu's width to accommodate it is not feasible? I.e., it breaks the page layout, or just looks too ugly if the dropdown is adjusted to fit the long item. Truncation? Truncation plus full hover text? Don't allow items that long? Anyone encountered any elegant solutions to this? Thanks.
I agree with GordonB regarding truncating the options. Excessively long options can be hard to read, and as you mentioned it looks horrible. If your dropdown is populated from user input, however, I'd restrict the length. What can be said with 15 words should be said with 5 ... if it can't, then perhaps a dropdown isn't your best option. For example, if your options are the titles of research papers and their authors, you can probably abbreviate them down to a few key words ("String Theory and You [Brown 2008]"). On the other hand, if the options themselves differ by only a few words and lose meaning if they are truncated (e.g. a list of options like "Peanut butter and grape jelly sandwich with carrot sticks and soy milk" and "Peanut butter and boysenberry jelly sandwich with carrot sticks and 2% milk") maybe you would be better served by displaying all the options sequentially, accompanied by a checkbox or radio button as appropriate. (If you're using ASP.NET, basically I'm saying use a repeater instead of a DropDownList.) This second approach might also allow you to incorporate other elements that you wouldn't be able to in a dropdown. Take a look at [this Amazon search result page](http://www.amazon.com/s/ref=nb_ss_gw?url=search-alias%3Dstripbooks&field-keywords=bacon&x=0&y=0) for ideas.
I realise I'm fairly late to this question, but I've been hunting for an answer and I may have found a fairly elegant solution. Have a look here: * <http://www.getharvest.com/blog/2009/12/dropdown-problems-on-internet-explorer/> * <http://www.dougboude.com/blog/1/2008/05/Viewing-Option-Text-in-IE7-thats-Wider-than-the-Select-List.cfm> The first link talks about a couple of solutions before recommending a solution based on the second link. The idea is that on click, you change the width of the `<select>` tag such that it is big enough to show the full text of the options. By keeping the `<select>` tag inside a div with overflow set to 'hidden', it doesn't screw with the rest of the page. Try it out - it's a pretty good solution.
Recommendations for dropdown menu items that are too wide?
[ "", "javascript", "html", "" ]
When reading files off of a hard drive, mmap is generally regarded as a good way to quickly get data into memory. When working with optical drives, accesses take more time and you have a higher latency to worry about. What approach/abstraction do you use to hide/eliminate as much latency and/or overall load time of the optical drive as possible?
There's no real abstraction you can employ. Optical drives have very specific characteristics that must be optimized for to get the best performance. Some tips: The biggest killer on optical drives is seek time. Where possible, make sure all the files you are reading are sequential on disc and as closely packed as possible. If you must seek, then seek in one direction and as infrequently as possible. Asynchronous reading can also massively improve performance. If you need to load and process files A, B & C, then before processing A you should start reading file B, and while processing B you should be reading file C, and so on. Generally the more data you can read in one go the better, e.g. avoid lots of little reads(). You will only get the theoretical throughput of a disc while reading large amounts of data. Some OSes/drivers will minimize the penalty of reading lots of little files by caching sectors; some will not. Doing lots of exists(filename) checking can also be detrimental on some filesystems / OSes where only parts of the TOC are cached. In our applications we usually pack files into one or more "lumped" files and have them ordered sequentially based on their access order. Some files (and directories) are compressed and read in their entirety before being decompressed in memory. This can be a win if you have a directory that contains a multitude of small files (e.g. XML or scripts). Basically lots of benchmarking and tweaking :)
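The "few big reads" advice can be sketched in plain C++ (the function name and the 1 MiB chunk size are illustrative choices, not tuned values): the file is pulled through one large buffer in a single sequential pass, with no seeking.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Read an entire file sequentially in large fixed-size chunks.
// A bigger buffer means fewer read calls and a better chance of
// approaching the drive's streaming throughput.
std::string read_in_chunks(const char* path, std::size_t chunk = 1 << 20) {
    std::string data;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return data;               // empty result on open failure
    std::vector<char> buf(chunk);
    std::size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        data.append(buf.data(), n);    // one forward pass, no seeks
    std::fclose(f);
    return data;
}
```

On an optical drive the same idea applies one level up as well: read whole packed archives in their on-disc order rather than many scattered small files.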
Minimize or eliminate seeks by reading in giant chunks of data sequentially from a few files (optimally one).
What approach works best for quickly reading files off of optical drives?
[ "", "c++", "c", "file", "optical-drive", "" ]
I'm building a streaming video site. The idea is that the customers should pay for a membership, log in to the system, and be able to view the videos. I'm going with [FlowPlayer](http://flowplayer.org) for showing the actual videos. The problem now is, the videos need to be stored somewhere publicly and the URL to the .flv files needs to be passed to FlowPlayer for it to be able to show them. This creates a problem because anyone can do a view source, download the video, and distribute it all across the internet. I know some people serve images using PHP by sending an image `header()` and then they can do something like: ``` <img src="image.php?userId=1828&img=test.gif" /> ``` The PHP script validates the user ID and serves up the .gif, and the actual URL of the gif is never revealed. Is there any way to do this with .flv or any other video format also? E.g., the file and user ID are passed to the PHP script, it validates them, and it returns the video?
You can set up a directory containing the FLV files on your webserver that can only be accessed by PHP, then in your PHP script you can authenticate the user as usual and simply send a header to the browser telling it to expect an FLV, then echo the raw FLV data: ``` <?php // here is where // you want your // user authentication if ($isAuthenticated) { header("Content-type: video/flv"); echo file_get_contents($pathToFLV); } ?> ``` As Chad Birch discussed, this will only prevent people from linking directly to the video - you can't prevent piracy this way.
The short answer is that no, you're never going to be able to prevent people from downloading your videos if they want to. There are various ways to make it trickier for them to do it, but there's no foolproof method. You're hitting what is basically the entire problem with DRM - you can't show someone your content without giving it to them unencrypted at some point, and if they can view it, they can rip it.
How to serve .flv files using PHP?
[ "", "php", "security", "video", "streaming", "http-headers", "" ]
I just wrote the following function: ``` public string Ebnf { get { var props = GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance); var ruleProps = from p in props where p.PropertyType.IsSubclassOf(typeof(ARule)) select p; var rules = from p in ruleProps select (ARule)p.GetValue(this, null); var ebnfs = from r in rules select r.Name + " = " + r.Ebnf + "."; return string.Join("\n", ebnfs.ToArray()); } } ``` I started wondering if Linq actually saved me space, or whether I was using Linq just for the sake of it: ``` public string EbnfNonLinq { get { var ebnfs = new List<string>(); var props = GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance); foreach (var p in props) { if (p.PropertyType.IsSubclassOf(typeof(ARule))) { var r = (ARule)p.GetValue(this, null); ebnfs.Add(r.Name + " = " + r.Ebnf + "."); } } return string.Join("\n", ebnfs.ToArray()); } } ``` 7 lines of code vs. 5: it's a savings. But I wonder if the density of the first function is too much. (This is not performance critical code, so I am not concerned about that.) Which do you think is prettier, more maintainable, comprehensible, better?
I like the first one a bit better. I think the linq is prettier, but would probably make it the prettiest by getting the entire name in one shot. Something like this, if possible: ``` var ebnfs = from p in props where p.PropertyType.IsSubclassOf(typeof(ARule)) let rule = (ARule)p.GetValue(this, null) select rule.Name + " = " + rule.Ebnf + "."; return String.Join("\n", ebnfs.ToArray()); ``` ^^ warning, untested :) I don't like using .Where or .Select much, so I stick to naked linq and really just use those when it is the only thing I am doing to a collection. As the commenter noted, this does make debugging a bit harder in some cases. However, if it is unlikely that there is something odd going on in rule.Name/Ebnf, then that probably won't be a big problem.
Fluffy syntax is fluffy. Terse syntax is terse. ``` public string Ebnf { get { var props = GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance); string[] ebnfs = props .Where(prop => prop.PropertyType.IsSubclassOf(typeof(ARule))) .Select(prop => (ARule)prop.GetValue(this, null)) .Select(rule => rule.Name + " = " + rule.Ebnf + ".") .ToArray(); return string.Join("\n", ebnfs); } } ```
To Linq or not to Linq - which version is prettier?
[ "", "c#", "linq", "" ]
I almost have the same problem as the guy in this thread: [Convert Float that has period instead of comma?](https://stackoverflow.com/questions/359298/convert-float-that-has-period-instead-of-comma) So that my ``` double x = 234.4; string y = x.ToString(); ``` I get `y == "234,4"`; Even worse ... `Double.Parse("234.4")` throws an exception. I have written a lot of code before I was asked to use period instead of comma, so I would prefer to have **some way to change my CultureInfo at a global level**. Is there some setting in the projects that I can do? I've tried: ``` Application.CurrentCulture = CultureInfo.CreateSpecificCulture("en-US"); Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-US"); Thread.CurrentThread.CurrentUICulture = new CultureInfo("en-US"); ``` They kind of work. They work for most of the application, but **all controls that reside in the tabs of my TabControl still want to use my computer's localized CultureInfo**. Any ideas on how to solve this?
Thanks to Florin Sabau's comment I found the solution, which was to place ``` Application.CurrentCulture = System.Globalization.CultureInfo.CreateSpecificCulture("en-US"); ``` in Main() right before I created my form. Now I won't have to change all my .ToString() and Double.Parse() :-) Yey!
You could try to use ``` double.Parse("...", CultureInfo.InvariantCulture) ``` and ``` x.ToString(CultureInfo.InvariantCulture) ``` in the parts of the program that you are positive you need to have decimal period instead of comma or other regional settings dependent decimal separator. Hope it helps.
C# doubles show comma instead of period
[ "", "c#", "currentculture", "" ]
self.close() is working fine in IE but not in Mozilla. Does anyone know what the issue is and how I can solve it?
Did you open the window using `window.open`? [According to the docs on window.close](https://developer.mozilla.org/En/DOM/Window.close): > This method is only allowed to be called for windows that were opened by a script using the window.open method. If the window was not opened by a script, the following error appears in the JavaScript Console: Scripts may not close windows that were not opened by script.
Try using `window.close()` instead.
self.close() not working for mozilla
[ "", "javascript", "" ]
When creating a unit test for a class that uses the [`HttpContext.Current.Cache` class](http://msdn.microsoft.com/en-us/library/system.web.httpcontext.aspx), I get an error when using NUnit. The functionality is basic - check if an item is in the cache, and if not, create it and put it in: ``` if (HttpContext.Current.Cache["Some_Key"] == null) { myObject = new Object(); HttpContext.Current.Cache.Insert("Some_Key", myObject); } else { myObject = HttpContext.Current.Cache.Get("Some_Key"); } ``` When calling this from a unit test, it fails with a `NullReferenceException` when encountering the first `Cache` line. In Java, I would use [Cactus](http://jakarta.apache.org/cactus/index.html) to test server-side code. Is there a similar tool I can use for C# code? [This SO question](https://stackoverflow.com/questions/244280/creating-a-system-web-cachcing-cache-object-in-a-unit-test) mentions mock frameworks - is this the only way I can test these methods? Also, I don't check if the `Cache` is null as I don't want to write code specifically for the unit test and assume it will always be valid when running on a server. Is this valid, or should I add null checks around the cache?
The way to do this is to avoid direct use of the HttpContext or other similar classes, and substitute them with mocks. After all, you're not trying to test that the HttpContext functions properly (that's microsoft's job), you're just trying to test that the methods got called when they should have. Steps (In case you just want to know the technique without digging through loads of blogs): 1. Create an interface which describes the methods you want to use in your caching (probably things like GetItem, SetItem, ExpireItem). Call it ICache or whatever you like 2. Create a class which implements that interface, and passes methods through to the real HttpContext 3. Create a class which implements the same interface, and just acts like a mock cache. It can use a Dictionary or something if you care about saving objects 4. Change your original code so it doesn't use the HttpContext at all, and instead only ever uses an ICache. The code will then need to get an instance of the ICache - you can either pass an instance in your classes constructor (this is all that dependency injection really is), or stick it in some global variable. 5. In your production app, set the ICache to be your real HttpContext-Backed-Cache, and in your unit tests, set the ICache to be the mock cache. 6. Profit!
I agree with the others that using an interface would be the best option but sometimes it’s just not feasible to change an existing system around. Here’s some code that I just mashed together from one of my projects that should give you the results you’re looking for. It’s the farthest thing from pretty or a great solution but if you really can’t change your code around then it should get the job done. ``` using System; using System.IO; using System.Reflection; using System.Text; using System.Threading; using System.Web; using NUnit.Framework; using NUnit.Framework.SyntaxHelpers; [TestFixture] public class HttpContextCreation { [Test] public void TestCache() { var context = CreateHttpContext("index.aspx", "http://tempuri.org/index.aspx", null); var result = RunInstanceMethod(Thread.CurrentThread, "GetIllogicalCallContext", new object[] { }); SetPrivateInstanceFieldValue(result, "m_HostContext", context); Assert.That(HttpContext.Current.Cache["val"], Is.Null); HttpContext.Current.Cache["val"] = "testValue"; Assert.That(HttpContext.Current.Cache["val"], Is.EqualTo("testValue")); } private static HttpContext CreateHttpContext(string fileName, string url, string queryString) { var sb = new StringBuilder(); var sw = new StringWriter(sb); var hres = new HttpResponse(sw); var hreq = new HttpRequest(fileName, url, queryString); var httpc = new HttpContext(hreq, hres); return httpc; } private static object RunInstanceMethod(object source, string method, object[] objParams) { var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic; var type = source.GetType(); var m = type.GetMethod(method, flags); if (m == null) { throw new ArgumentException(string.Format("There is no method '{0}' for type '{1}'.", method, type)); } var objRet = m.Invoke(source, objParams); return objRet; } public static void SetPrivateInstanceFieldValue(object source, string memberName, object value) { var field = source.GetType().GetField(memberName, BindingFlags.GetField | BindingFlags.NonPublic | BindingFlags.Instance); if (field == null) { throw new ArgumentException(string.Format("Could not find the private instance field '{0}'", memberName)); } field.SetValue(source, value); } } ```
Unit test HttpContext.Current.Cache or other server-side methods in C#?
[ "", "c#", "unit-testing", "nunit", "mocking", "" ]
So I need a 2-dimensional `ConcurrentHashMap`. It has to be as blazing fast as possible, as I'm going to be adding to and updating its values extremely frequently. It's in a multithreaded application, hence the choice to use ConcurrentHashMap instead of just HashMap. Both the "x" and "y" indices are integers with a known range (0 through 40,000,000). What I need to know is: What's the most efficient way to implement this so it'll be as speedy as possible? The most obvious route is to do a literal 2-D hashmap: `ConcurrentHashMap<Integer, ConcurrentHashMap<Integer, ValueObj>> foo;` Or I could make a private class "IntPair" with two properties x and y, and use that as a key... though if I do that, what's the most efficient way to do `equals()` and `hashcode()`? and will I wind up allocating too many new `IntPair`s? Could I keep a set of `IntPair`s for each x/y I've assigned, and then use a purely reflexive equals() such that I'm just checking for the exact same object instance? --- ## Update: Now that I've taken a closer look at Integer.valueOf(int), the specific caching model it uses wouldn't make sense here, since I'm dealing with a very sparse matrix with unpredictable entries. I really need to be caching all those IntPairs which are used, not a prespecified subset. Intuitively, it seems to me that looking up an IntPair in a big map to see if I've already created it would, in fact, be more-or-less the same as just looking it up in the big "2-D" ConcurrentHashMap anyway, wouldn't it? So it seems the solution here is really to just use `new IntPair(x,y)` each time I look up a key. Yes?
It depends on how sparse your (x,y) points are, in the 40,000,000 x 40,000,000 matrix. My guess is that the matrix is going to be quite sparse anyway, so creating a lot of `ConcurrentHashMap`s is going to be expensive. Your (immutable) `IntPair` suggestion seems more attractive in comparison. As you've suggested, you can even cache some of these pairs to improve performance (see [`Integer.valueOf(int)`](http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#valueOf(int)) to see how this can be implemented using a static nested class and a static factory method). Since the hashcode will always be required, you can pre-compute it in the constructor and save it as a final field. To compute equals, you could use the identity equality for objects in the cache, otherwise you'll need to compare x and y individually. EDIT: Here's the [source code](http://www.docjar.com/html/api/java/lang/Integer.java.html) (OpenJDK) for [`Integer.valueOf(int)`](http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#valueOf(int)).
`ConcurrentHashMap` is quite large, so you probably don't want a collection of them. Short lived objects are actually very fast to allocate. Are you going to have to create the `Integers` anyway? You could intern the coordinate objects, but the cost for just a lookup would probably be comparable to creating them anyway. The real win with `Integer` is that the same instances are shared when you keep around lots of them for some time. If performance is really a huge issue, you could write (or use) a map-type object that maps longs to references. I wouldn't be surprised to see custom maps out there which also have functionality associated with coordinate systems (like finding nearest or within a range).
2-D (concurrent) HashMap: 2-property key type? hashmap of hashmaps? [update]
[ "", "java", "performance", "hashmap", "concurrenthashmap", "" ]
I have a set of classes I wish to serialize the data from. There is a lot of data though, (we're talking a std::map with up to a million or more class instances). Not wishing to optimize my code too early, I thought I'd try a simple and clean XML implementation, so I used tinyXML to save the data out to XML, but it was just far too slow. So I've started looking at using Boost.Serialization writing and reading standard ascii or binary. It seems to be much better suited to the task as I don't have to allocate all this memory as an overhead before I get started. My question is essentially how to go about planning an optimal serialization strategy for a file format. I don't particularly want to serialize the whole map if it's not necessary, as it's really only the contents I'm after. Having played around with serialization a little (and looked at the output), I don't understand how loading the data back in could know when it's reached the end of the map for example, if I simply save out all the items one after another. What issues do you need to consider when planning a serialization strategy? Thanks.
There are many advantages to boost.serialization. For instance, as you say, just including a method with a specified signature allows the framework to serialize and deserialize your data. Also, boost.serialization includes serializers and readers for all the standard STL containers, so you don't have to worry whether all keys have been stored (they will be) or how to detect the last entry in the map when deserializing (it will be detected automatically). There are, however, some considerations to make. For example, if you have a field in your class that is calculated, or used for speed-up, such as indexes or hash tables, you don't have to store these, **but** you have to take into account that you must reconstruct these structures from the data read from the disk. As for the "file format" you mention, I think sometimes we try to focus on the format rather than on the data. I mean, the exact format of the file doesn't matter as long as you are able to retrieve the data seamlessly using (say) boost.serialization. If you want to share the file with other utilities that don't use serialization, that's another thing. But just for the purposes of (de)serialization, you don't have to care about the internal file format.
Read this [FAQ](http://www.parashift.com/c++-faq-lite/serialization.html)! Does that help to get started?
C++ serialization of complex data using Boost
[ "", "c++", "serialization", "stl", "boost", "" ]
The last two lines of this code do not work correctly -- the results are coming back from the LINQ query. I'm just not sure how to successfully bind the indicated columns in the results to the textfield and valuefield of the dropdownlist: ``` protected void BindMarketCodes() { List<lkpMarketCode> mcodesList = new List<lkpMarketCode>(); LINQOmniDataContext db = new LINQOmniDataContext(); var mcodes = from p in db.lkpMarketCodes orderby 0 select p; mcodesList = mcodes.ToList<lkpMarketCode>(); //bind to Country COde droplist dd2.DataSource = mcodesList; dd2.DataTextField = mcodesList[0].marketName; dd2.DataValueField = mcodesList[0].marketCodeID.ToString(); } ```
See revised code below ``` protected void BindMarketCodes() { using (var dataContext = new LINQOmniDataContext()) { //bind to Country COde droplist dd2.DataSource = from p in dataContext.lkpMarketCodes orderby p.marketName select new {p.marketCodeID, p.marketName}; dd2.DataTextField = "marketName"; dd2.DataValueField = "marketCodeID"; dd2.DataBind(); } } ```
``` protected void BindMarketCodes() { using(var dc = new LINQOmniDataContext()) { dd2.DataSource = from p in dc.lkpMarketCodes orderby 0 select new {p.marketName, p.marketCodeID }; dd2.DataTextField = "marketName"; dd2.DataValueField = "marketCodeID"; dd2.DataBind(); } } ``` // no need to use ToList() // no need to use a temp list; // using an anonymous type will limit the columns in your resulting SQL select // make sure to wrap in a using block;
How to bind LINQ data to dropdownlist
[ "", "c#", "asp.net", "linq", "ado.net", "" ]
Where does scripting end and programming begin? ActionScript 3 and JavaScript/JScript are so different...
The distinction was meaningful once, but the line is getting increasingly blurred to the point where I don't think it is useful today. * historically, scripting languages are interpreted, programming languages are compiled. But with the advent of VMs and JIT, the line is increasingly blurred. * scripting is when an existing application or tool is manipulated programmatically (e.g. office macros), rather than building an app from scratch. Again the line is getting blurred, because libraries and frameworks mean that everybody is building on existing tools. Also, APIs allow you to use traditional compiled languages to manipulate applications, e.g. in Office you can now automate using .net, which I suppose means you can write Excel macros in managed C++. * languages tend to outgrow their niche. Perl was conceived as a scripting language for text manipulation, but has since grown into a full-fledged programming language. * scripting languages have traditionally been higher-level, with features like garbage collection and implicit typing which allowed the developer to focus on the task rather than the metal. Real programming languages like C forced you to be more concerned about memory management and machine architecture. Again, the line is being blurred by newer languages. For example, C# has high-level features like garbage collection and dynamic types, and low-level features like pointers. For these reasons *scripting* has been seen as more accessible, where *programming* has been seen as more demanding and hard-core. Therefore the term *scripting* can be used disparagingly, as in "JavaScript is just a scripting language, not a real programming language" or "he is just a scripter, not a real programmer".
Is there a difference? You're just writing software in different languages for different run-time environments.
Scripting and Programming
[ "", "javascript", "actionscript", "scripting", "" ]
I'd like to generate a secure one-click-access type of url similar to the examples below. I'll be using PHP but that is irrelevant as I'm just looking to understand the underlying concept. Some answers suggest using a GUID, but I don't think that will give me an absolutely unique, secure URL like below. ``` # Google Calendar 3qq6jlu04ptlhmb9fencsu5t2k # Private 3qq6jlu04ptlhmb9fencsu5t2k # Private 'token' 163a0afe7fkb1ba2acd04c11ef0eefe8 # LogMeIn # 1024 bit - 128 Character URL 72oxuj0fzefqo3fu04xjtvmd0adj2948rfh7g5by4forkwcy7t651z7hcb6kjjgqkxmvmfqpyrcfy15z1fto8ewcxstjc6avicag7d5qnoimsm19kb9kgi9i7v6z01d5 ``` I'm leaning toward that 128 character, 1024 bit style, as it seems very secure. I guess I could make four MD5 hashes and merge them, but is that really effective? I have two specific intentions for a url such as this, but I'm sure there are others who may find this useful. 1) Instant log in shortcut/icon for users 2) Single-use url (Password recovery links)
Update: For something like a single use URL, I'd go with the GUID-esque approach that has been suggested. Make sure to have a short lifespan on the link. For an instant log-in, there is no really secure way to have a single URL. Yes you can generate a URL which is going to be damn near impossible to guess, but that doesn't give you super security. If you want to remember users, why not use an encrypted authentication cookie? The example you give, Google Calendar, doesn't log you in via the URL alone; you have to be authenticated first before the URL means anything. E.g. clicking on google calendar from my gmail gives me: <https://www.google.com/calendar/render?tab=mc&gsessionid=-LTeHrnKoeAbDcVaN68NHA> That doesn't help you access my account unless you've first authenticated as me. Old post: You can generate a GUID in PHP using [com\_create\_guid](http://uk.php.net/com_create_guid) and use that. On linux I think you can use [uuid\_create](http://manpages.unixforum.co.uk/man-pages/unix/freebsd-6.2/3/uuid_create-man-page.html), or this code from [here](http://algorytmy.pl/doc/php/function.com-create-guid.php): ``` <?php function guid(){ if (function_exists('com_create_guid')){ return com_create_guid(); }else{ mt_srand((double)microtime()*10000);//optional for php 4.2.0 and up. $charid = strtoupper(md5(uniqid(rand(), true))); $hyphen = chr(45);// "-" $uuid = chr(123)// "{" .substr($charid, 0, 8).$hyphen .substr($charid, 8, 4).$hyphen .substr($charid,12, 4).$hyphen .substr($charid,16, 4).$hyphen .substr($charid,20,12) .chr(125);// "}" return $uuid; } } echo guid(); ?> ```
Try [uniqid](http://uk.php.net/uniqid) - and perhaps combine with an md5 hash, as given in the examples: ``` // no prefix // works only in PHP 5 and later versions $token = md5(uniqid()); // better, difficult to guess $better_token = md5(uniqid(rand(), true)); ``` I must note however that no urls generated in this way (whatever the hash algorithm) will be 'secure', simply very difficult to guess.
Generating Private, Unique, Secure URLs
[ "", "php", "security", "url", "" ]
I am writing a C++ DLL that is called by an external program. 1.) I take an array of strings (as char \*var) as an argument from this program. 2.) I want to iterate through this array and call a COM function on each element of the string array. The COM function must take a BSTR: ``` DLL_EXPORT(void) runUnitModel(char *rateMaterialTypeNames) { HRESULT hr = CoInitialize(NULL); // Create the interface pointer. IUnitModelPtr pIUnit(__uuidof(BlastFurnaceUnitModel)); pIUnit->initialiseUnitModel(); int i; for(i=0; i < sizeOfPortRatesArray; i++) pIUnit->createPort(SysAllocString(BSTR((const char *)rateMaterialTypeNames[i]))); ``` I think its the SysAllocString(BSTR((const char \*)rateMaterialTypeNames[i])) bit that is giving me problems. I get an access violation when the programs runs. Is this the right way to access the value of the rateMaterialTypeName at i? Note I am expecting something like "IronOre" as the value at i, not a single character.
If you're using Microsoft's **ATL**, you can use the [CComBSTR](http://msdn.microsoft.com/en-us/library/zh7x9w3f(VS.80).aspx) class. It will accept a char\* and create a BSTR from it; also, you don't need to worry about deleting the BSTR, as all that happens in the dtor for CComBSTR. Also, see [Matthew Xavier's](https://stackoverflow.com/questions/586734/pass-an-element-from-c-type-string-array-to-a-com-object-as-bstr-in-c/586787#586787) answer - it doesn't look like you're passing your array of strings into that function properly. Hope this helps
Because a variable holding a C string is just a pointer to the first element (a char\*), in order to pass an array of C strings, the parameter to your function should be a char\*\*: ``` DLL_EXPORT(void) runUnitModel(char **rateMaterialTypeNames) ``` This way, when you evaluate rateMaterialTypeNames[i], the result will be a char\*, which is the parameter type you need to pass to SysAllocString(). Added note: you will also need to convert the strings to wide chars at some point, as Tommy Hui's answer points out.
Pass an element from C type string array to a COM object as BSTR? (in C++)
[ "", "c++", "c", "com", "bstr", "" ]
Ok, a really simple question but I am too thick to figure it out. I want to get the difference between two times. For example, "1:07" (1 minute and 7 seconds) and "3:01" (3 minutes and 1 second). It will only be ever minutes and seconds. I have been trying to make use of this: ``` function timeDiff($firstTime,$lastTime) { // convert to unix timestamps $firstTime=strtotime($firstTime); $lastTime=strtotime($lastTime); // perform subtraction to get the difference (in seconds) between times $timeDiff=$lastTime-$firstTime; // return the difference return $timeDiff; } ``` But I think I am running in the wrong direction? Thank you for any help. ## EDIT I tried this: echo timeDiff('1:07','2:30'); I got this output "4980" What is the above? Is it seconds? I have no idea how to get it as "1:23" which is the difference. ## EDIT 2 Thank you all, I learnt so much just from this one thread, esp. Paul's. It works very well and I like the defensiveness!
**You can't use strtotime as it will interpret MM:SS as HH:MM** - that's why you are getting higher values than expected. You could simply prepend your MM:SS values with '00:' to make them look like HH:MM:SS. *Note however that strtotime, if just given HH:MM:SS, will give a timestamp for **today**, which is fine for throwaway code. Don't use that technique for anything important, consider what happens if your two calls to strtotime straddle midnight!* Alternatively, something like this will turn a MM:SS value into a timestamp you can do arithmetic on ``` function MinSecToSeconds($minsec) { if (preg_match('/^(\d+):(\d+)$/', $minsec, $matches)) { return $matches[1]*60 + $matches[2]; } else { trigger_error("MinSecToSeconds: Bad time format $minsec", E_USER_ERROR); return 0; } } ``` It's a little more defensive than using explode, but shows another approach!
This should give you the difference between the two times in seconds. ``` $firstTime = '1:07'; $secondTime = '3:01'; list($firstMinutes, $firstSeconds) = explode(':', $firstTime); list($secondMinutes, $secondSeconds) = explode(':', $secondTime); $firstSeconds += ($firstMinutes * 60); $secondSeconds += ($secondMinutes * 60); $difference = $secondSeconds - $firstSeconds; ```
PHP: Work out duration between two times
[ "", "php", "" ]
I'm dealing with a Postgres table (called "lives") that contains records with columns for time\_stamp, usr\_id, transaction\_id, and lives\_remaining. I need a query that will give me the most recent lives\_remaining total for each usr\_id 1. There are multiple users (distinct usr\_id's) 2. time\_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time\_stamp. 3. trans\_id is unique only for very small time ranges: over time it repeats 4. lives\_remaining (for a given user) can both increase and decrease over time example: ``` time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 ``` As I will need to access other columns of the row with the latest data for each given usr\_id, I need a query that gives a result like this: ``` time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 ``` As mentioned, each usr\_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: ``` SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp ``` Instead, I need to use both time\_stamp (first) and trans\_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows.
This is the hacked up query that I've gotten to work: ``` SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id ``` Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans\_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion [here](https://stackoverflow.com/questions/121387/sql-fetch-the-row-which-has-the-max-value-for-a-column). Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: ``` create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1); ```
On a table with 158k pseudo-random rows (usr\_id uniformly distributed between 0 and 10k, `trans_id` uniformly distributed between 0 and 30), *By query cost, below, I am referring to Postgres' cost based optimizer's cost estimate (with Postgres' default `xxx_cost` values), which is a weighted function estimate of required I/O and CPU resources; you can obtain this by firing up PgAdminIII and running "Query/Explain (F7)" on the query with "Query/Explain options" set to "Analyze"* * Quassnoy's query has a cost estimate of 745k (!), and completes in 1.3 seconds (given a compound index on (`usr_id`, `trans_id`, `time_stamp`)) * Bill's query has a cost estimate of 93k, and completes in 2.9 seconds (given a compound index on (`usr_id`, `trans_id`)) * **Query #1 below** has a cost estimate of 16k, and completes in 800ms (given a compound index on (`usr_id`, `trans_id`, `time_stamp`)) * **Query #2 below** has a cost estimate of 14k, and completes in 800ms (given a compound function index on (`usr_id`, `EXTRACT(EPOCH FROM time_stamp)`, `trans_id`)) + this is Postgres-specific * **Query #3 below** (Postgres 8.4+) has a cost estimate and completion time comparable to (or better than) query #2 (given a compound index on (`usr_id`, `time_stamp`, `trans_id`)); it has the advantage of scanning the `lives` table only once and, should you temporarily increase (if needed) [work\_mem](http://www.postgresql.org/docs/current/static/runtime-config-resource.html#GUC-WORK-MEM) to accommodate the sort in memory, it will be by far the fastest of all queries. All times above include retrieval of the full 10k rows result-set. Your goal is minimal cost estimate *and* minimal query execution time, with an emphasis on estimated cost. Query execution can depend significantly on runtime conditions (e.g. whether relevant rows are already fully cached in memory or not), whereas the cost estimate does not. On the other hand, keep in mind that the cost estimate is exactly that, an estimate.
The best query execution time is obtained when running on a dedicated database without load (e.g. playing with pgAdminIII on a development PC.) Query time will vary in production based on actual machine load/data access spread. When one query appears slightly faster (<20%) than the other but has a *much* higher cost, it will generally be wiser to choose the one with higher execution time but lower cost. When you expect that there will be no competition for memory on your production machine at the time the query is run (e.g. the RDBMS cache and filesystem cache won't be thrashed by concurrent queries and/or filesystem activity) then the query time you obtained in standalone (e.g. pgAdminIII on a development PC) mode will be representative. If there is contention on the production system, query time will degrade proportionally to the estimated cost ratio, as the query with the lower cost does not rely as much on cache *whereas* the query with higher cost will revisit the same data over and over (triggering additional I/O in the absence of a stable cache), e.g.: ``` cost | time (dedicated machine) | time (under load) | -------------------+--------------------------+-----------------------+ some query A: 5k | (all data cached) 900ms | (less i/o) 1000ms | some query B: 50k | (all data cached) 900ms | (lots of i/o) 10000ms | ``` **Do not forget to run `ANALYZE lives` once after creating the necessary indices.** --- **Query #1** ``` -- incrementally narrow down the result set via inner joins -- the CBO may elect to perform one full index scan combined -- with cascading index lookups, or as hash aggregates terminated -- by one nested index lookup into lives - on my machine -- the latter query plan was selected given my memory settings and -- histogram SELECT l1.* FROM lives AS l1 INNER JOIN ( SELECT usr_id, MAX(time_stamp) AS time_stamp_max FROM lives GROUP BY usr_id ) AS l2 ON l1.usr_id = l2.usr_id AND l1.time_stamp = l2.time_stamp_max INNER JOIN ( SELECT usr_id, 
time_stamp, MAX(trans_id) AS trans_max FROM lives GROUP BY usr_id, time_stamp ) AS l3 ON l1.usr_id = l3.usr_id AND l1.time_stamp = l3.time_stamp AND l1.trans_id = l3.trans_max ``` **Query #2** ``` -- cheat to obtain a max of the (time_stamp, trans_id) tuple in one pass -- this results in a single table scan and one nested index lookup into lives, -- by far the least I/O intensive operation even in case of great scarcity -- of memory (least reliant on cache for the best performance) SELECT l1.* FROM lives AS l1 INNER JOIN ( SELECT usr_id, MAX(ARRAY[EXTRACT(EPOCH FROM time_stamp),trans_id]) AS compound_time_stamp FROM lives GROUP BY usr_id ) AS l2 ON l1.usr_id = l2.usr_id AND EXTRACT(EPOCH FROM l1.time_stamp) = l2.compound_time_stamp[1] AND l1.trans_id = l2.compound_time_stamp[2] ``` **2013/01/29 update** Finally, as of version 8.4, Postgres supports [Window Function](http://www.postgresql.org/docs/8.4/static/functions-window.html#FUNCTIONS-WINDOW-TABLE) meaning you can write something as simple and efficient as: **Query #3** ``` -- use Window Functions -- performs a SINGLE scan of the table SELECT DISTINCT ON (usr_id) last_value(time_stamp) OVER wnd, last_value(lives_remaining) OVER wnd, usr_id, last_value(trans_id) OVER wnd FROM lives WINDOW wnd AS ( PARTITION BY usr_id ORDER BY time_stamp, trans_id ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING ); ```
I would propose a clean version based on `DISTINCT ON` (see [docs](http://www.postgresql.org/docs/9.3/static/sql-select.html#SQL-DISTINCT)): ``` SELECT DISTINCT ON (usr_id) time_stamp, lives_remaining, usr_id, trans_id FROM lives ORDER BY usr_id, time_stamp DESC, trans_id DESC; ```
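To make the greatest-per-group semantics concrete, here is the same selection expressed in plain Python over the sample rows from the question. This is an illustration only; the function name and the dict-based approach are mine, not part of either query.

```python
# Greatest-per-group in plain Python, mirroring what DISTINCT ON computes:
# for each usr_id, keep the row with the maximum (time_stamp, trans_id).
# Rows are (time_stamp, lives_remaining, usr_id, trans_id); the ISO-format
# timestamp strings from the question sort correctly as plain strings.
rows = [
    ("2000-01-01 07:00", 1, 1, 1),
    ("2000-01-01 09:00", 4, 2, 2),
    ("2000-01-01 10:00", 2, 3, 3),
    ("2000-01-01 10:00", 1, 2, 4),
    ("2000-01-01 11:00", 4, 1, 5),
    ("2000-01-01 11:00", 3, 1, 6),
    ("2000-01-01 13:00", 3, 3, 1),
]

def latest_per_user(rows):
    """Keep, for each usr_id, the row with the max (time_stamp, trans_id)."""
    best = {}
    for ts, lives, usr, trans in rows:
        key = (ts, trans)  # same tie-break order as ORDER BY time_stamp, trans_id
        if usr not in best or key > (best[usr][0], best[usr][3]):
            best[usr] = (ts, lives, usr, trans)
    return [best[u] for u in sorted(best)]

for row in latest_per_user(rows):
    print(row)
```

Running this over the sample data yields one row per user, matching what the `DISTINCT ON (usr_id) ... ORDER BY usr_id, time_stamp DESC, trans_id DESC` query returns.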
PostgreSQL - fetch the rows which have the Max value for a column in each GROUP BY group
[ "", "sql", "postgresql", "query-optimization", "cbo", "cost-based-optimizer", "" ]
How can I match a space character in a PHP regular expression? I mean like "gavin schulz", the space in between the two words. I am using a regular expression to make sure that I only allow letters, numbers and spaces. But I'm not sure how to find the space. This is what I have right now: ``` $newtag = preg_replace("/[^a-zA-Z0-9s|]/", "", $tag); ```
If you're looking for a space, that would be `" "` (one space). If you're looking for one or more, it's `" *"` (that's *two* spaces and an asterisk) or `" +"` (one space and a plus). If you're looking for common spacing, use `"[ X]"` or `"[ X][ X]*"` or `"[ X]+"` where `X` is the physical tab character (and each is preceded by a single space in all those examples). These will work in *every*\* regex engine I've ever seen (some of which don't even have the one-or-more `"+"` character, ugh). If you know you'll be using one of the more modern regex engines, `"\s"` and its variations are the way to go. In addition, I believe word boundaries match start and end of lines as well, important when you're looking for words that may appear without preceding or following spaces. For PHP specifically, [this page](http://www.wellho.net/regex/php.html) may help. From your edit, it appears you want to remove all invalid characters. The start of this is (note the space inside the regex): ``` $newtag = preg_replace ("/[^a-zA-Z0-9 ]/", "", $tag); # ^ space here ``` If you also want trickery to ensure there's only one space between each word and none at the start or end, that's a little more complicated (and probably another question) but the basic idea would be: ``` $newtag = preg_replace ("/ +/", " ", $tag); # convert all multispaces to space $newtag = preg_replace ("/^ /", "", $tag); # remove space from start $newtag = preg_replace ("/ $/", "", $tag); # and end ```
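The same three-step cleanup can also be sketched in Python's `re` module, for anyone porting the idea. The helper name `clean_tag` is hypothetical, and these simple patterns behave the same in PCRE and Python:

```python
import re

def clean_tag(tag):
    """Keep letters, digits, and spaces; collapse runs; trim the ends."""
    tag = re.sub(r"[^a-zA-Z0-9 ]", "", tag)  # strip everything else
    tag = re.sub(r" +", " ", tag)            # collapse multiple spaces to one
    tag = re.sub(r"^ | $", "", tag)          # drop leading/trailing space
    return tag

print(clean_tag("  gavin   schulz! "))  # -> gavin schulz
```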
# Cheat Sheet Here is a small cheat sheet of everything you need to know about whitespace in regular expressions: ### `[[:blank:]]` Space or tab only, not newline characters. It is the same as writing `[ \t]`. ### `[[:space:]]` & `\s` `[[:space:]]` and `\s` are the same. They will both match any whitespace character: spaces, newlines, tabs, etc... ### `\v` Matches vertical Unicode whitespace. ### `\h` Matches *horizontal* whitespace, including Unicode characters. It will also match spaces, tabs, and non-breaking/mathematical/ideographic spaces. ### `x` (e**X**tended flag) Ignore all whitespace. Keep in mind that this is a flag, so you will add it to the end of the regex like `/hello/gmx`. This flag will ignore whitespace in your regular expression. For example, if you write an expression like `/hello world/x`, it will match `helloworld`, but not `hello world`. The extended flag also allows comments in your regex. ### Example ``` /helloworld #hello this is a comment/ ``` If you need to match a literal space while using the extended flag, you can escape it with `\`.
Matching a space in regex
[ "", "php", "regex", "" ]
When you pass a collection like a list or an array to another function in Python, does it make a copy of it, or is it just a pointer?
Python [passes references-to-objects by value](https://web.archive.org/web/20120615042202/http://testingreflections.com/node/view/5126). > Python passes references-to-objects by > value (like Java), and everything in > Python is an object. This sounds > simple, but then you will notice that > some data types seem to exhibit > pass-by-value characteristics, while > others seem to act like > pass-by-reference... what's the deal? > > It is important to understand mutable > and immutable objects. Some objects, > like strings, tuples, and numbers, are > immutable. Altering them inside a > function/method will create a new > instance and the original instance > outside the function/method is not > changed. Other objects, like lists > and dictionaries are mutable, which > means you can change the object > in-place. Therefore, altering an > object inside a function/method will > also change the original object > outside.
Thing is, the whole reference/value concept doesn't fit Python. Python has no "value" of a variable. Python has only objects and names that refer to objects. So when you call a function and put a "name" inside the parenthesis, like this: ``` def func(x): # defines a function that takes an argument ... # do something here func(myname) # calling the function ``` The actual object that `myname` is pointing to is passed, not the **name** `myname` **itself**. Inside the function **another name** (`x`) is given to refer to the same object passed. You can modify the object inside the function if it is mutable, but you **can't change what the outside name is pointing to**. Just the same as what happens when you do ``` anothername = myname ``` Therefore I can answer your question with: **it is "pass by value" but all values are just references to objects.**
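A minimal script demonstrating both behaviors described above (the function names are illustrative): rebinding a name inside a function does not affect the caller, while mutating the passed object does.

```python
def rebind(x):
    x = x + [99]   # builds a NEW list and rebinds the local name x
    return x

def mutate(x):
    x.append(99)   # changes the very object the caller's name refers to

a = [1, 2]
rebind(a)
print(a)           # [1, 2]      outer name untouched by the rebinding

b = [1, 2]
mutate(b)
print(b)           # [1, 2, 99]  in-place change is visible outside
```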
Passing values in Python
[ "", "python", "pass-by-reference", "pass-by-value", "" ]
I've got a console program written in C# which runs under user foo. The program creates a file. In some situations a web application running under the IUSR needs to be able to delete the files created by the console app. I'd like to grant DELETE (or anything equivalent) to IUSR when the file is created. How can I do that in C# ? I've found [FileIOPermission](http://msdn.microsoft.com/en-us/library/system.security.permissions.fileiopermission(VS.80).aspx) and I'm not sure what that's for but as you can't specify a particular user I'm pretty sure that's now what I need. Anyone got a good pointer on how to do this ? [By the way I realise that in some circs granting the IUSR DELETE rights on any files would be a reasonably dodgy thing to do but in this case the nature of the files involved means I'm happy to grant these rights to IUSR]
Use Windows Explorer -> select the directory where the file resides -> right-click -> Properties -> Security tab -> give the "Modify" right to the IUSR\_xxx user account. I presume that you have physical access to the computer that runs both the console app and the web app. Edited: For programmatic setting of NTFS permissions you need to fiddle around with the **System.Security.AccessControl.FileSecurity** class and the **File.SetAccessControl** method. Hope it helps.
@Sabau: thanks for the amendment to your answer - it inspired me to try again and this time I seem to have got it worked out. I wrote a little test program so that others can see how it's done. For my testing I gave the IUSR full control but obviously you can add/deny whatever you like. ``` using System; using System.Collections.Generic; using System.Text; using System.IO; using System.Security.Permissions; using System.Security.Principal; using System.Security.AccessControl; namespace GrantingFilePermsTests { class Program { static void Main(string[] args) { string strFilePath1 = "E:/1.txt"; string strFilePath2 = "E:/2.txt"; if (File.Exists(strFilePath1)) { File.Delete(strFilePath1); } if (File.Exists(strFilePath2)) { File.Delete(strFilePath2); } // Dispose the streams returned by Create so the files are not left locked. File.Create(strFilePath1).Dispose(); File.Create(strFilePath2).Dispose(); // Get a FileSecurity object that represents the // current security settings. FileSecurity fSecurity = File.GetAccessControl(strFilePath1); // Add the FileSystemAccessRule to the security settings. fSecurity.AddAccessRule(new FileSystemAccessRule("IUSR_SOMESERVER",FileSystemRights.FullControl,AccessControlType.Allow)); // Set the new access settings. File.SetAccessControl(strFilePath1, fSecurity); } } } ``` Thanks to all for their replies.
C# file creation - how to grant IUSR DELETE?
[ "", "c#", "security", "permissions", "ntfs", "" ]
I have created a .Net application to run on an App Server that gets requests for a report and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it out. The application is run on a server which is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the Remote Session the application starts raising exceptions such as: *Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed* This type of error is never raised when the Remote Session is active. The server running the app is running Windows Server 2003, my box which creates the connection is Windows XP. I appreciate this is fairly weird, however I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue? EDIT: I bit the bullet and created the application as a Windows service; obviously this doesn't take long, I just wasn't convinced it would solve the problem. Anyway it doesn't!!! I have also tried removing the multi-thread code that was calling the print function asynchronously. I did this in order to simplify the app and narrow down the reason it could fail. Anyway, this didn't improve the situation either! EDIT: The two errors I get are: > System.Runtime.InteropServices.COMException > (0x80000201): Invalid printer > specified. at > CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String > newVal) at > CrystalDecisions.CrystalReports.Engine.PrintOptions.set\_PrinterName(String > value) at > Dsa.PrintServer.Service.Service.PrintCrystalReport(Report > report) The printer isn't invalid, this is confirmed when 60 seconds later the time ticks and the report is printed successfully. And > The request could not be submitted for > background processing. 
at > CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext > pRequestContext) at > CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext > reqContext) --- End of inner > exception stack trace --- at > CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception > e) at > CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext > reqContext) at > CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 > nCopies, Boolean collated, Int32 > startPageN, Int32 endPageN) at > CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 > nCopies, Boolean collated, Int32 > startPageN, Int32 endPageN) at > Dsa.PrintServer.Service.Service.PrintCrystalReport(Report > report) EDIT: I ran filemon to check if there were any access issues. At the point when the error occurs filemon reports Request: OPEN | Path: C:\windows\assembly\gac\_msil\system\2.0.0.0\_\_b77a5c561934e089\ws2\_32.dll | Result: NOT FOUND | Other: Attributes Error
Our particular problem has been solved. Basically when the reports were created they were saved with information about printers. Basically a particular printer had been set for the report and saved. This printer no longer exists which is why the report had started failing. Basically we had to open the report designer and remove any association with printers in the report.
We have encountered these errors several times in the past few years. Many times I wished for a more specific error message. First, I would verify that there aren't multiple Crystal Reports versions installed. In our experience, we found that Crystal Reports 9.0 doesn't seem to play well with 10. Uninstalling version 9 seemed to help some of our customers. If both are installed, I highly recommend uninstalling *both*, then re-installing Crystal Reports 10. One of our earliest Crystal Reports errors was the dreaded "The request could not be submitted for background processing." Reports would work fine for a while, then suddenly they would stop. After looking at the code, I found a place where we were not disposing of a ReportDocument. Correctly disposing this document fixed the issue. Lately, we hit a spate of "The request could not be submitted for background processing." and "Invalid Printer" errors. One customer's server had several network printers defined by IP address. Printing would work just fine for a while, then suddenly, *bam*, the customer got the "Invalid Printer" error and called our support. A fellow developer fixed the "Invalid Printer" problem by doing all of the following things: 1. Edit and save the .rpt file in Visual Studio 2005. We had been keeping the report format compatible with Crystal Reports 9, because we wanted our changes to be distributable to our customers still using older versions. 2. Save Crystal Reports files with the "No Printer" option. From the Visual Studio 2005 menu, select Crystal Reports/Design/Printer Setup, then select the "No Printer" check box. 3. 
We changed a formula that displayed "Page N of M" from something strange like this: ``` "Page " + Left (CStr (PageNumber), Length (CStr (PageNumber)) - 3) + " of " + Left (CStr (TotalPageCount), Length (CStr (TotalPageCount)) - 3) ``` to this: ``` "Page " + CStr(PageNumber,0) + " of " + CStr(TotalPageCount,0) ``` Unfortunately, I have no idea which (any or all) of the above actions may have fixed the error. I say "may", because for all I know, the error magically went away because of some IT network change. Finally, have you read [this white paper?](https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a6f5e8-8164-2b10-7ca4-b5089df76b33) While it did not necessarily solve our problems, it was very interesting.
.Net Crystal Report printing application running on terminal service connection errors when session is disconnected
[ "", "c#", "crystal-reports", "remote-desktop", "" ]
I want to know when I should use `include` or `require` and what's the advantage of each one.
[`require`](https://www.php.net/require) requires, [`include`](http://www.php.net/include) includes. According to the manual: > require() is identical to include() except upon failure it will produce a fatal E\_ERROR level error. In other words, it will halt the script whereas include() only emits a warning (E\_WARNING) which allows the script to continue.
As others have said, if "require" doesn't find the file it's looking for, execution will halt. If include doesn't find the file it's looking for, execution will continue. In general, require should be used when importing code/class/function libraries. If you attempt to call a function, instantiate a class, etc. and the definitions aren't there, Bad Things will happen. Therefore, you **require** PHP to include your file, and if it can't, you stop. Use include when you're using PHP to output content or otherwise execute code that, if it doesn't run, won't necessarily destroy later code. The classic example of this is implementing a View in a Model/View/Controller framework. Nothing new should be defined in a view, nor should it change application state. Therefore, it's ok to use include, because a failure won't break other things happening in the application. One small tangent. There's a lot of conflicting information and misinformation out there regarding performance of include vs. require vs. require\_once vs. include\_once. They perform radically differently under different situations/use-cases. This is one of those places where you really need to benchmark the difference in your own application.
What is the difference between require and include with php?
[ "", "php", "include", "" ]
I've got code that looks like this: ``` for (std::list<item*>::iterator i = items.begin(); i != items.end(); i++) { bool isActive = (*i)->update(); //if (!isActive) // items.remove(*i); //else other_code_involving(*i); } items.remove_if(CheckItemNotActive); ``` I'd like to remove inactive items immediately after updating them, in order to avoid walking the list again. But if I add the commented-out lines, I get an error when I get to `i++`: "List iterator not incrementable". I tried some alternates which didn't increment in the for statement, but I couldn't get anything to work. What's the best way to remove items as you are walking a std::list?
You have to increment the iterator first (with i++) and then remove the previous element (e.g., by using the returned value from i++). You can change the code to a while loop like so: ``` std::list<item*>::iterator i = items.begin(); while (i != items.end()) { bool isActive = (*i)->update(); if (!isActive) { items.erase(i++); // alternatively, i = items.erase(i); } else { other_code_involving(*i); ++i; } } ```
You want to do: ``` i= items.erase(i); ``` That will correctly update the iterator to point to the location after the iterator you removed.
Can you remove elements from a std::list while iterating through it?
[ "", "c++", "list", "std", "stdlist", "" ]
Similar to [How can I access the ServletContext from within a JAX-WS web service?](https://stackoverflow.com/questions/261348/how-can-i-access-the-servletcontext-from-within-a-jax-ws-web-service), is there a way to access the ApplicationContext more easily than this? ``` import javax.annotation.Resource; import javax.jws.WebService; import javax.servlet.ServletContext; import javax.xml.ws.WebServiceContext; import javax.xml.ws.handler.MessageContext; import org.springframework.web.context.WebApplicationContext; import org.springframework.web.context.support.WebApplicationContextUtils; @WebService public class MyWebService { // boilerplate code begins :( @Resource private WebServiceContext context; private WebApplicationContext webApplicationContext = null; /** * @return * @throws IllegalStateException */ private WebApplicationContext getWebApplicationContext() throws IllegalStateException { if (webApplicationContext != null) return webApplicationContext; ServletContext servletContext = (ServletContext) context.getMessageContext().get( MessageContext.SERVLET_CONTEXT); webApplicationContext = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext); return webApplicationContext; } } ```
I don't think that the web service should have to know about web or servlet contexts or its application context. I don't see why it should have to know any of that. Shouldn't it be far more passive? Inject what it needs and let it do its work. The service interactions with a client should be based on a contract defined up front. If it has to get unknown values from a context of some kind, how will clients know what needs to be set or how to set it? I'd go further and say that a web service should be a wrapper for a Spring service interface. It's just one more choice among all the possible ways to expose it. Your web service should do little more than marshal and unmarshal the XML request/response objects and collaborate with Spring services.
``` import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.web.context.support.SpringBeanAutowiringSupport; @WebService( endpointInterface = "Bla", targetNamespace = "http://bla/v001", wsdlLocation = "WEB-INF/wsdl/bla.wsdl", serviceName = "BlaService", portName = "BlaPort") public class BlaWs extends SpringBeanAutowiringSupport implements BlaPort { @Autowired @Qualifier("dao") private Dao dao; ... } ```
How can I access the ApplicationContext from within a JAX-WS web service?
[ "", "java", "web-services", "spring", "jax-ws", "" ]
I would like to run a search with MSSQL Full text engine where given the following user input: "Hollywood square" I want the results to have both Hollywood and square[s] in them. I can create a method on the web server (C#, ASP.NET) to dynamically produce a sql statement like this: ``` SELECT TITLE FROM MOVIES WHERE CONTAINS(TITLE,'"hollywood*"') AND CONTAINS(TITLE, '"square*"') ``` Easy enough. HOWEVER, I would like this in a stored procedure for added speed benefit and security for adding parameters. Can I have my cake and eat it too?
I agree with the above; look into AND clauses ``` SELECT TITLE FROM MOVIES WHERE CONTAINS(TITLE,'"hollywood*" AND "square*"') ``` However you shouldn't have to split the input sentence; you can use a variable ``` SELECT TITLE FROM MOVIES WHERE CONTAINS(TITLE,@parameter) ``` By the way, CONTAINS searches for the exact term, while FREETEXT searches for any term in the phrase.
Have you tried using the AND logical operator in your string? I pass in a raw string to my sproc and stuff 'AND' between the words. <http://msdn.microsoft.com/en-us/library/ms187787.aspx>
Best way to implement a stored procedure with full text search
[ "", "sql", "sql-server", "stored-procedures", "full-text-search", "dynamic-sql", "" ]
I have a game that's based on a 25x20 HTML table (the game board). Every 3 seconds the user can "move," which sends an AJAX request to the server, at which time the server rerenders the entire HTML table and sends it to the user. This was easy to write, but it wastes a lot of bandwidth. Are there any libraries, client (preferably jquery) or server-side, that help send differential instead of full updates for large tables? Usually only 5-10 tiles change on a given reload, so I feel like I could cut bandwidth use by an order of magnitude by sending just those tiles instead of all 500 every 3 seconds. I'm also open to "you idiot, why are you using HTML tables"-type comments if you can suggest a better alternative. For example are there any CSS/DOM manipulation techniques I should be considering instead of using an HTML table? Should I use a table but give each td coordinates for an id (like "12x08") and then use jquery to replace cells by id? A clarification: the tiles are text, not images.
You can model your game board as a multidimensional javascript array: ``` [[x0, x1, x2, x3 ... xn], ..... .....] ``` Each entry is an array representing a row. Each cell holds the numerical value of the game piece/square. This model can be the "contract" you send to the server via ajax as JSON. The server calculates the same array and sends it back to the UI. You can render that array into a table, divs or whatever you like. Prototype.js and jQuery make creating dhtml super easy. This array format will be much smaller than a whole HTML response laden with markup. It also gives you freedom to render the board in whatever way you like. You can further compress this format and just send the deltas. For example: save the coordinates of tiles changed by the user and send those to the server: ``` [(x1, y2),.....(xn, yn)] ``` Or you can do it the other way around: send the full model array to the server, and have the server calculate the deltas. Check out Sponty, and watch the ajax traffic every few minutes or so; we do something very similar: [http://www.thesponty.com/](http://www.thesponty.com/stackoverflow) The client sends the full model to the server, and the server sends the diffs.
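The server-side delta computation described above can be sketched in a few lines of Python. The helper name and the `(x, y, value)` tile encoding are assumptions for illustration, not part of the game:

```python
def board_delta(old, new):
    """Return [(x, y, value), ...] for every tile that changed
    between two equally-sized board states."""
    return [
        (x, y, new[y][x])
        for y in range(len(new))
        for x in range(len(new[y]))
        if old[y][x] != new[y][x]
    ]

old = [[0, 0, 0],
       [0, 1, 0]]
new = [[0, 2, 0],
       [0, 1, 3]]

print(board_delta(old, new))  # [(1, 0, 2), (2, 1, 3)]
```

Serializing only that list as JSON sends a handful of tuples per move instead of the full 25x20 grid.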
If you know the state between refreshes on the server side (see comment on question), you can send the data using JSON like so (not sure about exact syntax): ``` [ { x: 3, y: 5, class: "asdf", content: "1234" }, { x: 6, y: 5, class: "asdf", content: "8156" }, { x: 2, y: 2, class: "qwer", content: "1337" } ] ``` Compact that (remove extra whitespace, etc.), gzip it, and send it to your Javascript. Surprisingly, the Javascript code to read this isn't that complicated (simply DOM manipulations).
Differential AJAX updates for HTML table?
[ "", "jquery", "python", "html", "dhtml", "" ]
``` std::string sAttr(""); sAttr = sAttr+VAL_TAG_OPEN+sVal->c_str()+VAL_TAG_CLOSE; ``` Elsewhere in the code I have defined ``` const char VAL_TAG_OPEN[] = "<value>"; ``` `sVal` is a variable that is retrieved off of an array of string pointers. This works fine on most systems, Windows and Linux. However, at a customer site, which I believe runs a version of Linux on which we had done some extensive testing, it produces a result as if I had never used `VAL_TAG_OPEN` and `VAL_TAG_CLOSE`. The result I receive is as if I had written ``` sAttr = sAttr+sVal->c_str(); ``` What's going on? Does std::string concatenation vary across runtimes?
Why the `->c_str()`? If `sVal` is a `std::string`, try removing this call. Remember that the order of evaluation is undefined, so you may end up adding pointers instead of concatenating strings, because `VAL_TAG_OPEN`, `sVal->c_str()` and `VAL_TAG_CLOSE` are all plain C strings. I suggest you use the addition assignment operator `+=`, e.g. : ``` sAttr += VAL_TAG_OPEN; sAttr += *sVal; /* sVal->c_str() ? */ sAttr += VAL_TAG_CLOSE; ``` (which should be faster anyway).
I don't think it's the order of evaluation that is causing the issue. It's because of the constant char arrays at the beginning and end ``` const char VAL_TAG_OPEN[] = "<value>"; const char VAL_TAG_CLOSE[] = "</value>" ``` The concatenation operator treated VAL\_TAG\_OPEN and VAL\_TAG\_CLOSE as if they were not null-terminated strings. Hence the optimizer just ignored them, treating them as garbage. ``` sAttr += std::string(VAL_TAG_OPEN); sAttr += *sVal; sAttr += std::string(VAL_TAG_CLOSE); ``` This does solve it.
Runtime dependency for std::string concatenation
[ "", "c++", "windows", "linux", "string", "concatenation", "" ]
How do I open a file with the default associated program in Java? (for example a movie file)
You can use [`Desktop.getDesktop().open(File file)`](http://java.sun.com/javase/6/docs/api/java/awt/Desktop.html#open(java.io.File)). See the following question for other options: "[[Java] How to open user system preffered editor for given file?](https://stackoverflow.com/questions/526037/java-how-to-open-user-system-preffered-editor-for-given-file)"
A few examples of opening a file with its default program ``` Example 1 : Runtime.getRuntime().exec("rundll32.exe shell32.dll ShellExec_RunDLL " + fileName); Example 2 : Runtime.getRuntime().exec("rundll32.exe url.dll FileProtocolHandler " + fileName); Example 3 : Desktop.getDesktop().open(fileName); alternative... Runtime.getRuntime().exec(fileName.toString()); Runtime.getRuntime().exec("cmd.exe /c Start " + fileName); Runtime.getRuntime().exec("powershell.exe /c Start " + fileName); Runtime.getRuntime().exec("explorer.exe " + fileName); Runtime.getRuntime().exec("rundll32.exe SHELL32.DLL,OpenAs_RunDLL " + fileName); ``` Or.... ``` public static void openFile(int selecType, File fileName) throws Exception { String[] commandText = null; if (!fileName.exists()) { JOptionPane.showMessageDialog(null, "File not found", "Error", 1); } else { switch (selecType) { case 0: //Default function break; case 1: commandText = new String[]{"rundll32.exe", "shell32.dll", "ShellExec_RunDLL", fileName.getAbsolutePath()}; break; case 2: commandText = new String[]{"rundll32.exe", "url.dll", "FileProtocolHandler", fileName.getAbsolutePath()}; break; case 3: commandText = new String[]{fileName.toString()}; break; case 4: commandText = new String[]{"cmd.exe", "/c", "Start", fileName.getAbsolutePath()}; break; case 5: commandText = new String[]{"powershell.exe", "/c", "Start", fileName.getAbsolutePath()}; break; case 6: commandText = new String[]{"explorer.exe", fileName.getAbsolutePath()}; break; case 7: commandText = new String[]{"rundll32.exe", "shell32.dll", "OpenAs_RunDLL", fileName.getAbsolutePath()}; //File open With break; } if (selecType == 0) { Desktop.getDesktop().open(fileName); } else if (selecType < 8) { Process runFile = new ProcessBuilder(commandText).start(); runFile.waitFor(); } else { String errorText = "\nChoose a number from 1 to 7\n\nExample : openFile(1,\"" + fileName + "\")\n\n"; System.err.println(errorText); JOptionPane.showMessageDialog(null, errorText, "Error", 1); } } } 
```
How to open a file with the default associated program
[ "", "java", "desktop", "" ]
This is kind of a branch off of my [other question](https://stackoverflow.com/questions/582550/c-begin-endreceive-how-do-i-read-large-data). Read it if you like, but it's not necessary. Basically, I realized that in order to effectively use C#'s BeginReceive() on large messages, I need to either (a) read the packet length first, then read exactly that many bytes or (b) use an end-of-packet delimiter. My question is, are either of these present in protocol buffers? I haven't used them yet, but going over the documentation it doesn't seem like there is a length header or a delimiter. If not, what should I do? Should I just build the message then prefix/suffix it with the length header/EOP delimiter?
You need to include the size or an end marker in your protocol. Nothing is built into stream-based sockets (TCP/IP) other than supporting an indefinite stream of octets arbitrarily broken up into separate packets (and packets can be split in transit as well). A simple approach would be for each "message" to have a fixed-size header, including both a protocol version and a payload size and any other fixed data. Then the message content (payload). Optionally a message footer (fixed size) could be added with a checksum or even a cryptographic signature (depending on your reliability/security requirements). Knowing the payload size allows you to keep reading a number of bytes that will be enough for the rest of the message (and if a read completes with less, doing another read for the remaining bytes until the whole message has been received). Having an end-of-message indicator also works, but you need to define how to handle your message containing that same octet sequence...
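The fixed-size-header idea can be sketched in Python using a 4-byte big-endian length prefix. The header layout is an assumption; a real protocol would add the version field and checksum mentioned above:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    """Split one framed message off the front of a byte stream,
    returning (message, remaining_bytes)."""
    (length,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + length], stream[4 + length:]

# Two messages concatenated, as they might arrive on a socket:
data = frame(b"hello") + frame(b"world!")
msg, rest = unframe(data)
print(msg)    # b'hello'
msg2, rest = unframe(rest)
print(msg2)   # b'world!'
```

On a real socket you would first loop until 4 header bytes have arrived, decode the length, then loop again until exactly that many payload bytes have been read.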
Apologies for arriving late at the party. I am the author of protobuf-net, one of the C# implementations. For network usage, you should consider the "[De]SerializeWithLengthPrefix" methods - that way, it will automatically handle the lengths for you. There are examples in the source. I won't go into huge detail on an old post, but if you want to know more, add a comment and I'll get back to you.
How to detect when a Protocol Buffer message is fully received?
[ "", "c#", "networking", "sockets", "protocol-buffers", "" ]
I have a Java program which prints to a HTML file. Another method reads in this file and prints it to the editor pane. My problem is that the color orange doesn't work. My code is as follows: ``` print.println("<html><p><br><font color=\"orange\">"); ``` When I pass this html file into the editor pane all the colors are right. Red, yellow and blue all work but orange doesn't (I have also tried replacing orange with pink etc but no luck). If I change the orange to say blue it works fine. When I open it up as a HTML file on the web the text that I want in orange is in orange and works fine. Help? Also all the indenting is fine but when it gets near the end, it moves to the right. For Example: ``` abc 0 bcd 1 vde 1 def 2 ... swe 7 dfr 8 ..................................................... ert 3 ``` Where ert 3 is the last string that needs to be printed. But it manages to fit all the text in the box. --- *Edit*: Thanks for the help for the first part. Any ideas about the bottom part?
Orange is not working because it's not one of the default HTML color names. <http://www.handleidinghtml.nl/divers/kleuren/schermafdrukken/16kleuren.gif> shows the 16 default colors. Most browsers support other colors as well, but Java only supports HTML 3.2. Read all about HTML and Swing in <http://java.sun.com/docs/books/tutorial/uiswing/components/html.html>
The JEditorPane does not support all the things a modern browser supports. [Soon](http://forums.sun.com/thread.jspa?threadID=5327759), there will be a [JWebPane](http://weblogs.java.net/blog/ixmal/archive/2008/05/introducing_jwe.html) component, which will provide full WebKit-based HTML support.
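Given the HTML 3.2 limitation described above, a common workaround is to emit a hex RGB value instead of the color name when generating the file. A small sketch (the helper name is invented; `#FFA500` is the standard orange):

```java
public class OrangeFix {

    // Swing's HTML 3.2 renderer does not know the name "orange", but it does
    // accept hex RGB values, so substitute #FFA500 (standard orange).
    static String fontTag(String colorName) {
        String value = colorName.equalsIgnoreCase("orange") ? "#FFA500" : colorName;
        return "<html><p><br><font color=\"" + value + "\">";
    }

    public static void main(String[] args) {
        System.out.println(fontTag("orange")); // emits the hex value
        System.out.println(fontTag("blue"));   // supported named colors pass through
    }
}
```

In the question's code, `print.println(fontTag("orange"));` would replace the hard-coded tag; whether a given named color renders depends on Swing's HTML support, so hex values are the safer default.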
Reading HTML file to view in JEditorPane
[ "", "java", "html", "user-interface", "swing", "" ]
I am building a multi-user web application. Each user can have their own site under my application. I am considering how to allow users to modify templates without security problems. I have evaluated some Python template engines. For example, Genshi is a pretty wonderful template engine, but it might be dangerous to allow users to modify Genshi templates. It has a syntax like this: ``` <?python ?> ``` This syntax allows you to run any Python code you want. I notice that it seems this can be shut down by passing some parameter, but there are still a lot of potential problems. For example, users can access built-in functions, as well as methods of passed variables. For example, if I pass an ORM object to the template, it might contain methods and variables that I don't want users to touch. Maybe like this: ``` site.metadata.connection.execute("drop table xxx") ``` So my question is: how can I allow users to modify the templates of their sites without security problems? Any Python template engine can be used. Thanks.
Look at the [Django template engine](http://docs.djangoproject.com/en/dev/topics/templates/#topics-templates). It does not support execution of arbitrary Python code, and all accessible variables must be passed into the template explicitly. This should be a pretty good foundation for building user-customizable pages. Beware that you'll still need to handle occasional syntax errors from your users.
Jinja2 is a Django-ish templating system that has a sandboxing feature. I've never attempted to use the sandboxing, but I quite like Jinja2 as an alternative to Django's templates. It still promotes separation of template from business logic, but has more Pythonic calling conventions, namespacing, etc. [Jinja2 Sandbox](http://jinja.pocoo.org/2/documentation/sandbox)
How can I provide safety template for user to modify with python?
[ "", "python", "web", "templates", "" ]
I'm modifying [the code in this tutorial](http://www.switchonthecode.com/tutorials/wcf-tutorial-events-and-callbacks) to build some basic subscribe/push WCF client/server classes, and I've just hit a bit of a brick wall. The server class in the tutorial is created using the following code: ``` class Program { static void Main(string[] args) { using (ServiceHost host = new ServiceHost( typeof(StringReverser), new Uri[]{ new Uri("net.pipe://localhost") })) { host.AddServiceEndpoint(typeof(IStringReverser), new NetNamedPipeBinding(), "PipeReverse"); host.Open(); Console.WriteLine("Service is available. " + "Press <ENTER> to exit."); Console.ReadLine(); host.Close(); } } } ``` Which I assume publishes an instance of **StringReverser**. My problem is that I need a reference to that instance so I can call a method on it to push data back to the client. In the tutorial the server just replies to the client using a callback method; instead, I'm storing a reference to the client in a list of subscribers. When I need to push data back to the clients, I need a reference to the service object so I can actually perform the callback. Is there a way to publish a service using WCF that lets you have a reference to the service object? Or can I get a reference to the service object from the **host** object? Any help would be appreciated...
You can use the singleton pattern in your StringReverser class and pass the instance of it to the ServiceHost constructor: ``` ServiceHost host = new ServiceHost( StringReverser.Instance, new Uri[]{new Uri("net.pipe://localhost")} ); ```
I agree that Julien's answer is the correct approach, but it is incomplete (at least for .NET 4.5). After you pass in the instance of the service, you have to set the instance context mode for the `ServiceHost` to Single. If you don't do that, you'll get an error when the `ServiceHost` `Open()` method is called. The way to set the context mode was not at all obvious. Here is a fragment from one of my programs, taken from a different SO answer: ``` var baseAddress = new Uri("http://localhost:15003/MockGateway"); using (var host = new ServiceHost(new MockGatewayService(), baseAddress)) { // Since we are passing an instance of the service into ServiceHost (rather // than passing in the type) we have to set the context mode to single. var behavior = host.Description.Behaviors.Find<ServiceBehaviorAttribute>(); behavior.InstanceContextMode = InstanceContextMode.Single; // Continue to use the service here. If you ever need to get a reference // to the service object you can do so with... MockGatewayService myService = host.SingletonInstance as MockGatewayService; // ... } ```
How do I get a reference to a WCF service object from the code that creates it?
[ "", "c#", "wcf", "" ]
I have an HTML file that needs to refresh every 10 seconds, so I have this line in the HTML: ``` meta http-equiv="Refresh" content="10; url=Default.aspx" ``` In my C# code I have this: ``` public partial class _Default : System.Web.UI.Page { public static List<String> Active_User_List = new List<String>(), User_List_To_Remove; public static int Refresh_In_Seconds = 10; .. } ``` How do I replace the 10 with the variable "Refresh_In_Seconds"?
Use this: ``` <meta http-equiv="Refresh" content="<%= Refresh_In_Seconds %>; url=Default.aspx" /> ```
Try: ``` <meta http-equiv="Refresh" content="<%=Refresh_In_Seconds%>; url=default.aspx" /> ``` FYI, that should probably not be a static member.
C# : How to put variable into meta tag
[ "", "c#", ".net", "" ]
How do I improve the performance of the following situation? I have an application that accepts multiple connections across a network. The application is written in C# and accepts Socket connections. Every five minutes, the application needs to perform a function call that updates the application and reports information back to the sockets. Truncated code follows: ``` ... { new Thread(Loop).Start(); } public void Loop() { ... while (true) { ... string line = user.Read(); ... } ... } ``` The above code is what is run when a Socket is connected to the server. The following code is what is run every five minutes. ``` ... { new Thread(TryTick).Start(); } public void TryTick() { while(true) { Tick(); Thread.Sleep(new TimeSpan(0, 5, 0)); } } ``` Tick() does some File I/O operations as well as parsing a very limited (under 1MB) set of XML data. However, this code will tax my processor more than I had thought it would. Once a tick occurs, the system seems to grab an entire Core of my dual core development machine and doesn't let go. This seems to be something fairly simple, but perhaps I am doing it the easy way instead of the fast way. This system is meant to handle up to 100 users, much more data and have a response time of under 1 second during load, so performance is an issue.
Have you tried compiling in release mode to see if there is much difference in speed?
You really need to profile this code before deciding what to do... if you don't have access to a profiler, add simple instrumentation to determine where the bottleneck is. Start with timing Tick() to verify it is the source of the problems (as opposed to somewhere else in your program), and then repeat for sub-sections of Tick() as needed. You really need to profile and measure here... making presumptions about where your perf problem lies could lead you to waste a lot of time.
Improving performance of C# code
[ "", "c#", "performance", "multithreading", "" ]
A website returns the following JSON response, how would I consume it (in javascript)? ``` [{"ID1":9996,"ID2":22}] ``` Is JSON simply returning an array?
We use: ``` function evalResponse(response) { var xyz123 = null; eval("xyz123 = " + response); return xyz123; } ``` An alternative method is to simply use: ``` var myObj = eval(response); ``` Basically, you have to call `eval()` on the response to create a javascript object. This is because the response itself is just a string when you get it back from your AJAX call. After you `eval` it, you have an object that you can manipulate. ``` function myCallback(response) { var myObj = evalResponse(response); alert(myObj.ID1); } ``` You could use a javascript library to handle this for you. Or, you could try to parse the string yourself. `eval()` has its own problems, but it works.
If you use <http://www.JSON.org/json2.js> you can use its method JSON.parse to retrieve the JSON *string* as an object (without the use of eval, which is considered evil), so in this case you would use: ``` var nwObj = JSON.parse('[{"ID1":9996,"ID2":22}]'); alert(nwObj[0].ID1); //=> 9996 ```
New to JSON, what can I do with this json response
[ "", "javascript", "json", "" ]
**EDIT:** *As of Java 8, static methods are now allowed in interfaces.* Here's the example: ``` public interface IXMLizable<T> { static T newInstanceFromXML(Element e); Element toXMLElement(); } ``` Of course this won't work. But why not? One of the possible issues would be, what happens when you call: ``` IXMLizable.newInstanceFromXML(e); ``` In this case, I think it should just call an empty method (i.e. {}). All subclasses would be forced to implement the static method, so they'd all be fine when calling the static method. So why isn't this possible? **EDIT:** I guess I'm looking for an answer that's deeper than "because that's the way Java is". Is there a particular technological reason why static methods can't be overridden? That is, why did the designers of Java decide to make instance methods overridable but not static methods? **EDIT:** The problem with my design is I'm trying to use interfaces to enforce a coding convention. That is, the goal of the interface is twofold: 1. I want the IXMLizable interface to allow me to convert classes that implement it to XML elements (using polymorphism, works fine). 2. If someone wants to make a new instance of a class that implements the IXMLizable interface, they will always know that there will be a newInstanceFromXML(Element e) static constructor. Is there any other way to ensure this, other than just putting a comment in the interface?
# Java 8 permits static interface methods With Java 8, interfaces *can* have static methods. They can also have concrete instance methods, but not instance fields. There are really two questions here: 1. Why, in the bad old days, couldn't interfaces contain static methods? 2. Why can't static methods be overridden? # Static methods in interfaces There was no strong technical reason why interfaces couldn't have had static methods in previous versions. This is [summed up nicely by the poster](https://stackoverflow.com/questions/129267/why-no-static-methods-in-interfaces-but-static-fields-and-inner-classes-ok/135722#135722) of a duplicate question. Static interface methods were initially considered as [a small language change,](https://web.archive.org/web/20150515204937/https://blogs.oracle.com/darcy/entry/project_coin) and then there was [an official proposal](https://mail.openjdk.org/pipermail/coin-dev/2009-March/000127.html) to add them in Java 7, but it was later [dropped due to unforeseen complications.](https://bugs.java.com/bugdatabase/view_bug?bug_id=4093687) Finally, Java 8 introduced static interface methods, as well as override-able instance methods with a default implementation. They still can't have instance fields though. These features are part of the lambda expression support, and you can read more about them in [Part H of JSR 335.](http://jcp.org/en/jsr/detail?id=335) # Overriding static methods The answer to the second question is a little more complicated. Static methods are resolvable at compile time. Dynamic dispatch makes sense for instance methods, where the compiler can't determine the concrete type of the object, and, thus, can't resolve the method to invoke. But invoking a static method requires a class, and since that class is known *statically*—at compile time—dynamic dispatch is unnecessary. A little background on how instance methods work is necessary to understand what's going on here. 
I'm sure the actual implementation is quite different, but let me explain my notion of method dispatch, which models observed behavior accurately. Pretend that each class has a hash table that maps method signatures (name and parameter types) to an actual chunk of code to implement the method. When the virtual machine attempts to invoke a method on an instance, it queries the object for its class and looks up the requested signature in the class's table. If a method body is found, it is invoked. Otherwise, the parent class of the class is obtained, and the lookup is repeated there. This proceeds until the method is found, or there are no more parent classes—which results in a `NoSuchMethodError`. If a superclass and a subclass both have an entry in their tables for the same method signature, the subclass's version is encountered first, and the superclass's version is never used—this is an "override". Now, suppose we skip the object instance and just start with a subclass. The resolution could proceed as above, giving you a sort of "overridable" static method. The resolution can all happen at compile-time, however, since the compiler is starting from a known class, rather than waiting until runtime to query an object of an unspecified type for its class. There is no point in "overriding" a static method since one can always specify the class that contains the desired version. --- # Constructor "interfaces" Here's a little more material to address the recent edit to the question. It sounds like you want to effectively mandate a constructor-like method for each implementation of `IXMLizable`. Forget about trying to enforce this with an interface for a minute, and pretend that you have some classes that meet this requirement. How would you use it? ``` class Foo implements IXMLizable<Foo> { public static Foo newInstanceFromXML(Element e) { ... } } Foo obj = Foo.newInstanceFromXML(e); ``` Since you have to explicitly name the concrete type `Foo` when "constructing" the new object, the compiler can verify that it does indeed have the necessary factory method. And if it doesn't, so what? If I can implement an `IXMLizable` that lacks the "constructor", and I create an instance and pass it to your code, it *is* an `IXMLizable` with everything the interface requires. *Construction is part of the implementation,* not the interface. Any code that works successfully with the interface doesn't care about the constructor. Any code that cares about the constructor needs to know the concrete type anyway, and the interface can be ignored.
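For reference, a minimal sketch of what Java 8's static interface methods look like in practice; the types are invented, standing in for the question's `newInstanceFromXML` factory idea:

```java
interface Greeter {
    // Legal since Java 8: a static method declared directly on the interface.
    // It is invoked as Greeter.of(...) and cannot be overridden by implementors.
    static Greeter of(String name) {
        return new SimpleGreeter(name);
    }

    String greet();
}

class SimpleGreeter implements Greeter {
    private final String name;
    SimpleGreeter(String name) { this.name = name; }
    @Override public String greet() { return "hello, " + name; }
}

public class StaticInterfaceDemo {
    public static void main(String[] args) {
        // The static factory is resolved against the interface at compile time.
        Greeter g = Greeter.of("world");
        System.out.println(g.greet());
    }
}
```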
This was already asked and answered, [here](https://stackoverflow.com/questions/129267/why-no-static-methods-in-interfaces-but-static-fields-and-inner-classes-ok/129934#129934) To duplicate my answer: There is never a point to declaring a static method in an interface. They cannot be executed by the normal call MyInterface.staticMethod(). If you call them by specifying the implementing class MyImplementor.staticMethod() then you must know the actual class, so it is irrelevant whether the interface contains it or not. More importantly, static methods are never overridden, and if you try to do: ``` MyInterface var = new MyImplementingClass(); var.staticMethod(); ``` the rules for static say that the method defined in the declared type of var must be executed. Since this is an interface, this is impossible. The reason you can't execute "result=MyInterface.staticMethod()" is that it would have to execute the version of the method defined in MyInterface. But there can't be a version defined in MyInterface, because it's an interface. It doesn't have code by definition. While you can say that this amounts to "because Java does it that way", in reality the decision is a logical consequence of other design decisions, also made for very good reason.
Why can't I define a static method in a Java interface?
[ "", "java", "interface", "static-methods", "" ]
I've got a report being built from a dataset. The dataset uses the Sort property to order the data. I know that I can create a sort expression like this: "field desc, field2 asc" But what I need now is a way to do a custom sort. In SQL, I can perform a custom sort by doing something like this: ``` order by case when field = 'Some Value' then 0 when field = 'Another Value' then 1 else 2 end ``` To basically re-define my sort (i.e., Some Value comes before Another Value). Is it possible to do something similar as a sort expression against a DataView?
Ok, I just whipped this up real quick, and didn't do all the necessary error handling and null checking, but it should give you an idea and should be enough to get you started: ``` public static class DataTableExtensions { public static DataView ApplySort(this DataTable table, Comparison<DataRow> comparison) { DataTable clone = table.Clone(); List<DataRow> rows = new List<DataRow>(); foreach (DataRow row in table.Rows) { rows.Add(row); } rows.Sort(comparison); foreach (DataRow row in rows) { clone.Rows.Add(row.ItemArray); } return clone.DefaultView; } } ``` Usage: ``` DataTable table = new DataTable(); table.Columns.Add("IntValue", typeof(int)); table.Columns.Add("StringValue"); table.Rows.Add(11, "Eleven"); table.Rows.Add(14, "Fourteen"); table.Rows.Add(10, "Ten"); table.Rows.Add(12, "Twelve"); table.Rows.Add(13, "Thirteen"); ``` //Sort by StringValue: ``` DataView sorted = table.ApplySort((r, r2) => { return ((string)r["StringValue"]).CompareTo(((string)r2["StringValue"])); }); ``` Result: 11 Eleven 14 Fourteen 10 Ten 13 Thirteen 12 Twelve //Sort by IntValue: ``` DataView sorted = table.ApplySort((r, r2) => { return ((int)r["IntValue"]).CompareTo(((int)r2["IntValue"])); }); ``` Result: 10 Ten 11 Eleven 12 Twelve 13 Thirteen 14 Fourteen EDIT: Changed it to an extension method. Now in your lambda (or you can create a full-blown Comparison method) you can do any kind of custom sorting logic that you need. Remember, -1 is less than, 0 is equal to, and 1 is greater than.
I like BFree's answer, though I'd worry about the risk that my code would end up updating the cloned table rather than the real one. (I haven't thought through it enough to know if that's actually an issue if you're only using the extension method in a `DataView`.) You can do this on the original `DataTable` by adding a calculated `DataColumn` to it (using the [`Expression`](http://msdn.microsoft.com/en-us/library/system.data.datacolumn.expression.aspx) property) and then sorting on its value. In your case it would be something like: ``` DataColumn c = myTable.Columns.Add("Sort", typeof(int)); c.Expression = "iif(field='SomeValue', 0, iif(field='AnotherValue', 1, 2))"; ``` which sorts `SomeValue` first, `AnotherValue` second, and everything else after that.
DataView.Sort - more than just asc/desc (need custom sort)
[ "", "c#", "asp.net", "" ]
The function `somefunction()` takes a triple pointer as an argument. ``` int somefunction(tchar ***returnErrors); ``` How to allocate memory for the `returnErrors` parameter?
*At a guess...* You can think of returnErrors as a pointer to an array of strings. 1. The first \* implies a pointer to an array of tchar (or a single string of tchars) 2. The second \* implies a pointer to an array of strings. 3. The last \* is so you can change returnErrors and pass back the new memory. To allocate memory for this (silly example, allocating memory inside SomeFunction) ``` tchar ** errors; // Oops it appears I need to pass back two error strings (+ 1 for null on end, so we know there are no more - thanks tlholaday) errors = malloc(sizeof(tchar*) * 3); // the first string has length 20 (+ 1 for null terminator) errors[0] = malloc(sizeof(tchar) * 21); // the second string has length 30 (+ 1 for null terminator) errors[1] = malloc(sizeof(tchar) * 31); // ensure the last is null errors[2] = 0; *returnErrors = errors; ``` **NB: The calling function needs to know that SomeFunction has allocated memory and needs to free it.**
Are you *implementing* somefunction or *calling* somefunction? If you are calling somefunction, it is likely that somefunction will be allocating the memory, so all you need to do is pass it a safe place to scribble and clean up afterwards. ``` tchar **theErrors = 0; // a vector of tchar vectors. somefunction(&theErrors); if (theErrors) { // use the error values // free the memory somehow - this is for a null-terminated convention tchar **victim = theErrors; while (*victim) delete[](*victim++); delete[] theErrors; } ``` Note: I'm using 0 and delete[] instead of NULL and free because the tag says c++.
Allocating memory for triple pointer
[ "", "c++", "c", "pointers", "" ]
I'm relatively new to JUnit, and I was writing a few of my first tests today. For some particular method, I wanted to pass random values (all of which are within the correct range). If the method fails for any reason, I want to know which value caused it to fail. So what's the recommended way of doing this? (Or is it bad to use random values in JUnit tests?)
If you really want to use random values then simply include the value in use in the message argument of the assert methods. Then if an assert fails, the input value will be present and you can investigate why it caused a problem. This is [Fuzz Testing](http://en.wikipedia.org/wiki/Fuzz_testing) and is a powerful technique, but it is most useful when you *don't* have the available source code or when testing a system with complex internal state and multiple interactions. A more useful type of testing for you may be [white box testing](http://en.wikipedia.org/wiki/White_box_testing), where test inputs are deliberately selected to cover the various classes of input you might get. [JTest](http://www.parasoft.com/jsp/products/article.jsp?articleId=839) appears to be an automated tool for this in Java (MS Research supplies [PEX](http://research.microsoft.com/en-us/projects/Pex/) for C#). Simply using a coverage tool and validating that you cover the relevant paths is normally sufficient if doing it by hand, though the boundary cases supplied by the automated tools are often instructive.
You can try to use: <http://www.openfuture.de/Log4Unit/> for logging, **but** I would recommend against random values for unit tests as they are supposed to be repeated. If you want to test a lot of values just use a for loop and some modifications to the index value, which is easily repeated. If you think about it there is really no situation where it would be more beneficial to use random values than "hard coded" ones. If you want a good spread over a value range you can use a function or use random values with a fixed seed (to get the same numbers). If a test fails you want to be able to fix it and run the test again. That's the problem with random numbers in unit tests.
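The fixed-seed idea above can be sketched in plain Java: the same seed always yields the same input sequence, so a failure is reproducible, and embedding the input in the failure message identifies the offending value. The method under test here is a made-up stand-in:

```java
import java.util.Random;

public class SeededInputCheck {

    // A made-up method under test: clamp a value into the 0..255 range.
    static int clampToByte(int v) {
        return Math.max(0, Math.min(255, v));
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed: the "random" inputs are reproducible
        for (int i = 0; i < 1000; i++) {
            int input = rng.nextInt(1000) - 500;
            int result = clampToByte(input);
            // Embed the input in the failure message so a bad value is identifiable.
            if (result < 0 || result > 255) {
                throw new AssertionError(
                        "clampToByte failed for input " + input + " -> " + result);
            }
        }
        System.out.println("all seeded inputs passed");
    }
}
```

In JUnit the same effect comes from passing the input as the message argument of `assertEquals`/`assertTrue`; the explicit `AssertionError` here just keeps the sketch framework-free.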
What is the recommended way to log data that caused errors in JUnit?
[ "", "java", "unit-testing", "logging", "junit", "" ]
After providing some answers here, and reading some comments, it would seem that, in practice, IOException is never thrown on close for file I/O. Are there any cases in which calling close on a Stream/Reader/Writer actually throws an IOException? If an exception is actually thrown, how should it be dealt with?
For files, you may not see IOException thrown often on close(), but you'll definitely see it for non-File I/O like closing sockets to the network. [Here's an example](https://bugs.java.com/bugdatabase/view_bug?bug_id=6204940) of a Java bug where closing a UDP socket eventually caused an IOException to be thrown.
I have found two cases: * Losing the network connection when there is still data in the buffer to be flushed. * Having the file system fill up (or reaching your user limit for file size) when there is still data in the buffer to be flushed. Both of those examples depend on something happening while there is still data in the buffer. Close flushes the buffer before the file is closes, so if there is an error writing the data to the file it throws an IOException. If you execute the following code passing it the name of a file to create on a network drive, and then before you press the enter key unplug your network cable, it will cause the program to throw an IOException in close. ``` import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.io.Writer; public class Test { public static void main(final String[] argv) { final File file; file = new File(argv[0]); process(file); } private static void process(final File file) { Writer writer; writer = null; try { writer = new FileWriter(file); writer.write('a'); } catch(final IOException ex) { System.err.println("error opening file: " + file.getAbsolutePath()); } finally { if(writer != null) { try { try { System.out.println("Please press enter"); System.in.read(); } catch(IOException ex) { System.err.println("error reading from the keyboard"); } writer.close(); } catch(final IOException ex) { System.err.println("See it can be thrown!"); } } } } } ``` --- Since Java 7 you can use try-with-resources to get out of this mess (removed explicit exception generation code for the `close()` operation): ``` private static void process(final File file) { try (final Writer writer = new FileWriter(file)) { writer.write('a'); } catch (final IOException e) { // handle exception } } ``` this will auto-magically handle the exceptions in `close()` and it performs an explicit `null` check internally.
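A related detail with the try-with-resources approach shown above: if the body throws and `close()` then also throws, the `close()` failure is not lost; it is attached to the primary exception as a suppressed exception. A self-contained sketch with a fake resource (no file system or network needed):

```java
import java.io.IOException;

public class SuppressedCloseDemo {

    // A stand-in resource whose close() always fails, mimicking a
    // flush-on-close error (full file system, dropped connection, ...).
    static class FlakyResource implements AutoCloseable {
        @Override
        public void close() throws IOException {
            throw new IOException("error flushing on close");
        }
    }

    public static void main(String[] args) {
        try {
            try (FlakyResource r = new FlakyResource()) {
                throw new RuntimeException("error in the body");
            }
        } catch (Exception e) {
            // The body's exception wins; the close() failure is suppressed, not lost.
            System.out.println("primary:    " + e.getMessage());
            for (Throwable t : e.getSuppressed()) {
                System.out.println("suppressed: " + t.getMessage());
            }
        }
    }
}
```

This is worth knowing because in the pre-Java-7 pattern from the answer above, an exception thrown inside `finally` would instead replace the original one.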
Does close ever throw an IOException?
[ "", "java", "ioexception", "" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
One solution that I have employed is to do this: 1) Create a [custom management command](http://docs.djangoproject.com/en/dev/howto/custom-management-commands/#howto-custom-management-commands), e.g. ``` python manage.py my_cool_command ``` 2) Use `cron` (on Linux) or `at` (on Windows) to run my command at the required times. This is a simple solution that doesn't require installing a heavy AMQP stack. However there are nice advantages to using something like Celery, mentioned in the other answers. In particular, with Celery it is nice to not have to spread your application logic out into crontab files. However the cron solution works quite nicely for a small to medium sized application and where you don't want a lot of external dependencies. EDIT: In later versions of Windows the `at` command is deprecated (Windows 8, Server 2012 and above). You can use `schtasks.exe` for the same purpose. \*\*\*\* UPDATE \*\*\*\* This is the new [link](https://docs.djangoproject.com/en/2.2/howto/custom-management-commands/#howto-custom-management-commands) to the Django docs for writing custom management commands
[Celery](http://celeryproject.org/) is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion (see [periodic tasks](http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html)). Depending on your app, it might be worth a gander. Celery is pretty easy to set up with django ([docs](http://docs.celeryproject.org/en/latest/django/)), and periodic tasks will actually skip missed tasks in case of a downtime. Celery also has built-in retry mechanisms, in case a task fails.
Set up a scheduled job?
[ "", "python", "django", "web-applications", "scheduled-tasks", "" ]
Performance is key: Is it better to cascade deletes/updates inside of the database or let Hibernate/JPA take care of it? Will this affect the ability to query for the data if cascades are inside of the DBMS? I am using HSQLDB if that matters.
In the case of cascading updates, you simply cannot do it in application space if you have foreign key constraints in the database. Example: say you have a lookup table for US states, with a primary key of the two-letter abbreviation. Then you have a table for mailing addresses that references it. Someone tells you that you mistakenly gave Montana the abbreviation "MO" instead of "MT" so you need to change it in the lookup table. ``` CREATE TABLE States (st CHAR(2) PRIMARY KEY, state VARCHAR(20) NOT NULL); INSERT INTO States VALUES ('MO', 'Montana'); CREATE TABLE Addresses (addr VARCHAR(20), city VARCHAR(20), st CHAR(2), zip CHAR(6), FOREIGN KEY (st) REFERENCES States(st)); INSERT INTO Addresses VALUES ('1301 East Sixth Ave.', 'Helena', 'MO', '59620'); ``` Now you go to fix the mistake, without the aid of database-side cascading updates. Below is a test using MySQL 5.0 (assume no records exist for Missouri, which actually does use the abbreviation "MO"). ``` UPDATE States SET st = 'MT' WHERE st = 'MO'; ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test/addresses`, CONSTRAINT `addresses_ibfk_1` FOREIGN KEY (`st`) REFERENCES `states` (`st`)) UPDATE Addresses SET st = 'MT' WHERE st = 'MO'; ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`test/addresses`, CONSTRAINT `addresses_ibfk_1` FOREIGN KEY (`st`) REFERENCES `states` (`st`)) UPDATE Addresses JOIN States USING (st) SET Addresses.st = 'MT', States.st = 'MT' WHERE States.st = 'MO'; ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test/addresses`, CONSTRAINT `addresses_ibfk_1` FOREIGN KEY (`st`) REFERENCES `states` (`st`)) ``` No application-side query can solve this situation. You need cascading updates in the database in order to perform the update in both tables atomically, before the referential integrity constraint is enforced.
**UPDATE:** it looks like a similar question was already answered here: [When/Why to use Cascading in SQL Server?](https://stackoverflow.com/questions/59297/whenwhy-to-use-cascading-in-sql-server) IMO the **correct answer to your question** is the usual **"it depends"**. If you are using the database to store sensitive information (e.g. financial, medical, etc.), or if someone outside of your application could have access to the database, I would vote for the Hibernate/JPA approach. If your database is for logging (e.g. web site traffic) or you are developing software with an embedded database, you can use cascade operations relatively safely. **In most cases I would vote for the Hibernate/JPA approach because it is more manageable and predictable.** Let me tell you a story. Some time ago a young country decided to change its national currency (it happens with young countries). A couple of years later a new DBA saw a row with the obsolete currency in the currency table and decided to delete it (who knows why). Guess what happened? 30% of the database was deleted because of cascading delete operations. IMO with cascade operations you have to be super careful with all your delete/update statements on the one hand, and you lose the power of constraints (i.e. foreign keys) for database validation on the other.
Cascading Deletes/Updates using JPA or Inside of Database?
[ "", "sql", "hibernate", "jpa", "hsqldb", "cascade", "" ]
I know PHP is usually used for web development, where there *is* no standard input, but PHP claims to be usable as a general-purpose scripting language, if you do follow its funky web-based conventions. I know that PHP prints to `stdout` (or whatever you want to call it) with `print` and `echo`, which is simple enough, but I'm wondering how a PHP script might get input from `stdin` (specifically with `fgetc()`, but any input function is good), or is this even possible?
It is possible to read from `stdin` by creating a file handle to `php://stdin` and then reading from it, for example with `fgets()` for a line (or, as you already stated, `fgetc()` for a single character): ``` <?php $f = fopen( 'php://stdin', 'r' ); while( $line = fgets( $f ) ) { echo $line; } fclose( $f ); ?> ```
Reading from the **STDIN** constant is the [recommended way](http://php.net/manual/en/wrappers.php.php): ``` <?php while (FALSE !== ($line = fgets(STDIN))) { echo $line; } ?> ```
PHP standard input?
[ "", "php", "file-io", "stdin", "" ]
Maybe there is a method that does this that I don't know about - I doubt it though - but I'm trying to convert an array of strings to an array of objects. Here is the problem: I'm reading a file in from the command line. The file represents several classes of the following types, each with its own data fields. Vehicle is the parent class of all who follow: Vehicle, Car, American Car, Foreign Car, Truck, Bicycle. I have no problem reading the file into a string array. However, I need to create objects of all these types and store them in an array of type Vehicle[]. For example, a portion of the file looks like this: * Vehicle * Kim Stanley Robinson * 2344 court drive * (221)885-7777 * stackoverflow@overflow.com * American Car * John Bunyon * 1010 binary lane * (221)885-55643 * bgt.com * convertable * made in detroit * union plant Where the class type is the first line, followed by the owner's name, address, phone number, and email address... Each type has fields particular to it, so a foreign car is not made in Detroit. Each of these fields appears on a separate line in the file. So what I've done is read the entire file into a string array. However, I need to find my types in the array of strings, create objects of those types, and store them in a Vehicle array. My main problem is that each data field is on a separate line. How should I approach this problem? This is Java code, by the way.
Initially reading the data into a String array is fine. Then you need to loop through that array, and based on the "first line" of each loop ("Vehicle", "American Car" etc.) you will know how many subsequent elements of the array belong to the same record. Something like this (I'll let you fill in the blanks yourself): ``` int i = 0; ArrayList<Vehicle> vehicles = new ArrayList<Vehicle>(); while (i < data.length) { if (data[i].equalsIgnoreCase("vehicle")) { Vehicle vehicle = new Vehicle(); vehicle.setOwner(data[++i]); ... vehicles.add(vehicle); } else if (data[i].equalsIgnoreCase("american car")) { ... } i++; } ```
The question is unclear. Do you want to know how to parse the file and use the words on each line to create an object of the corresponding type? Pseudo: ``` Vehicle_ptr myVeh = null; for each line in file switch line { case vehicle: myVeh = new Vehicle(); case American Car : myVeh = new AmericanCar(); default: if (line.startswithaninteger && (myVeh != NULL)) myVeh.address = line; etcetc. } ``` Tips: use type identifiers in the text file, for example: car:americancar address:12345 bla etcetc. Or use a serializer.
Array of Strings to an Array of Objects
[ "", "java", "arrays", "" ]
I am adding a bookmark from code. In Firefox, if the user has already bookmarked the URL, they should not be able to bookmark it again. How can I do this? Right now a new bookmark is created every time.
If you want to prevent people from clicking a bookmark link on your website a second time, you could use cookies to hide the link when a cookie is present on the client. As far as I know, checking the bookmarks library (Places) in Firefox from JavaScript on your website is impossible.
Obviously, you cannot, except via the two good suggestions above: use cookies, or user information (although if they registered, they probably already bookmarked the site!). The cookie solution is brittle, though, as users can erase them. Why you cannot: it would be an obvious security breach if a JavaScript program could read the user's list of bookmarks; besides the privacy concerns, URL parameters can contain sensitive information... Side note: adding a bookmark is done on user action (or should be!), so users should know if they have already added the bookmark, no?
How to know if url is already bookmarked
[ "", "javascript", "firefox", "bookmarks", "" ]
Assume I have a user-defined Java class called Foo such as: ``` public class Foo { private String aField; @Override public String toString() { return aField; } } ``` And a Collection such as: ``` List<Foo> aList; ``` What I am looking to do is sort the List alphabetically based upon each member's returned `.toString()` value. I have tried using the Collections.sort() method, but the result was not what I was attempting. What do I need to do in order to accomplish this?
Use the API `sort(List list, Comparator c)`, which takes a comparator, and implement it as you wish. Alternatively, if you do not specifically need a List, use a [`SortedSet`](http://java.sun.com/javase/6/docs/api/java/util/SortedSet.html); the same goes for the comparator.
``` Collections.sort(fooList, new Comparator<Foo>() { public int compare(Foo f1, Foo f2) { return f1.toString().compareTo(f2.toString()); } }); ``` Assuming that toString never returns null and that there are no null items in the list.
Alphabetically Sort a Java Collection based upon the 'toString' value of its member items
[ "", "java", "sorting", "collections", "" ]
I have a large DataGridView control that has several cells, most of which contain a button. How can I change the color of those buttons? This changes the "outline" of the button but not the button itself: ``` row.Cells[2].Style.BackColor = System.Drawing.Color.Red; ``` This doesn't seem to change anything that's visible: ``` row.Cells[2].Style.ForeColor = System.Drawing.Color.Red; ``` If it's not possible to change the background, is it possible to change the font on the button? Using .NET 2.0.
As per [MSDN](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewbuttoncell(VS.80).aspx): > When visual styles are enabled, the buttons in a button column are painted using a ButtonRenderer, and cell styles specified through properties such as DefaultCellStyle have no effect. Therefore, you have one of two choices. In your Program.cs you can remove this line: ``` Application.EnableVisualStyles(); ``` which will make it work, but make everything else look like crap. Your other option, and you're not going to like this one, is to inherit from **DataGridViewButtonCell** and override the Paint() method. You can then use the static method on the **ButtonRenderer** class called **DrawButton** to paint the button yourself. That means figuring out which state the cell is currently in (clicked, hover etc.) and painting the corners and borders etc... You get the idea; it's doable, but a HUGE pain. If you want to though, here's some sample code to get you started: ``` //Custom ButtonCell public class MyButtonCell : DataGridViewButtonCell { protected override void Paint(Graphics graphics, Rectangle clipBounds, Rectangle cellBounds, int rowIndex, DataGridViewElementStates elementState, object value, object formattedValue, string errorText, DataGridViewCellStyle cellStyle, DataGridViewAdvancedBorderStyle advancedBorderStyle, DataGridViewPaintParts paintParts) { ButtonRenderer.DrawButton(graphics, cellBounds, formattedValue.ToString(), new Font("Comic Sans MS", 9.0f, FontStyle.Bold), true, System.Windows.Forms.VisualStyles.PushButtonState.Default); } } ``` Then here's a test DataGridView: ``` DataGridViewButtonColumn c = new DataGridViewButtonColumn(); c.CellTemplate = new MyButtonCell(); this.dataGridView1.Columns.Add(c); this.dataGridView1.Rows.Add("Click Me"); ``` All this sample does is paint a button with the font set to "Comic Sans MS". It doesn't take into account the state of the button, as you'll see when you run the app. GOOD LUCK!!
I missed Dave's note on Tomas' answer, so I am just posting the simple solution to this. Set the **FlatStyle** property of the button column to Popup; then updating the BackColor and ForeColor changes the appearance of the button: ``` DataGridViewButtonColumn c = (DataGridViewButtonColumn)myGrid.Columns["colFollowUp"]; c.FlatStyle = FlatStyle.Popup; c.DefaultCellStyle.ForeColor = Color.Navy; c.DefaultCellStyle.BackColor = Color.Yellow; ```
Change Color of Button in DataGridView Cell
[ "", "c#", ".net", "winforms", ".net-2.0", "" ]
Despite closing streams in finally clauses I seem to constantly run into cleaning up problems when using Java. File.delete() fails to delete files, Windows Explorer fails too. Running System.gc() helps sometimes but nothing short of terminating the VM helps consistently and that is not an option. Does anyone have any other ideas I could try? I use Java 1.6 on Windows XP. UPDATE: FLAC code sample removed, the code worked if I isolated it. UPDATE: More info, this happens in Apache Tomcat, Commons FileUpload is used to upload the file and could be the culprit, also I use Runtime.exec() to execute LAME in a separate process to encode the file, but that seems unlikely to cause this since ProcessExplorer clearly indicates that java.exe has a RW lock on the file and LAME terminates fine. UPDATE: I am working with the assumption that there is a missing close() or a close() that does not get called somewhere in my code or external library. I just can't find it!
The code you posted looks good - it should not cause the issues you are describing. I understand you posted just a piece of the code you have - can you try extracting just this part to a separate program, run it and see if the issue still happens? My guess is that there is some other place in the code that does `new FileInputStream(path);` and does not close the stream properly. You might be just seeing the results here when you try to delete the file.
I assume you're using [jFlac](http://jflac.sourceforge.net/). I downloaded jFlac 1.3 and tried your sample code on a flac freshly downloaded from the internet live music archive. For me, it worked. I even monitored it with [ProcessExplorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) and saw the file handles be opened and then released. Is your test code truly as simple as what you gave us, or is that a simplified version of your code? For me, once close() was called, the handle was released and the file was subsequently successfully deleted. Try changing your infinite loop to: ``` File toDelete = new File(path); if (!toDelete.delete()) { System.out.println("Could not delete " + path); System.out.println("Does it exist? " + toDelete.exists()); } ``` or if you want to keep looping, then put a 1 second sleep between attempts to delete the file. I tried this with JDK6 on WinXP Pro. Don't forget to put a try/catch around your `close()` and log errors if the close throws an exception.
Java keeps lock on files for no apparent reason
[ "", "java", "file", "locking", "" ]
When building console applications that take parameters, you can use the arguments passed to `Main(string[] args)`. In the past I've simply indexed/looped that array and done a few regular expressions to extract the values. However, when the commands get more complicated, the parsing can get pretty ugly. So I'm interested in: * Libraries that you use * Patterns that you use Assume the commands always adhere to common standards such as [answered here](https://stackoverflow.com/questions/108728/suggestions-for-implemetation-of-a-commnd-line-application-interface).
I would strongly suggest using [NDesk.Options](http://www.ndesk.org/Options) ([Documentation](http://www.ndesk.org/doc/ndesk-options/)) and/or [Mono.Options](https://github.com/mono/mono/blob/master/mcs/class/Mono.Options/Mono.Options/Options.cs) (same API, different namespace). An [example from the documentation](http://www.ndesk.org/doc/ndesk-options/NDesk.Options/OptionSet.html#T:NDesk.Options.OptionSet:Docs:Example:1): ``` bool show_help = false; List<string> names = new List<string> (); int repeat = 1; var p = new OptionSet () { { "n|name=", "the {NAME} of someone to greet.", v => names.Add (v) }, { "r|repeat=", "the number of {TIMES} to repeat the greeting.\n" + "this must be an integer.", (int v) => repeat = v }, { "v", "increase debug message verbosity", v => { if (v != null) ++verbosity; } }, { "h|help", "show this message and exit", v => show_help = v != null }, }; List<string> extra; try { extra = p.Parse (args); } catch (OptionException e) { Console.Write ("greet: "); Console.WriteLine (e.Message); Console.WriteLine ("Try `greet --help' for more information."); return; } ```
I really like the Command Line Parser Library (<http://commandline.codeplex.com/>). It has a very simple and elegant way of setting up parameters via attributes: ``` class Options { [Option("i", "input", Required = true, HelpText = "Input file to read.")] public string InputFile { get; set; } [Option(null, "length", HelpText = "The maximum number of bytes to process.")] public int MaximumLength { get; set; } [Option("v", null, HelpText = "Print details during execution.")] public bool Verbose { get; set; } [HelpOption(HelpText = "Display this help screen.")] public string GetUsage() { var usage = new StringBuilder(); usage.AppendLine("Quickstart Application 1.0"); usage.AppendLine("Read user manual for usage instructions..."); return usage.ToString(); } } ```
Best way to parse command line arguments in C#?
[ "", "c#", ".net", "command-line-arguments", "" ]
I am using JSON to parse data and connect to a PHP file. I am not sure what the problem is because I am a newbie to flex. This is the error I am receiving: ``` JSONParseError: Unexpected < encountered at com.adobe.serialization.json::JSONTokenizer/parseError() at com.adobe.serialization.json::JSONTokenizer/getNextToken() at com.adobe.serialization.json::JSONDecoder/nextToken() at com.adobe.serialization.json::JSONDecoder() at com.adobe.serialization.json::JSON$/decode() at DressBuilder2/getPHPData() at DressBuilder2/__getData_result() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at mx.rpc.http.mxml::HTTPService/http://www.adobe.com/2006/flex/mx/internal::dispatchRpcEvent() at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::resultHandler() at mx.rpc::Responder/result() at mx.rpc::AsyncRequest/acknowledge() at DirectHTTPMessageResponder/completeHandler() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at flash.net::URLLoader/onComplete() ``` Here is the actual mxml code: ``` <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" width="535" height="345"> <mx:Script> <![CDATA[ import mx.events.DataGridEvent; import mx.controls.TextInput; import mx.rpc.events.ResultEvent; import mx.collections.ArrayCollection; import com.adobe.serialization.json.JSON; [Bindable] private var dataArray:ArrayCollection; private function initDataGrid():void { dataArray = new ArrayCollection(); getData.send(); } private function getPHPData(event:ResultEvent):void { var rawArray:Array; var rawData:String = String(event.result); rawArray = JSON.decode(rawData) as Array; dataArray = new ArrayCollection(rawArray); } private function sendPHPData():void { var objSend:Object = new Object(); var dataString:String = JSON.encode(dataArray.toArray()); dataString = escape(dataString); objSend.setTutorials = 
"true"; objSend.jsonSendData = dataString; sendData.send(objSend); } private function updatedPHPDataResult(event:ResultEvent):void { lblStatus.text = String(event.result); } private function checkRating(event:DataGridEvent):void { var txtIn:TextInput = TextInput(event.currentTarget.itemEditorInstance); var curValue:Number = Number(txtIn.text); if(isNaN(curValue) || curValue < 0 || curValue > 10) { event.preventDefault(); lblStatus.text = "Please enter a number rating between 0 and 10"; } } ]]> </mx:Script> <mx:HTTPService id="getData" url="http://www.keishalexie.com/imd465/forum.php" useProxy="false" method="GET" resultFormat="text" result="getPHPData(event)"> <mx:request xmlns=""> <getTutorials>"true"</getTutorials> </mx:request> </mx:HTTPService> <mx:HTTPService id="sendData" url="http://www.keishalexie.com/imd465/forum.php" useProxy="false" method="GET" resultFormat="text" result="updatedPHPDataResult(event)"> </mx:HTTPService> <mx:Binding source="dgData.dataProvider as ArrayCollection" destination="dataArray"/> <mx:Panel x="0" y="0" width="535" height="345" layout="absolute" title="Forum"> <mx:DataGrid id="dgData" x="10" y="10" width="495" height="241" dataProvider="{dataArray}" creationComplete="{initDataGrid()}" editable="true" itemEditEnd="{checkRating(event)}"> <mx:columns> <mx:DataGridColumn headerText="Name" dataField="name" editable="false"/> <mx:DataGridColumn headerText="Author" dataField="author" width="115" editable="false"/> <mx:DataGridColumn headerText="Rating" dataField="rating" width="50" editable="true" /> </mx:columns> </mx:DataGrid> <mx:Button x="10" y="259" label="UpdateDatabase" id="butUpdate" click="{sendPHPData()}"/> <mx:Label x="140" y="261" id="lblStatus"/> </mx:Panel> </mx:Application> ``` Here is the PHP: ``` <?php $USERNAME = ''; //database username $PASSWORD = ''; //database password $DATABASE = ''; //database name $URL = ''; //database location if(isset($_GET['getTutorials'])) { mysql_connect($URL, $USERNAME, $PASSWORD); 
mysql_select_db($DATABASE) or die('Cannot connect to database.'); $returnArray = array(); $query = 'SELECT * FROM Tutorials'; $result = mysql_query($query); while($row = mysql_fetch_assoc($result)) { array_push($returnArray, $row); } mysql_close(); echo json_encode($returnArray); } elseif(isset($_GET['setTutorials'])) { $jsonString = urldecode($_GET['jsonSendData']); $jsonString = str_replace("\\", "", $jsonString); $data = json_decode($jsonString, true); mysql_connect($URL, $USERNAME, $PASSWORD); mysql_select_db($DATABASE) or die('Cannot connect to database.'); foreach ($data as $tutorialEntry) { $query = sprintf( 'UPDATE Tutorials SET rating = "%s" WHERE id = "%s"', mysql_real_escape_string($tutorialEntry['rating']), mysql_real_escape_string($tutorialEntry['id'])); $result = mysql_query($query); if(!$result) { mysql_close(); echo mysql_error(); return; } } mysql_close(); echo "database updated"; } ?> ```
Visiting <http://www.keishalexie.com/imd465/forum.php?getTutorials=1> (which is what your code is calling) returns: *Fatal error*: Call to undefined function: json\_encode() in /homepages/38/d177816689/htdocs/keishalexie/imd465/forum.php on line 23. That is not a JSON document. Visit <http://www.keishalexie.com/imd465/forum.php?getTutorials=1> in your browser as you fix the problem. Then, once it looks good there, start working on the Flex side of things.
Looks like you have an unexpected < in your JSON data; it sounds like you're getting HTML back from the server instead of nice clean JSON. Use [Firebug](http://getfirebug.com) or an HTTP headers tool to examine the response from the server (or just load the URL in a browser) and see what the response is. You might need to disable error logging when outputting your JSON data, or, if you are using a templating system, not use it when sending JSON.
JSON Parse Error using FLEX and PHP
[ "", "php", "apache-flex", "json", "" ]