Analyzing the location.hash with this simple JavaScript code: `<script type="text/javascript">alert(location.hash);</script>` I have a difficult time separating GET variables that contain an & (encoded as %26) from an & used to separate variables. Example one: * `#code=php&age=15d` Example two: * `#code=php%20%26%20code&age=15d` As you can see, example 1 has no problems, but getting JavaScript to know that example two means "code=php & code" is beyond my abilities. (Note: I'm not really using these variable names, and changing them to something else will only work so long as a search term does not match a search key, so I wouldn't consider that a valid solution.)
There is no difference between %26 and & in a fragment identifier (‘hash’). ‘&’ is only a reserved character with special meaning in a query (‘search’) segment of a URI. Escaping ‘&’ to ‘%26’ need be given no more application-level visibility than escaping ‘a’ to ‘%61’. Since there is no standard encoding scheme for hiding structured data within a fragment identifier, you could make your own. For example, use ‘+XX’ hex-encoding to encode a character in a component:

```
http://www.example.com/page#code=php+20+2B+20php&age=15d

function encodeHashComponent(x) {
    return encodeURIComponent(x).split('%').join('+');
}

function decodeHashComponent(x) {
    return decodeURIComponent(x.split('+').join('%'));
}

function getHashParameters() {
    var parts = location.hash.substring(1).split('&');
    var pars = {};
    for (var i = parts.length; i-- > 0;) {
        var kv = parts[i].split('=');
        var k = kv[0];
        var v = kv.slice(1).join('=');
        pars[decodeHashComponent(k)] = decodeHashComponent(v);
    }
    return pars;
}
```
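To make the scheme concrete, here is a small self-contained sketch of round-tripping a value that contains both spaces and a literal '&' (the helper functions repeat the ones from this answer so the snippet runs on its own):

```javascript
// '+XX' scheme: percent-encode, then swap '%' for '+' so that the
// encoded text can never contain a literal '&' or '%'.
function encodeHashComponent(x) {
    return encodeURIComponent(x).split('%').join('+');
}

function decodeHashComponent(x) {
    return decodeURIComponent(x.split('+').join('%'));
}

var encoded = encodeHashComponent('php & code');
// spaces become +20 and '&' becomes +26, so the '&' that separates
// hash parameters can no longer be confused with data.
console.log(encoded); // "php+20+26+20code"
console.assert(decodeHashComponent(encoded) === 'php & code');
```

Because the encoded form contains no literal '&' at all, splitting the hash on '&' is always unambiguous.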
Testing on Firefox 3.1, it looks as if the browser converts hex codes to the appropriate characters when populating the location.hash variable, so there is no way JavaScript can know whether the original was a single character or a hex code. If you're trying to encode a character like & inside of your hash variables, I would suggest replacing it with another string. You can also parse the string in weird ways, like (JS 1.6 here):

```
function pairs(xs) {
    return xs.length > 1
        ? [[xs[0], xs[1]]].concat(pairs(xs.slice(2)))
        : [];
}

function union(xss) {
    return xss.length == 0 ? [] : xss[0].concat(union(xss.slice(1)));
}

function splitOnLast(s, sub) {
    return s.indexOf(sub) == -1
        ? [s]
        : [s.substr(0, s.lastIndexOf(sub)),
           s.substr(s.lastIndexOf(sub) + sub.length)];
}

function objFromPairs(ps) {
    var o = {};
    for (var i = 0; i < ps.length; i++) {
        o[ps[i][0]] = ps[i][1];
    }
    return o;
}

function parseHash(hash) {
    return objFromPairs(
        pairs(
            union(
                hash
                    .substr(1)
                    .split("=")
                    .map(function (s) splitOnLast(s, '&')))));
}

>>> location.hash
"#code=php & code&age=15d"
>>> parseHash(location.hash)
{ "code": "php & code", "age": "15d" }
```
Detect difference between & and %26 in location.hash
[ "", "javascript", "" ]
I can already use Excel (2007) to import data from SQL Server 2005. I add a data connection and I enter a custom SQL query that retrieves my desired data. Cool. But what I'd like to add is the ability to parameterize that query based on a value found in a known cell on the spreadsheet. My query would go from ``` SELECT * FROM dbo.MyDataTable WHERE Col1 = 'apples' ``` to something like ``` SELECT * FROM dbo.MyDataTable WHERE Col1 = 'Cell("B2")' ``` Is this possible? If so, how?
If you're [using MS Query](http://office.microsoft.com/en-us/excel/HA100996641033.aspx?pid=CH100648471033) to get the data into Excel, [this page](http://office.microsoft.com/en-us/excel/HP102161131033.aspx?pid=CH100648471033) shows how to use a value from a cell on the worksheet as a parameter in your query.
Try structuring your code as follows:

```
Dim strSQL As String
Dim strFruit As String

strFruit = CStr(ThisWorkbook.Sheets("Sheet1").Range("A1").FormulaR1C1)
strSQL = "SELECT * FROM dbo.MyDataTable WHERE Col1 = '" & strFruit & "'"
```

I have found it useful to load the parameter into a variable first, before merging it into the SQL query. It makes your SQL string more readable and allows you to check/clean the parameter before using it.
How do I use a parameter in an Excel external data request?
[ "", "sql", "excel", "excel-2007", "" ]
I've got a .NET calendar up and running, bringing information from a database. By default the day number has a postback action applied to it. What I'm trying to do is have that action apply to the whole cell, so the user doesn't need to click on just the text link. In the dayRenderer handler I have the following line to try to replicate the action, but I'm not sure how to set the second argument. It appears to be given an id, e.g. 3315, but I'm not sure how to get the required id manually for the code below. I hope this makes sense! I'm new to .NET so not very savvy with my terminology!

```
e.Cell.Attributes.Add("OnClick",
    string.Format("javascript:__doPostBack('{0}','{1}')", Calendar1.ClientID, ***ID_NEEDED_HERE***));
```
The parameter is the number of days since Jan 1 2000 for the first day of your calendar, preceded by a 'V'. So an ID of 'V0' means Jan 1 2000, an ID of 'V5' means Jan 6 2000, an ID of 'V-5' means Dec 27, 1999. Cheers, Ruben
Putting `e.Cell.Attributes.Add("OnClick", e.SelectUrl);` in your dayRenderer will simulate the day-number click.
.net calendar - making the whole cell perform postback (clickable)
[ "", "c#", ".net", "calendar", "postback", "" ]
In C# 3.0 you can create an anonymous class with the following syntax:

```
var o = new { Id = 1, Name = "Foo" };
```

Is there a way to add these anonymous class instances to a generic list? Example:

```
var o = new { Id = 1, Name = "Foo" };
var o1 = new { Id = 2, Name = "Bar" };

List<var> list = new List<var>();
list.Add(o);
list.Add(o1);
```

Another example:

```
List<var> list = new List<var>();

while (....)
{
    ....
    list.Add(new { Id = x, Name = y });
    ....
}
```
You could do:

```
var list = new[] { o, o1 }.ToList();
```

There are lots of ways of skinning this cat, but basically they'll all use type inference somewhere - which means you've got to be calling a generic method (possibly as an extension method). Another example might be:

```
public static List<T> CreateList<T>(params T[] elements)
{
    return new List<T>(elements);
}

var list = CreateList(o, o1);
```

You get the idea :)
Here is the answer.

```
string result = String.Empty;

var list = new[]
{
    new { Number = 10, Name = "Smith" },
    new { Number = 10, Name = "John" }
}.ToList();

foreach (var item in list)
{
    result += String.Format("Name={0}, Number={1}\n", item.Name, item.Number);
}

MessageBox.Show(result);
```
A generic list of anonymous class
[ "", "c#", ".net", "generics", "anonymous-types", "" ]
Is there a way to format a date to have mmm:ss instead of hh:mm:ss? Basically, 7:15:28 should instead show up as 435:28. The SimpleDateFormat class does not support this format. Is there one that does, or will I have to implement a variation myself?
You'll have to implement it yourself, but you should bear in mind that implementing a Formatter is not a simple task, as it has many l10n considerations.
If all you want to do is format a Date into the format you specified (with no other parts of the Date included) you can just do it very simply:

```
public String format(Date date) {
    Calendar cal = new GregorianCalendar();
    cal.setTime(date);
    // Use HOUR_OF_DAY (0-23) rather than HOUR (0-11) so times after noon work,
    // and zero-pad the seconds so e.g. 435:05 doesn't print as 435:5.
    int minutes = cal.get(Calendar.HOUR_OF_DAY) * 60 + cal.get(Calendar.MINUTE);
    return minutes + ":" + String.format("%02d", cal.get(Calendar.SECOND));
}
```
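If the value being formatted is really an elapsed duration rather than a wall-clock time, a small sketch that avoids Calendar entirely may be simpler (the method name here is just for illustration):

```java
// Sketch: format an elapsed duration in milliseconds as total-minutes:seconds,
// e.g. 7h15m28s (26,128,000 ms) -> "435:28".
public static String formatMinutesSeconds(long millis) {
    long totalSeconds = millis / 1000;
    return String.format("%d:%02d", totalSeconds / 60, totalSeconds % 60);
}
```

This sidesteps time-zone and 12/24-hour pitfalls entirely, since a duration has no notion of either.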
Custom DateFormats mmm:ss
[ "", "java", "" ]
I know that boxing is a popular concept with plenty of information available on it, but I have a few questions which I can't really find answers to: 1) If boxing leads to a value type (struct) being converted to an object (Reference type), or reference type, then why use a value type which will be boxed and incur a performance penalty? I am aware of the benefits and suitability in certain cases of either a struct or class. It is said (1) that values (value types) tend to live on the stack in a temporary storage space, but how long for? If I don't need the type, how can I ensure it is taken care of and disposed at that moment? Or is this where the disposable pattern comes into play? I assume the reason to use a struct will be due to its benefits. Interestingly, if I use a struct to store two strings and a DateTime field, the struct will hold two references (strings) and the DateTime together. I obviously assume this is quicker than the values being scattered. Is there anything I need to be aware of in this design? (2). 1) [http://en.csharp-online.net/Classes, Structs, and Objects—Boxing and Unboxing](http://en.csharp-online.net/Classes,_Structs,_and_Objects%E2%80%94Boxing_and_Unboxing) 2) <http://dotnetperls.com/Content/Struct-Examples.aspx> I have done a search on here for the answers I am after, but no luck. I usually do searches on this site for topics such as GC, generics, exception handling, etc, as there is a lot of wisdom to learn and share. Thanks for the (potential) education to all posters! Please excuse any potential naivety. Learning the internals takes me nicely to spending some time on understanding IL, etc (something to tackle, soon).
If you never assign the value type to a reference (an `object` or interface variable), boxing will not occur. When in doubt, check your type against the following [guidelines](http://msdn.microsoft.com/en-us/library/y23b5415(vs.71).aspx) — a struct is appropriate for types that: * Act like primitive types. * Have an instance size under 16 bytes. * Are immutable. * Have value semantics that are desirable. I also usually consider the lifetime of such a variable. If it is a local variable used within a method then I would tend to use a struct (otherwise a class).
You should use value types because of their logical benefit, not the performance gains. That being said, because value types are managed on the stack, they do not need to participate in garbage collection. If you have a type that is constantly created and discarded (like an int, float, double, etc.), then you can get a good boost by turning these into structs. The thing to be careful of is that you should only really consider this if you can also make the struct immutable.
Questions about boxing
[ "", "c#", "boxing", "" ]
I have 2 tables in SQL:

```
Table 1
    Step Id
    Step Name

Table 2
    Profile Id
    Step Id
    Completed
```

I would like to return the following results even if there is no match in Table 2:

```
Results
    Table1.Step Id
    Table1.Step Name
    Table2.Profile Id
    Table2.Completed
```

The way I am doing this in SQL is the following:

```
select *
from [Table 1] t1
left join [Table 2] t2 on t1.[Step Id] = t2.[Step Id]
```

This produces the results I expect. When I translate this into LINQ:

```
public static List<UserCompletion> GetStepCompletion(string category, string profileId)
{
    List<Step> step = GetSteps(category);
    List<UserStep> userStep = GetUserSteps(category, profileId);

    var q = from s in step
            join us in userStep on s.Id equals us.StepId
            select new UserCompletion
            {
                StepId = s.Id,
                Headline = s.StepName,
                ProfileId = us.ProfileId,
                Completed = us.Completed
            };

    return q.ToList();
}
```

It works, but like a `JOIN`, not a `left join`: I only get matching results back. Also, `UserCompletion` is an object that I return from this method.
You could also try this (assuming `us.Completed` is boolean):

```
var q = from s in step
        let us = (from i in userStep
                  where s.Id == i.StepId
                  select i).SingleOrDefault()
        select new UserCompletion
        {
            StepId = s.Id,
            Headline = s.StepName,
            ProfileId = us.ProfileId,
            Completed = us == null ? false : us.Completed
        };
```

This won't turn into a join in SQL, but a nested select statement, something like this:

```
select StepId,
       Headline,
       ProfileId,
       isnull((select top(1) Completed
               from userStep
               where StepId = Id), 0) as Completed
from step
```
Try something along the lines of the following:

```
var q = from s in step
        join us in userStep on s.Id equals us.StepId into tmpjoin
        from x in tmpjoin.DefaultIfEmpty()
        select new UserCompletion { ... }
```
.NET Linq Left Join
[ "", "c#", ".net", "linq", "join", "" ]
I have an aspx page which has a GridView and a Panel with summary data. The summary data is in a control and the GridView is on the aspx page. When someone clicks an item in the summary data, I want to rebind the GridView, but I can't figure out how to do it from the control. I tried using FindControl, but it returns null, so I have probably not used it correctly:

```
GridView gv = (GridView)FindControl("gvSearchResults"); // returns null
```
You should pass a GridView reference to the control as a property of the control:

```
public GridView GridViewToRebind { get; set; }
```
What kind of control is your Summary Data in? Maybe you could add an EventHandler to your Summary Data control that fires when you click on an item. You would write the handler for the event in your .aspx code-behind, and then link them up in your .aspx's Page\_Load(). Here's a quick example: Default.aspx:

```
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs"
    Inherits="GridViewEventHandling._Default" %>
<%@ Register TagName="MyControl" TagPrefix="mc" Src="~/SampleData.ascx" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:GridView ID="uxGridView" runat="server" AutoGenerateColumns="true">
        </asp:GridView>
        <mc:MyControl ID="myControl" runat="server" />
    </div>
    </form>
</body>
</html>
```

Default.aspx.cs:

```
using System;
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace GridViewEventHandling
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            myControl.OnLinkClick += new EventHandler(myControl_OnLinkClick);
        }

        private void myControl_OnLinkClick(object sender, EventArgs e)
        {
            uxGridView.DataSource = GetDataSource();
            uxGridView.DataBind();
        }

        private IDictionary<string, string> GetDataSource()
        {
            IDictionary<string, string> dict = new Dictionary<string, string>();
            dict.Add("Product 1", "Description 1");
            dict.Add("Product 2", "Description 2");
            dict.Add("Product 3", "Description 3");
            return dict;
        }
    }
}
```

SampleData.ascx:

```
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="SampleData.ascx.cs"
    Inherits="GridViewEventHandling.SampleData" %>
<asp:LinkButton ID="item1" OnClick="HandleClick" CommandName="BindGrid"
    CommandArgument="1" Text="Item 1" runat="server" /><br />
<asp:LinkButton ID="item2" OnClick="HandleClick" CommandName="BindGrid"
    CommandArgument="2" Text="Item 2" runat="server" /><br />
<asp:LinkButton ID="item3" OnClick="HandleClick" CommandName="BindGrid"
    CommandArgument="3" Text="Item 3" runat="server" /><br />
```

SampleData.ascx.cs:

```
using System;

namespace GridViewEventHandling
{
    public partial class SampleData : System.Web.UI.UserControl
    {
        public event EventHandler OnLinkClick;

        protected void HandleClick(object sender, EventArgs args)
        {
            if (OnLinkClick != null)
                OnLinkClick(sender, args);
        }
    }
}
```
How can I access a GridView on a page from a control on a Page?
[ "", "c#", "asp.net", "" ]
When coding new JavaScript-heavy websites, in which order do you code for the web browsers? I can see these possible orders, but I am not sure which I like best: 1. Code for one first and get it working well, then start testing with the others and fix errors as I go. * This will allow for the most rapid development (with Firefox at least), but I've learned from experience that debugging IE with so much going on at once can be a pain! 2. Code for both at the same time. In other words, for each new feature, ensure it works with both browsers before moving on. * This seems like it will actually take more time, so maybe do several features in Firefox then move to IE to patch them up. What do you all do? **Edit 1:** To respond to a couple of answers here: @jQuery usage: For some reason I was not expecting this kind of a response; however, now that this seems to be the overwhelmingly accepted answer, I guess I should tell everyone a few more things about the specifics of my app. This is actually the [DynWeb](https://stackoverflow.com/questions/521723/reflective-web-application) that I started another question for, and as I'm developing, a lot of the important code seems to require that I use document.whatever() instead of any jQuery or Prototype functions that I could find. Specifically, when dynamically importing changing CSS, I have to use something similar to:

```
var cssid = document.all ? 'rules' : 'cssRules'; // found this to take care of IE and Firefox
document.styleSheets[sheetIndex][cssid][cssRule].style[element] = value;
```

And I expect that I will have to continue to use this kind of raw coding, currently unsupported by either jQuery or Prototype, in the future. So while I would normally accept jQuery as an answer, I cannot, as it is not a solution for this particular webapp.
@Wedge and bigmattyh: As the webapp is supposed to build other webapps, part of the criteria is that anything it builds looks and works functionally the same in whatever browsers I support (right now I'm thinking Firefox and IE7/8, maybe more later on). Since this is a more interesting (and much more complicated) problem: are there any sites, references, or insights you may have for specific trouble areas (CSS entities, specific JavaScript pitfalls and differences, etc.) and how to avoid them? I'm almost certain that I am going to have to have some sort of `isIE` variable and simply perform different actions based on that, but I would like to avoid it as much as possible. Thanks for your input so far! I will keep this open for the rest of the day to see what others may have to say, and will accept an answer sometime tonight.
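On the `isIE` question: for cases like the stylesheet snippet above, you can often detect the capability instead of the browser, which keeps the branching local to each feature. A minimal sketch (the helper name is hypothetical):

```javascript
// Standards browsers expose sheet.cssRules; older IE exposes sheet.rules.
// Detect the property itself rather than branching on a global isIE flag.
function getCssRules(sheet) {
    return ('cssRules' in sheet) ? sheet.cssRules : sheet.rules;
}
```

The same test-the-capability pattern covers most of the places where an `isIE` variable would otherwise creep in, and it keeps working when a new browser shows up with one behavior or the other.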
This is sort of a trick question. In my opinion you need to work in this order: **1: Conform to Standards** This gets you closest to working in every browser without having to test against every browser. Additionally, you gain the huge benefit that your site should work with any new browser that comes along (Chrome is a good example) so long as it's well made and standards compliant. It also makes it easier to tweak your site to work in specific browsers because the way that the popular browsers deviate from standards compliance is well known. **2: Support the Most Used Browsers *(For Your Site)*** Note carefully the distinction between the breakdown of browser usage on the internet vs. browser usage *on your site*. On the internet as a whole IE is the most popular browser with Firefox a close second and Safari, Opera, and Chrome taking up most of the remainder. However, the demographics of your site's visitors can turn these numbers upside down. On sites that cater to a more technically savvy crowd it's common for firefox to be the dominant browser with IE in the distinct minority. **3: Support Other Browsers as Needed** You need to be very explicit about the fact that browser compatibility is an operating cost for your site, and you need to decide where you draw the line. Depending on your site's purpose and business model it may be fine to support only the most popular browsers, or even a subset of them. On the other hand, it may be a vital business concern to support everything under the Sun, including IE5. It's ok to make a conscious decision to not fully support every browser if you think the cost/benefit ratio is too high to justify it. Indeed, many of the most popular sites on the internet do not work well in older and niche browsers. Though you should strive to make your site still functional in the least popular browsers, even if there are serious appearance or usability problems.
Firefox first, then IE. If it works in Firefox, it is more likely to work in the other non-IE browsers; IE sometimes requires special magic.
Design order: Firefox, IE, or both?
[ "", "javascript", "internet-explorer", "firefox", "" ]
On a regular basis, I validate my function arguments:

```
public static void Function(int i, string s)
{
    Debug.Assert(i > 0);
    Debug.Assert(s != null);
    Debug.Assert(s.Length > 0);
}
```

Of course the checks are "valid" in the context of the function. Is this common industry practice? What is common practice concerning function argument validation?
The accepted practice is as follows if the values are not valid or will cause an exception later on:

```
if (i <= 0)
    throw new ArgumentOutOfRangeException("i", "parameter i must be greater than 0");

if (string.IsNullOrEmpty(s))
    throw new ArgumentNullException("s", "the parameter s needs to be set ...");
```

So the list of basic argument exceptions is as follows:

```
ArgumentException
ArgumentNullException
ArgumentOutOfRangeException
```
What you wrote are **preconditions**, and an essential element in **Design by Contract**. Google (or "StackOverflow":) for that term and you'll find quite a lot of good information about it, and some bad information, too. Note that the method includes also **postconditions** and the concept of **class invariant**. Let's leave it clear that assertions are a valid mechanism. Of course, they're usually ([not always](https://stackoverflow.com/questions/17732/when-should-assertions-stay-in-production-code)) *not* checked in Release mode, so this means that you have to test your code before releasing it. If assertions are left enabled and an assertion is violated, the standard behaviour in some languages that use assertions (and in Eiffel in particular) is to throw an assertion violation exception. Assertions left unchecked are *not* a convenient or advisable mechanism if you're publishing a code library, nor (obviously) a way to validate direct possibly incorrect input. If you have "possibly incorrect input" you have to design as part of the normal behaviour of your program an input validation *layer*; but you can still freely use assertions in the internal modules. --- Other languages, like Java, have more of a tradition of **explicitly checking arguments and throwing exceptions if they're wrong**, mainly because these languages don't have a strong "assert" or "design by contract" tradition. (It may seem strange to some, but I find the differences in tradition respectable, and not necessarily evil.) See also [this related question](https://stackoverflow.com/questions/117171/design-by-contract-tests-by-assert-or-by-exception).
Validating function arguments?
[ "", "c#", "validation", "function", "parameters", "arguments", "" ]
I have some functions that use curl to pull information off a couple of sites and insert it into my database. I was just wondering what is the best way to go about executing this task every 24 hours? I am running on Windows now, but will probably switch to Linux once I am live (if that makes a difference). I am working inside the Symfony framework. I hear cron jobs can do this... but looking at the site it seems to work remotely and I would rather just keep things in house... Can I just "run a service" on my computer? Whatever that means ;) (have heard it used) Thanks for any help, Andrew
This is exactly what Cron (linux) or Scheduled Tasks (windows) are for. You can run them on your application server to keep everything in one place. For example, I have a cron running on my home server to backup its MySQL databases every day. Only one system is involved in this process.
Adding `0 0 * * * php /path/to/your/cronjob.php` to your [crontab](http://en.wikipedia.org/wiki/Cron) should accomplish this.
running a php task every 24 hours
[ "", "php", "service", "cron", "scheduled-tasks", "" ]
I am trying to build a jQuery script that will open a new browser window (popup window) for a mail-like application (the user can double click on mails and they will open in a new window). That is not particularly hard. My problem is that I want to keep track of the opened windows, so that if a user double clicks on the same mail item a second time, it will just set focus on the already open popup window, and not reload it. I have it working in Firefox, but Internet Explorer returns the following error:

```
Line: 51
Error: The interface is unknown.
```

In particular, it happens when the user has closed a popup window and then double clicks on the mail item to open it again. My javascript/jquery skills are rudimentary at best, so I hope somebody here can help out. Here is the code for the script.

```
(function($) {
    var _popupTracker = {};
    var _popupCounter = 0;

    $.fn.openPopupWindow = function(options) {
        var defaults = {
            height: 600,    // sets the height in pixels of the window.
            width: 600,     // sets the width in pixels of the window.
            toolbar: 0,     // determines whether a toolbar (includes the forward and back buttons) is displayed {1 (YES) or 0 (NO)}.
            scrollbars: 0,  // determines whether scrollbars appear on the window {1 (YES) or 0 (NO)}.
            status: 0,      // whether a status line appears at the bottom of the window {1 (YES) or 0 (NO)}.
            resizable: 1,   // whether the window can be resized {1 (YES) or 0 (NO)}. Can also be overloaded using resizable.
            left: 0,        // left position when the window appears.
            top: 0,         // top position when the window appears.
            center: 0,      // should we center the window? {1 (YES) or 0 (NO)}. overrides top and left
            createnew: 0,   // should we create a new window for each occurrence {1 (YES) or 0 (NO)}.
            location: 0,    // determines whether the address bar is displayed {1 (YES) or 0 (NO)}.
            menubar: 0,     // determines whether the menu bar is displayed {1 (YES) or 0 (NO)}.
        };

        var options = $.extend(defaults, options);
        var obj = this;

        // center the window
        if (options.center == 1) {
            options.top = (screen.height - (options.height + 110)) / 2;
            options.left = (screen.width - options.width) / 2;
        }

        var parameters = "location=" + options.location +
            ",menubar=" + options.menubar +
            ",height=" + options.height +
            ",width=" + options.width +
            ",toolbar=" + options.toolbar +
            ",scrollbars=" + options.scrollbars +
            ",status=" + options.status +
            ",resizable=" + options.resizable +
            ",left=" + options.left +
            ",screenX=" + options.left +
            ",top=" + options.top +
            ",screenY=" + options.top;

        // target url
        var target = obj.attr("href");

        // test if popup window is already open; if it is, just give it focus.
        var popup = _popupTracker[target];
        if (options.createnew == 0 && popup !== undefined && !popup.closed) {
            popup.focus();
        } else {
            var name = "PopupWindow" + _popupCounter;
            _popupCounter++;

            // open window
            popup = window.open(target, name, parameters);
            _popupTracker[target] = popup;
            _popupTracker[target].focus();
        }

        return false;
    };
})(jQuery);
```

The line of code that is giving me the error is:

```
if (options.createnew == 0 && popup !== undefined && !popup.closed)
```

Thanks, Egil. **UPDATE:** Turns out that this is in fact an IE8 thing, at least the version in the Windows 7 beta. I put up a test page (<http://egil.dk/popuptest/popup-source.htm>) and it seems to work as expected in my colleague's IE7. Gah, time wasted! **UPDATE 2:** I should probably tell how to reproduce the error. Go to <http://egil.dk/popuptest/popup-source.htm>, click on one of the links, i.e. "something 1"; after the popup finishes loading, tab back to the parent window, and click on the same link again. This time, the popup window will just receive focus again and NOT reload (this is intentional and what I want). Now close the popup window, and then click on the same link again. This produces the error in the IE8 beta. In Firefox it correctly reopens.
The only thing that I noticed is that you have an extra comma after menubar in your defaults. After removing that last comma it worked fine for me in IE7. What version of IE is giving you this problem?
The following seems to be working for me. If anyone can improve it, please do! In Firefox, the popup just gets focus if it's open. In IE8, the "interface is unknown" error is caught and the popup is closed and reopened instead. In all cases, the window.open line loads the current 'page' into the popup.

```
var bigimg; // set this variable outside the function so the first time
            // the popup opens, bigimg is a defined variable and the if's don't choke.

function popupbigpic(page) {
    try {
        if (window.focus && bigimg)
            bigimg.focus();
    } catch (err) {
        if (bigimg)
            bigimg.close();
    }
    bigimg = window.open(page, "popup", "width=670,height=665,toolbar=no");
}
```
Need help with jquery/javascript popup window script - getting weird error in internet explorer
[ "", "javascript", "jquery", "internet-explorer", "popupwindow", "" ]
I have a `std::map` like this:

```
map<wstring, int> Scores;
```

It stores names of players and scores. When someone gets a score I would simply do:

```
Scores[wstrPlayerName]++;
```

When there is no element in the map with the key `wstrPlayerName` it will create one, but does it initialize to zero or null before the increment or is it left undefined? Should I test if the element exists every time before incrementing? I just wondered because I thought primitive-type things are always undefined when created. If I write something like:

```
int i;
i++;
```

The compiler warns me that i is undefined, and when I run the program it is usually not zero.
operator[] looks like this: ``` Value& map<Key, Value>::operator[](const Key& key); ``` If you call it with a **key** that's not yet in the map, it will default-construct a new instance of **Value**, put it in the map under **key** you passed in, and return a reference to it. In this case, you've got: ``` map<wstring,int> Scores; Scores[wstrPlayerName]++; ``` Value here is **int**, and ints are default-constructed as 0, as if you initialized them with **int()**. Other primitive types are initialized similarly (*e.g.*, **double()**, **long()**, **bool()**, etc.). In the end, your code puts a new pair (wstrPlayerName, 0) in the map, then returns a reference to the int, which you then increment. So, there's no need to test if the element exists yet if you want things to start from 0.
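To see this concretely, here is a minimal self-contained sketch (the `bumpScore` helper name is just for illustration):

```cpp
#include <map>
#include <string>

// Sketch: scores[name]++ on a missing key first value-initializes the
// mapped int to 0, then increments it, so no existence check is needed.
int bumpScore(std::map<std::wstring, int>& scores, const std::wstring& name) {
    return ++scores[name];
}
```

One thing to be aware of: merely *reading* `scores[name]` also inserts a zero entry as a side effect, so if you only want to test for existence without inserting, use `find()` instead of `operator[]`.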
This will value-initialize a new instance of `value`. For integers, value-initialization gives 0, so this works as intended.
Do STL maps initialize primitive types on insert?
[ "", "c++", "dictionary", "stl", "initialization", "primitive-types", "" ]
Say you have a 10 element indexed array and you want to place an element somewhere in the middle (say index 3). Then I want to have the rest of the array shift and thus be 11 elements long. Is there an easy way to do this? I am surprised there is no `put()` function or something. I know it would be easy enough to do this with a combination of `array_splice` and `array_merge` but I was just wondering if there an easier way.
[array\_splice() does this](https://www.php.net/manual/en/function.array-splice.php):

```
$input = array("red", "green", "blue", "yellow");
array_splice($input, 3, 0, "purple");
// $input is now array("red", "green", "blue", "purple", "yellow")
```
Unfortunately your best bet is what you described in your post.
What is an easy way to put an element in the middle of a PHP array?
[ "", "php", "arrays", "" ]
I'd like to convert a float to a whole number in JavaScript. Actually, I'd like to know how to do BOTH of the standard conversions: by truncating and by rounding. And efficiently, not via converting to a string and parsing.
```
var intvalue = Math.floor( floatvalue );
var intvalue = Math.ceil( floatvalue );
var intvalue = Math.round( floatvalue );

// `Math.trunc` was added in ECMAScript 6
var intvalue = Math.trunc( floatvalue );
```

[Math object reference](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math)

---

### Examples

**Positive**

```
// value=x         // x=5    5<x<5.5   5.5<=x<6
Math.floor(value)  //  5        5          5
Math.ceil(value)   //  5        6          6
Math.round(value)  //  5        5          6
Math.trunc(value)  //  5        5          5
parseInt(value)    //  5        5          5
~~value            //  5        5          5
value | 0          //  5        5          5
value >> 0         //  5        5          5
value >>> 0        //  5        5          5
value - value % 1  //  5        5          5
```

**Negative**

```
// value=x         // x=-5   -5>x>=-5.5   -5.5>x>-6
Math.floor(value)  //  -5        -6           -6
Math.ceil(value)   //  -5        -5           -5
Math.round(value)  //  -5        -5           -6
Math.trunc(value)  //  -5        -5           -5
parseInt(value)    //  -5        -5           -5
value | 0          //  -5        -5           -5
~~value            //  -5        -5           -5
value >> 0         //  -5        -5           -5
value >>> 0        //  4294967291  4294967291  4294967291
value - value % 1  //  -5        -5           -5
```

**Positive - Larger numbers**

```
// x = Number.MAX_SAFE_INTEGER/10 // = 900719925474099.1

// value=x         // x=900719925474099  x=900719925474099.4  x=900719925474099.5
Math.floor(value)  //  900719925474099    900719925474099      900719925474099
Math.ceil(value)   //  900719925474099    900719925474100      900719925474100
Math.round(value)  //  900719925474099    900719925474099      900719925474100
Math.trunc(value)  //  900719925474099    900719925474099      900719925474099
parseInt(value)    //  900719925474099    900719925474099      900719925474099
value | 0          //  858993459          858993459            858993459
~~value            //  858993459          858993459            858993459
value >> 0         //  858993459          858993459            858993459
value >>> 0        //  858993459          858993459            858993459
value - value % 1  //  900719925474099    900719925474099      900719925474099
```

**Negative - Larger numbers**

```
// x = Number.MAX_SAFE_INTEGER/10 * -1 // = -900719925474099.1

// value=x         // x=-900719925474099  x=-900719925474099.5  x=-900719925474099.6
Math.floor(value)  //  -900719925474099    -900719925474100      -900719925474100
Math.ceil(value)   //  -900719925474099    -900719925474099      -900719925474099
Math.round(value)  //  -900719925474099    -900719925474099      -900719925474100
Math.trunc(value)  //  -900719925474099    -900719925474099      -900719925474099
parseInt(value)    //  -900719925474099    -900719925474099      -900719925474099
value | 0          //  -858993459          -858993459            -858993459
~~value            //  -858993459          -858993459            -858993459
value >> 0         //  -858993459          -858993459            -858993459
value >>> 0        //  3435973837          3435973837            3435973837
value - value % 1  //  -900719925474099    -900719925474099      -900719925474099
```
## Bitwise OR operator A bitwise OR operator can be used to truncate floating-point values, and it works for positives as well as negatives: ``` function float2int (value) { return value | 0; } ``` Results ``` float2int(3.1) == 3 float2int(-3.1) == -3 float2int(3.9) == 3 float2int(-3.9) == -3 ``` ## Performance comparison? I've created a [JSPerf test](http://jsperf.com/float-to-int-conversion-comparison) that compares performance between: * `Math.floor(val)` * `val | 0` bitwise **OR** * `~~val` bitwise **NOT** * `parseInt(val)` that only works with positive numbers. In this case you're safe to use bitwise operations as well as the `Math.floor` function. But if you need your code to **work with positives as well as negatives**, then a bitwise operation is the fastest (OR being the preferred one). [This other JSPerf test](http://jsperf.com/truncating-decimals) compares the same where it's pretty obvious that because of the additional sign checking **Math is now the slowest** of the four. ## Note As stated in comments, bitwise operators operate on signed 32-bit integers, therefore large numbers will be converted, for example: ``` 1234567890 | 0 => 1234567890 12345678901 | 0 => -539222987 ```
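The 32-bit wrap-around described in the note above is easy to verify; a quick sketch runnable in Node:

```javascript
// Bitwise truncation only agrees with Math.trunc inside the signed
// 32-bit integer range; outside it, the value wraps around.
function truncBitwise(v) {
  return v | 0;
}

console.log(truncBitwise(3.9));         // 3
console.log(truncBitwise(-3.9));        // -3
console.log(truncBitwise(1234567890));  // 1234567890 (fits in 32 bits)
console.log(truncBitwise(12345678901)); // -539222987 (wrapped)
```

So the bitwise form is only a safe drop-in for `Math.trunc` when the inputs are known to fit in a signed 32-bit integer.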
How do I convert a float number to a whole number in JavaScript?
[ "javascript", "syntax" ]
How can I make an ORDER BY clause with a small LIMIT (i.e. 20 rows at a time) return quickly, when I can't use an index to satisfy the ordering of rows? Let's say I would like to retrieve a certain number of titles from a table 'node' (simplified below). I'm using MySQL by the way. ``` node_ID INT(11) NOT NULL auto_increment, node_title VARCHAR(127) NOT NULL, node_lastupdated INT(11) NOT NULL, node_created INT(11) NOT NULL ``` But I need to limit the rows returned to only those a particular user has access to. Many users have access to large numbers of nodes. I have this information pre-calculated in a big lookup table (an attempt to make things easier) where the primary key covers both columns and the presence of a row means that usergroup has access to that node: ``` viewpermission_nodeID INT(11) NOT NULL, viewpermission_usergroupID INT(11) NOT NULL ``` My query therefore contains something like ``` FROM node INNER JOIN viewpermission ON viewpermission_nodeID=node_ID AND viewpermission_usergroupID IN (<...usergroups of current user...>) ``` ... and I also use a GROUP BY or a DISTINCT so that a node is only returned once even if two of the user's 'usergroups' both have access to that node. My problem is that there seems to be no way for an ORDER BY clause which sorts results by created or last updated date to use an index, because the rows being returned depend on values in the other viewpermission table. Therefore MySQL would need to find *all* rows which match the criteria, then sort them all itself. If there are one million rows for a particular user, and we want to view, say, the latest 100 rows, or rows 100-200 when ordered by last update, the DB would need to figure out which one million rows the user can see and sort this whole result set itself before it can return those 100 rows, right? Is there any creative way to get around this? 
I've been thinking along the lines of: * Somehow add dates into the viewpermission lookup table so that I can build an index containing the dates as well as the permissions. It's a possibility I guess. **Edit: Simplified question** Perhaps I can simplify the question by rewriting it like this: Is there any way to rewrite this query or create an index for the following such that an index can be used to do the ordering (not just to select the rows)? ``` SELECT nodeid FROM lookup WHERE usergroup IN (2, 3) GROUP BY nodeid ``` An index on (usergroup) allows the WHERE part to be satisfied by an index, but the GROUP BY forces a temporary table and filesort on those rows. An index on (nodeid) does nothing for me, because the WHERE clause needs an index with usergroup as its first column. An index on (usergroup, nodeid) forces a temporary table and filesort because the GROUP BY is not the first column of the index that can vary. Any solutions?
Can I answer my own question? I believe I have found that the only way to do what I describe is for my lookup table to have rows for every possible combination of usergroups a person may want to be a member of. To pick a simplified example, instead of doing this: ``` SELECT id FROM ids WHERE groups IN(1,2) ORDER BY id ``` If you need to use the index both to select rows and to order them, you have to abstract that IN(1,2) so that it is constant rather than a range, ie: ``` SELECT id FROM ids WHERE grouplist='1,2' ORDER BY id ``` Of course instead of using the string '1,2' you could have a foreign key there, etc. The point being that you'd have to have a row not just for each group but for each combination of multiple groups. So, there is my answer. Anyway, for my application, I feel that maintaining a lookup for all possible combinations of usergroups for each node is not worth it. For my purposes, I predict that most nodes are visible to most users, so I feel that it is acceptable to simply to make the GROUP BY use the index, as the filtering doesn't need it so badly. In other words, the approach I'll take for my original query may be something like: ``` SELECT <fields> FROM node INNER JOIN viewpermission ON viewpermission_nodeID=node_ID AND viewpermission_usergroupID IN (<...usergroups of current user...>) FORCE INDEX(node_created_and_node_ID) GROUP BY node_created, node_ID ``` GROUP BY can use an index if it starts at the left most column of the index and it is in the first non-const non-system table to be processed. The join then deals with the entire list (which is already ordered), and only those not visible to the current user (which will be a small proportion) are removed by the INNER JOIN.
Copy the value you are going to order by into to viewpermission table and add it to your index. You could use a trigger to maintain that value from the other table.
Optimizing "ORDER BY" when the result set is very large and it can't be ordered by an index
[ "sql", "optimization", "limit", "sql-order-by" ]
I am trying to rotate a vector around a certain point on the vector(in C++): ``` 1 2 3 4 5 6 7 8 9 ``` rotated around the point (1,1) (which is the "5") 90 degrees would result in: ``` 7 4 1 8 5 2 9 6 3 ``` Right now I am using: ``` x = (x * cos(90)) - (y * sin(90)) y = (y * cos(90)) + (x * sin(90)) ``` But I don't want it rotated around (0,0)
The answer depends on your coordinate system. ## Computer graphics coordinate system, with `(0,0)` at *Top* left If you are using a computer graphics vector implementation where `(0,0)` is the **top** left corner and you are rotating around the point `(dx, dy)`, then the rotation calculation, including the translation back into the original coordinate system, would be: ``` x_rotated = ((x - dx) * cos(angle)) - ((dy - y) * sin(angle)) + dx y_rotated = dy - ((dy - y) * cos(angle)) + ((x - dx) * sin(angle)) ``` ## Physics/Maths coordinate system, with `(0,0)` at *Bottom* left If you are using a more traditional real world coordinate system, where `(0,0)` is the **bottom** left corner, then the rotation calculation, around the point `(dx, dy)` including the translation back into the original coordinate system, would be: ``` x_rotated = ((x - dx) * cos(angle)) - ((y - dy) * sin(angle)) + dx y_rotated = ((x - dx) * sin(angle)) + ((y - dy) * cos(angle)) + dy ``` Thanks to [mmx](https://stackoverflow.com/users/33708/mmx) for their comment on [Pesto](https://stackoverflow.com/a/620769/42473)'s post, and to [SkeletorFromEterenia](https://stackoverflow.com/a/29075925/42473) for highlighting an error in my implementation.
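A direct C++ translation of the maths-convention formulas above (a sketch; the `Point` type and function name are illustrative, and note that `sin`/`cos` in `<cmath>` take radians, so the `cos(90)` in the question would not be a quarter turn):

```cpp
#include <cmath>

struct Point { double x, y; };

// Rotate p by `angle` radians counter-clockwise around `pivot`, using the
// convention with (0,0) at the bottom left: translate the pivot to the
// origin, rotate, then translate back.
Point rotateAround(Point p, Point pivot, double angle) {
    const double s = std::sin(angle);
    const double c = std::cos(angle);
    const double tx = p.x - pivot.x;
    const double ty = p.y - pivot.y;
    return { tx * c - ty * s + pivot.x,
             tx * s + ty * c + pivot.y };
}
```

For the question's example, rotating the position of the "6" at (2, 1) by 90 degrees around the "5" at (1, 1) yields (1, 2), the position of the "2".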
The solution is to translate the vector to a coordinate system in which the center of rotation is (0,0). Apply the rotation matrix and translate the vector back to the original coordinate system. ``` dx = x of rotation center dy = y of rotation center V2 = V - [dx, dy, 0] V3 = V2 * rotation matrix Result = V3 + [dx, dy, 0] ```
C++: Rotating a vector around a certain point
[ "c++", "vector", "rotation" ]
I've got some JSON data that is giving me a list of languages with info like lat/lng, etc. It also contains a group value that I'm using for icons--and I want to build a legend with it. The JSON looks something like this: ``` {"markers":[ {"language":"Hungarian","group":"a", "value":"yes"}, {"language":"English", "group":"a", "value":"yes"}, {"language":"Ewe", "group":"b", "value":"no"}, {"language":"French", "group":"c", "value":"NA"} ]} ``` And I want to "filter" it to end up like this: ``` {"markers":[ {"group":"a", "value":"yes"}, {"group":"b", "value":"no"}, {"group":"c", "value":"NA"} ]} ``` Right now I've got this, using jQuery to create my legend..but of course it's pulling in all values: ``` $.getJSON("http://127.0.0.1:8000/dbMap/map.json", function(json){ $.each(json.markers, function(i, language){ $('<p>').html('<img src="http://mysite/group' + language.group + '.png\" />' + language.value).appendTo('#legend-contents'); }); }); ``` How can I only grab the unique name/value pairs in the entire JSON object, for a given pair?
I'd transform the array of markers into an object of key/value pairs and then loop over that object's properties. ``` var markers = [{"language":"Hungarian","group":"a", "value":"yes"}, {"language":"English", "group":"a", "value":"yes"}, {"language":"Ewe", "group":"b", "value":"no"}, {"language":"French", "group":"c", "value":"NA"}]; var uniqueGroups = {}; $.each(markers, function() { uniqueGroups[this.group] = this.value; }); ``` then ``` $.each(uniqueGroups, function(g) { $('<p>').html('<img src="http://mysite/group' + g + '.png\" />' + this).appendTo('#legend-contents'); }); ``` or ``` for(var g in uniqueGroups) { $('<p>').html('<img src="http://mysite/group' + g + '.png\" />' + uniqueGroups[g]).appendTo('#legend-contents'); } ``` This code sample overwrites each group's value with the last value seen in the loop. If you want to keep the first value instead, you will need a conditional check to see if the key already exists.
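The same idea also works without jQuery; here is a plain-JavaScript sketch that keeps the *first* value seen for each group (the function name is illustrative):

```javascript
// Reduce the markers array to one {group, value} entry per distinct group,
// keeping the first value encountered for each group key.
function uniqueByGroup(markers) {
  var seen = {};
  var result = [];
  for (var i = 0; i < markers.length; i++) {
    var m = markers[i];
    if (!seen.hasOwnProperty(m.group)) {
      seen[m.group] = true;
      result.push({ group: m.group, value: m.value });
    }
  }
  return result;
}
```

Each returned entry can then be fed to the legend-building loop in place of the raw markers.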
How about something more generic? ``` function getDistinct(o, attr) { var answer = {}; $.each(o, function(index, record) { answer[index[attr]] = answer[index[attr]] || []; answer[index[attr]].push(record); }); return answer; //return an object that has an entry for each unique value of attr in o as key, values will be an array of all the records that had this particular attr. } ``` Not only such a function would return all the distinct values you specify but it will also group them if you need to access them. In your sample you would use: ``` $.each(getDistinct(markers, "group"), function(groupName, recordArray) { var firstRecord = recordArray[0]; $('<p>').html('<img src="http://mysite/group' + groupName+ '.png\" />' + firstRecord.value).appendTo('#legend-contents'); } ```
How can I "filter" JSON for unique key name/value pairs?
[ "javascript", "jquery", "arrays", "json", "object" ]
I understand that there are big differences in data-store, but surely since django is bundled and it abstracts data-store away from Satchmo, something can be done? Truth is that I am not a Python guy, been mostly Java/PHP thus far, but I am willing to learn. Plus, if this is not possible today, lets band together and form a new Open Source project to "extend" satchmo or perhaps branch it, for compatibility?
Possible if: * Someone writes a generic ORM to Bigtable mapper. Most probably, [Appengine Patch Guys](http://code.google.com/appengine/articles/app-engine-patch.html) * Someone rewrites the views and models of Satchmo to remove existing ORM queries and use the minimal functionality of the ORM provided by the patch project, should be either you or the Satchmo guys. * Someone hacks around a lot using the [django helper project](http://code.google.com/appengine/articles/appengine_helper_for_django.html), can only be helper project guys.
You can't. There are a lot of dependencies in Satchmo that you aren't allowed to install on AppEngine. See this thread as well: <http://groups.google.com/group/satchmo-users/browse_thread/thread/509265ccd5f5fc1e?pli=1>
How to make Satchmo work in Google App Engine
[ "python", "google-app-engine", "e-commerce", "satchmo" ]
I have two threads that use an event for synchronization. In each thread they use the same call: ``` ::CreateEvent( NULL,TRUE,FALSE,tcEventName ) ``` The producer thread is the one that makes the call first, while the consumer thread makes the call last, so it's technically opening, not creating the event... I assume. But, when SetEvent is called in the producer thread, the same event never gets triggered in the consumer thread (I'm using WaitForMultipleObjects()) Is there a tool that can tell me if the event is actually getting triggered properly. Also, when I call CreateEvent() in each thread, the returned handle value is different for each... should they be the same? Is there a better way to do this that will ensure it will work? This is on Windows XP using Visual Studio 2005 **Edit:** I did some more checking and found that calling CreateEvent in the producer thread (the second one to call CreateEvent) sets LastError to 183 (ERROR\_ALREADY\_EXISTS), however CreateEvent still returns a handle to the event...what gives? How can it error as already existing but still return a handle? Or is it supposed to do that?
According to the MSDN documentation for [CreateEvent](http://msdn.microsoft.com/en-us/library/ms682396(VS.85).aspx), > If the function succeeds, the return value is a handle to the event object. If the named event object existed before the function call, the function returns a handle to the existing object and GetLastError returns ERROR\_ALREADY\_EXISTS. Based on your description, I don't see a problem with what you're doing. There's nothing I see to indicate you're doing something incorrectly. For me, though, I usually create the event once using CreateEvent() and then pass the handle to the thread(s) that I want to be signaled by that event. But there is nothing technically wrong with your approach. You do realize that WaitForMultipleObjects() returns the index of the first signaled handle in the handles array, right? For example, if your named event is the second one in the list, but the first handle is signaled the vast majority of the time (e.g., by a fast-acting thread or a manual reset event that is signaled but never reset), WaitForMultipleObjects() will always return WAIT\_OBJECT\_0. In other words, your consumer thread will never see the fact that your named event is signaled because the first handle is "always" signaled. If this is the case, put your named event first in the list. You don't happen to have the bWaitAll parameter to WaitForMultipleObjects() set to TRUE, do you? If you do, then all of the handles in the handles array have be signaled before the function returns. Who calls ResetEvent() for your named event? It should be the consumer. It's not accidentally being called by some third-party thread, is it? These are simply some things to double-check. If the event still doesn't behave as you expect, replace the WaitForMultipleObjects() with WaitForSingleObject() to see if your named event properly signals the consumer thread. Hope this helps.
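Since both threads here live in one process, the "create once and share" advice above can also be sketched with portable C++11 primitives instead of a named Win32 event. This is an illustrative analogue of a manual-reset event, not the Win32 API itself:

```cpp
#include <condition_variable>
#include <mutex>

// Minimal manual-reset "event" built on C++11 primitives. set() plays the
// role of SetEvent(); wait() plays the role of WaitForSingleObject(h, INFINITE).
class Event {
public:
    void set() {
        std::lock_guard<std::mutex> lock(m_);
        signaled_ = true;
        cv_.notify_all();       // wake every waiting consumer
    }
    void reset() {              // analogue of ResetEvent()
        std::lock_guard<std::mutex> lock(m_);
        signaled_ = false;
    }
    void wait() {
        std::unique_lock<std::mutex> lock(m_);
        // Predicate form guards against spurious wake-ups and lost signals:
        // if set() ran first, wait() returns immediately.
        cv_.wait(lock, [this] { return signaled_; });
    }
    bool is_set() {
        std::lock_guard<std::mutex> lock(m_);
        return signaled_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};
```

Both the producer and the consumer hold a reference to the same `Event` object — the equivalent of creating the event once and passing the handle around, rather than calling `CreateEvent` in each thread.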
If you just use several threads in one process, why don't you pass the event handle from one to another? As far as I know, named kernel objects are created so that they can be shared between processes. Also, you can try using the OpenEvent function to open an already created event. This might give you some ideas.
Creating/Opening Events in C++ and checking if they are fired
[ "c++", "windows", "multithreading", "events" ]
I often need to execute custom sql queries in django, and manually converting query results into objects every time is kinda painful. I wonder how fellow Slackers deal with this. Maybe someone had written some kind of a library to help dealing with custom SQL in Django?
Since the issue is "manually converting query results into objects," the simplest solution is often to see if your custom SQL can fit into an ORM .extra() call rather than being a pure-SQL query. Often it can, and then you let the ORM do all the work of building up objects as usual.
Not exactly sure what you're looking for, but you can always add a method onto a model to execute custom SQL per [the docs](http://docs.djangoproject.com/en/dev/topics/db/sql/#topics-db-sql "the docs"): ``` def my_custom_sql(self): from django.db import connection cursor = connection.cursor() cursor.execute("SELECT foo FROM bar WHERE baz = %s", [self.baz]) row = cursor.fetchone() return row ``` For something more generic, create an [abstract base model](http://docs.djangoproject.com/en/dev/topics/db/models/#id6) that defines a function like that with an "sql" parameter.
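Since the original pain point is "manually converting query results into objects every time," a small framework-free helper can turn any cursor's rows into named tuples. A sketch using the standard-library `sqlite3` for illustration — the same pattern works with a `django.db.connection` cursor, and the table/column names here are made up:

```python
from collections import namedtuple

def fetch_namedtuples(cursor):
    """Return the rows of an already-executed cursor as named tuples,
    with fields taken from the column names in cursor.description."""
    Row = namedtuple("Row", [col[0] for col in cursor.description])
    return [Row(*values) for values in cursor.fetchall()]
```

After `cursor.execute("SELECT foo, baz FROM bar")`, each returned row then supports `row.foo` and `row.baz` instead of positional indexing.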
Tools to ease executing raw SQL with Django ORM
[ "python", "django", "orm" ]
I just moved a project from localhost over to my remote server, and noticed that some of my scripts stopped working. The most important was one that relied upon `file_get_contents()` to fetch JSON values from another script. PHP Version is 5.2.4 `allow_url_fopen` is ON > Warning: `file_get_contents()` `[function.file-get-contents]`: `php_network_getaddresses`: `getaddrinfo` failed: Name or service not known in `/var/www/html/2009/functions/functions.products.php` on `line 5` > > Warning: `file_get_contents(http://data.example.com/new-data.php) [function.file-get-contents]`: failed to open stream: `Success in /var/www/html/2009/functions/functions.products.php` on `line 5` The script is being run from: `http://www.example.com` The location passed into the function is `http://data.example.com/new-data.php` **Note**: Same domain name, but two different servers. ``` function getData() { $location = "http://data.example.com/new-data.php"; $contents = file_get_contents($location); $jsonVars = json_decode($contents); return $jsonVars; } ```
> Name or service not known DNS is broken. Can you `ping data.example.com` from a shell on the machine (assuming you have one)? Try replacing `data.example.com` with a fixed IP address for now.
Also you can try curl: ``` $curl = curl_init(); curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); curl_setopt($curl, CURLOPT_URL, 'http://url.url'); $result = curl_exec($curl); curl_close($curl); ``` And you get what you want in `$result`.
PHP : file_get_contents($loc) fails
[ "php" ]
I’m attempting to convert PDF files into PNGs. It works great from the command line (I do have GhostScript 8.64 installed). But from PHP I’m having a problem: code: ``` $im = new Imagick($pdf_file); // this is where it throws the exception below ``` output: ``` Fatal error: Uncaught exception ‘ImagickException’ with message ‘Postscript delegate failed `23_1235606503.pdf’: No such file or directory @ pdf.c/ReadPDFImage/612′ in get_thumbnail.php:93 Stack trace: \#0 get_thumbnail.php(93): Imagick->__construct(’…’) ``` etc. etc. I'm not sure what I'm doing wrong here, but I suspect it has something to do with my server configuration somewhere. I'm running: Apache 2.2.11 PHP 5.2.8 ImageMagick 6.4.8-9 GhostScript 8.64
Finally figured this out. The GhostScript executable (`gs`) wasn't in Apache's environment path. It was in `/usr/local/bin`. Though I tried several ways to add `/usr/local/bin` to the path, I did not succeed. I ended up putting a symlink for `gs` in the `/usr/bin` directory. Now everything works perfectly.
I don't have the "reputation" on Stackoverflow to add a comment inline above, but there is an extra step I had to perform to get this working on my Mac with the latest Sierra update. When you enter the command: ``` sudo ln -s /usr/local/bin/gs /usr/bin/gs ``` On the Mac, you may get the error, "Operation not Permitted". Apparently Apple made a change so that the "bin" directory is not editable unless you disable SIP (System Integrity Protection). So here are the steps to do that: 1. Reboot your Mac into Recovery Mode by restarting your computer and holding down "Command + R" until the Apple logo appears on your screen. 2. Click Utilities > Terminal 3. In the Terminal window, type in `csrutil disable` and press "Enter" 4. Restart your Mac. I just went through these steps and now my Ghostscript works great and I successfully converted a PDF to JPG.
ImageMagick/Imagick convert PDF to JPG using native PHP API
[ "php", "image", "pdf", "pdf-generation", "ghostscript" ]
I've got a couple of solutions that represent a framework of code that I've built up at work. One solution called 'Framework' and another called 'Extensions'. The reason I split them is that the 'Extensions' solution contain projects that consist of extension methods, and the projects are organized so that the resulting assemblies mirror the .NET assemblies. I created a merge module project for 'Extensions', and an installer that uses it. It's all fine and dandy - but now I want to create an installer for 'Framework' that also uses that same merge module. Consequently, I'd like to modify the 'Extensions' installer to copy the 'Extensions' merge module file to C`:\Program Files\Common Files\Merge Modules` so that the 'Framework' installer will have a well-known path by which to reference the merge module for 'Extensions', rather than referencing whatever obscure path my VS solution lives in (the reason being that the Framework will at times be built on different machines on which paths to VS projects may vary). The MSDN documentation addresses this only briefly, and merely says that merge modules cannot be installed to the file system, but can only be "consumed". Fail. Does anyone know a way around this? Custom actions? How does the Visual Studio installer install it's merge modules?
Short answer is: there is no way to do this, short of writing your own installation framework.
When you create the merge module project, the file system editor contains a folder called "Module Retargetable Folder" (this is the default name and can be changed; additional ones can also be added). Once the .msm has been built, go to the main setup project and add the merge module to it. If you then select the .msm in the Solution Explorer and look at the Properties window, there should be an entry called "MergeModuleProperties"; expand this and you should see all the retargetable folders listed, and you can then point them to the correct place.
Installing a merge module as well as "consuming"
[ "c#", "visual-studio", "installation", "merge-module" ]
Is C# a high level language? I see it as more like medium level, but I am still unsure about this. Would you consider it as high level as some of the popular scripting languages? Or does it accommodate more than one level?
From what I have read from various sources, **C#** is considered a **3GL** language. A higher level language 4GL would look more like natural language and a 5GL language is about constraints based programming (artificial intelligence) where you define the constraints of a problem but leave it up to the computer to solve it. If you judge the **C#** language by itself (not considering the .Net framework), it is not a huge departure from other C based languages so in that sense the 3GL definition fits. Here is a link to wikipedia for reference: <http://en.wikipedia.org/wiki/Third-generation_programming_language>
Yes, it is high-level. Very. You are the only one who recognizes 'medium-level language' as a term. :)
Is C# a high level language?
[ "c#", "programming-languages" ]
I don't think that was the clearest question, but an example should make it a little clearer. I have a table filled with movie names, some of which contain apostrophes. I have a search box which is used to find movies. If I perform searches via ``` mov_title = '$search_keywords' ``` it all works, but this method will not yield any results for partial searches, so I have to use this ``` mov_title LIKE '%$search_keywords%' ``` This method works fine for titles that are A-Za-z0-9, but if a title has an apostrophe, it's not able to find the movie, even if I do an exact match. Before the titles are stored in the DB, I put them through this: ``` $search_keywords = htmlspecialchars(mysql_escape_string($_GET["search_keywords"])); ``` So in the DB, there is a backslash before every single apostrophe. The only way to match a movie title with an apostrophe is to physically put a backslash in front of the apostrophe, in the search box. This seems so trivial, and I'm sure the solution is painfully obvious, but I'm just not seeing it.
Use `mysql_real_escape_string()`, and do not use `htmlspecialchars()`. The latter is not for database escaping, it's for HTML production.
This only happens because you escaped the data for html output before doing the output! You should only do it *right* before doing the output, i.e.: ``` <li><?php echo htmlspecialchars($some_var); ?></li> ``` Unescape the values in your database and change the application to escape only on output. You currently have no other way than also doing `htmlspecialchars(mysql_real_escape_string())` on the $search\_string. Even if it made sense to escape for HTML already on inserting into the database, `mysql_real_escape_string()` would be the outer function and not the inner function.
How do you encode an apostrophe so that it's searchable in mysql?
[ "php", "mysql" ]
I have the following class: ``` class SampleClass { private ArrayList mMyList; SampleClass() { // Initialize mMyList } public ArrayList MyList { get { return mMyList;} } } ``` I want users to be able to get mMyList, which is why I exposed the "get" via a property; however, I don't want changes they make to the object (i.e. MyList.Add(new Class());) to make their way back into my class. I guess I can return a copy of the object, but that may be slow, and I'm looking for a way that will provide a compile-time error informing the user that they shouldn't expect to be able to modify the returned value from the property. Is this possible?
With an ArrayList you are fairly limited because there is no readonly non-generic collection class in the BCL. The quick and dirty solution is to return a type of IEnumerable. ``` public IEnumerable MyList { get { return mMyList;} } ``` This won't actually prevent someone from casting to ArrayList, but it won't allow edits by default either. You can return an effectively readonly list by calling ArrayList.ReadOnly. However, its return type is an ArrayList, so the user would still be able to compile with .Add, but it would produce a runtime error.
Use the [ArrayList.ReadOnly()](http://msdn.microsoft.com/en-us/library/system.collections.arraylist.readonly.aspx) method to construct and return a read-only wrapper around the list. it won't copy the list, but simply make a read-only wrapper around it. In order to get compile-time checking, you probably want to return the read-only wrapper as IEnumerable as @Jared suggests. ``` public IEnumerable MyList { get { return ArrayList.ReadOnly(mMyList); } } ``` > A collection that is read-only is > simply a collection with a wrapper > that prevents modifying the > collection. If changes are made to the > underlying collection, the read-only > collection reflects those changes. > > This method is an O(1) operation. [Reference](http://msdn.microsoft.com/en-us/library/e2abk8wd.aspx)
"Read only" Property Accessor in C#
[ "c#", ".net", "properties", "readonly", "accessor" ]
Does anyone have any simple code samples for Django + [SWFUpload](http://swfupload.org/)? I have it working perfectly in my PHP application but Django is giving me headaches.
Unfortunately I can't give you any very detailed code samples, but I have quite a bit of experience with working with SWFUpload + Django (for a photo sharing site I work on). Anyway, here are a few pointers that will hopefully help you on your quest for DjSWF happiness :) 1. You'll want to use the cookies plugin (if of course you are using some sort of session-based authentication [like `django.contrib.auth`, and care who uploaded what). The cookies plugin sends the data from cookies as POST, so you'll have to find some way of getting this back into `request.COOKIES` (`process_request` middleware that looks for a `settings.SESSION_COOKIE_NAME` in `request.POST` on specific URLs and dumps it into `request.COOKIES` works nicely for this :) 2. Also, remember that you *must* return something in the response body for SWFUpload to recognize it as a successful upload attempt. I believe this has changed in the latest beta of SWFUpload, but anyway it's advisable just to stick something in there like 'ok'. For failures, make use of something like `HttpResponseBadRequest` or the like. 3. Lastly, in case you're having trouble finding them, the uploaded file is in `request.FILES` :) If you have anything perplexing I haven't covered, feel free to post something more detailed and I'll be happy to help.
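Point 1 above — moving the cookie that SWFUpload's cookies plugin sends as POST data back into `request.COOKIES` — can be sketched as a tiny middleware. The class name, URL prefix, and default cookie name below are illustrative assumptions, not part of any official API:

```python
SESSION_COOKIE_NAME = "sessionid"  # Django's default; mirror settings.SESSION_COOKIE_NAME

class SwfUploadCookieMiddleware:
    """On SWFUpload URLs, copy the session cookie that Flash sent as POST
    data back into request.COOKIES so the auth machinery can find it."""

    UPLOAD_PREFIXES = ("/upload/",)  # hypothetical upload URL prefix(es)

    def process_request(self, request):
        if (request.path.startswith(self.UPLOAD_PREFIXES)
                and SESSION_COOKIE_NAME in request.POST):
            request.COOKIES[SESSION_COOKIE_NAME] = request.POST[SESSION_COOKIE_NAME]
        return None  # continue normal request processing
```

Placed before the session middleware in the middleware list, this makes the Flash upload request look like an ordinary authenticated browser request.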
Django version of the samples for SWFUpload: <http://github.com/naltimari/django-swfupload-samples/tree/master> So long uploadify. Great idea but it is just buggy, especially on Windows.
Code samples for Django + SWFUpload?
[ "python", "django", "swfupload" ]
I'm writing a biorhythm app. To test it i have a form with a Button and a PictureBox. When I click on the button i do ``` myPictureBox.Image = GetBiorhythm2(); ``` Which runs ok for the first time, **but on the second click** it causes the following exception: ``` System.ArgumentException: Parameter is not valid. at System.Drawing.Graphics.CheckErrorStatus at System.Drawing.Graphics.FillEllipse at Larifari.Biorhythm.Biorhythm.GetBiorhythm2 in c:\delo\Horoskop\Biorhythm.cs:line 157 at Larifari.test.Button1Click in c:\delo\Horoskop\test.Designer.cs:line 169 at System.Windows.Forms.Control.OnClick at System.Windows.Forms.Button.OnClick at System.Windows.Forms.Button.OnMouseUp at System.Windows.Forms.Control.WmMouseUp at System.Windows.Forms.Control.WndProc at System.Windows.Forms.ButtonBase.WndProc at System.Windows.Forms.Button.WndProc at ControlNativeWindow.OnMessage at ControlNativeWindow.WndProc at System.Windows.Forms.NativeWindow.DebuggableCallback at ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop at ThreadContext.RunMessageLoopInner at ThreadContext.RunMessageLoop at System.Windows.Forms.Application.Run at Larifari.test.Main in c:\delo\Horoskop\test.cs:line 20 ``` the cut-down function which causes the error is : ``` public static Image GetBiorhythm2() { Bitmap bmp = new Bitmap(600, 300); Image img = bmp; Graphics g = Graphics.FromImage(img); Brush brush = Brushes.Black; g.FillEllipse(brush, 3, 3, 2, 2); //Here the exception is thrown on the second call to the function brush.Dispose(); //If i comment this out, it works ok. return img; } ``` if I comment-out the brush disposal it works ok, but I am not happy with that and wish to find an alternative solution. Can you help me please ?
It looks like you're trying to dispose of a static, which causes some problems next time it's used: ``` Brush brush = Brushes.Black; g.FillEllipse(brush, 3, 3, 2, 2); //Here the exception is thrown on the second call to the function brush.Dispose(); //If i comment this out, it works ok. ``` When you set brush = Brushes.Black, you're actually setting brush as a reference (or pointer) to the static Brushes.Black. By disposing it, you're effectively writing: ``` Brushes.Black.dispose(); ``` When you come back around to use the black brush again, the runtime says you can't because it's already been disposed of, and isn't a valid argument to g.FillEllipse() A better way to write this might be just simply: ``` g.FillEllipse(Brushes.Black, 3, 3, 2, 2); ``` Or, if you want to be really complex about it: ``` Brush brush = Brushes.Black.Clone(); g.FillEllipse( brush, 3, 3, 2, 2 ); brush.Dispose(); ``` Or if you don't care about things looking wrong, just comment out the brush.Dispose(); line in your original code.
Brushes.Black is a system resource, and is not intended for you to dispose. The runtime manages the brushes in the Brushes class, the Pens, and other such objects. It creates and disposes those objects as required, keeping frequently-used items alive so that it doesn't have to continually create and destroy them.

The documentation for the Brushes class says:

> The Brushes class contains static read-only properties that return a Brush object of the color indicated by the property name. You typically do not have to explicitly dispose of the brush returned by a property in this class, unless it is used to construct a new brush.

In short, do not call Dispose on the system-supplied objects.
Disposing a static brush
[ "c#", "asp.net", "dispose", "system.drawing", "brush" ]
In order to be able to properly debug release builds a PDB file is needed. Can the PDB file become less usable when the compiler uses different kinds of optimizations (FPO, PGO, intrinsic functions, inlining etc.)? If so, is the effect of optimization severe or merely cause adjacent lines of code to get mixed up? *(I'm using VC2005, and will always choose debugability over optimized performance - but the question is general)*
Yes, optimized code is less debuggable. Not only is some information missing, some information will be very misleading.

The biggest issue in my opinion is local variables. The compiler may use the same stack address or register for multiple variables throughout a function. As other posters mentioned, sometimes even figuring out what the "this" pointer is can take a bit of time. When debugging optimized code you may see the current line jumping around as you single-step, since the compiler reorganized the generated code. If you use PGO, this jumping around will probably just get worse.

FPO shouldn't affect debuggability too much provided you have a PDB, since the PDB contains all the info necessary to unwind the stack for FPO frames. FPO can be a problem when using tools that need to take stack traces without symbols. For many projects, the perf benefit of FPO nowadays doesn't outweigh the hit for diagnosability; for this reason, MS decided not to build Windows Vista with the FPO optimization (<http://blogs.msdn.com/larryosterman/archive/2007/03/12/fpo.aspx>).

I prefer to debug unoptimized code, but this isn't always possible - some problems only repro with optimized code, customer crash dumps are from the released build, and getting a debug private deployed sometimes isn't possible. Often when debugging optimized code, I use the disassembly view - disassembly never lies.

This all applies to windbg since I do all native code debugging with it. Visual Studio's debugger might handle some of these cases better.
Yes. It can be severe at times, although that's usually more the result of inlining or reordering of code. Local variables also may not be accurately displayed in the watch window as they may only exist in registers and may not be correctly displayed when you switch stack frames.
Can optimizations affect the ability to debug a VC++ app using its PDB?
[ "c++", "visual-studio", "optimization", "pdb-files" ]
I'm not even sure this is possible to do efficiently, but here's my problem: I'm writing what's essentially a blog engine where a blog post and all replies to each blog post can tagged. So, I could have a blog post tagged "stack", and a reply to that post tagged "overflow". Right now, I'm trying to generate a list of the most popular tags when a user hits a special page in my application. It should return not only the n most popular tags by descending number of blog posts, but also the number of blog posts associated with each tag, **even if a reply in that post but not the post itself is tagged with that tag**. So, if BlogPost A is tagged with "foo", and a reply in BlogPost B is tagged with "foo", the popular tag summary should count that as two blog posts in total, even though BlogPost B is not technically tagged. Here's a description of the tables/fields that might be relevant: ``` BlogPosts | id # Primary key for all tables, Rails-style BlogComments | id | blog_post_id Tags | id | name # 'foo' Taggings | id | tag_id | blog_post_id | blog_comment_id ``` There's some denormalization in Taggings for the sake of convenience. If someone tags BlogPost, it fills in the blog\_post\_id field, and blog\_comment\_id remains NULL. If someone tags a comment to a post, it fills in both blog\_post\_id and blog\_comment\_id. Is there some way to return a sorted list of the most popular tags in one or several SQL queries? I'm thinking I might need to just run a computationally-expensive script every few minutes on a cron job and render the cached output instead of running this every time somebody hits the page... Thanks!
So far I see nothing complicated in your request: ``` SELECT tag_id, COUNT(blog_post_id) + COUNT(blog_comment_id) tag_count FROM Taggings GROUP BY tag_id ORDER BY COUNT(blog_post_id) + COUNT(blog_comment_id) DESC ``` If you want to count "affected blog posts" only, I think that's the way: ``` SELECT t.id tag_id, t.name tag_name, COUNT(DISTINCT COALESCE(x.blog_post_id, c.blog_post_id)) tag_count FROM Tags t INNER JOIN Taggings x ON x.tag_id = t.id LEFT JOIN BlogComments c ON c.id = x.blog_comment_id GROUP BY t.id, t.name ORDER BY COUNT(DISTINCT COALESCE(x.blog_post_id, c.blog_post_id)) DESC ```
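The COALESCE-based query above can be sanity-checked against the schema from the question. Here is a small, self-contained sketch using Python's built-in sqlite3 module (the id values and sample rows are made up for the demonstration):

```python
import sqlite3

# Build the schema from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE BlogPosts    (id INTEGER PRIMARY KEY);
CREATE TABLE BlogComments (id INTEGER PRIMARY KEY, blog_post_id INTEGER);
CREATE TABLE Tags         (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Taggings     (id INTEGER PRIMARY KEY, tag_id INTEGER,
                           blog_post_id INTEGER, blog_comment_id INTEGER);

INSERT INTO BlogPosts VALUES (1), (2);
INSERT INTO BlogComments VALUES (10, 2);           -- a comment on post 2
INSERT INTO Tags VALUES (100, 'foo');
INSERT INTO Taggings VALUES (1000, 100, 1, NULL);  -- post 1 tagged directly
INSERT INTO Taggings VALUES (1001, 100, 2, 10);    -- comment on post 2 tagged
""")

# The second query from the answer: count distinct affected posts per tag.
rows = cur.execute("""
    SELECT t.id, t.name,
           COUNT(DISTINCT COALESCE(x.blog_post_id, c.blog_post_id)) AS tag_count
    FROM Tags t
    INNER JOIN Taggings x ON x.tag_id = t.id
    LEFT JOIN BlogComments c ON c.id = x.blog_comment_id
    GROUP BY t.id, t.name
    ORDER BY tag_count DESC
""").fetchall()

print(rows)  # 'foo' touches posts 1 and 2, so its count is 2
```

As expected, 'foo' is counted once per affected post even though post 2 is only tagged through a comment.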
I may be missing something obvious, but since you have "If someone tags a comment to a post, it fills in both blog\_post\_id and blog\_comment\_id", the following SQL should do the trick. *I'm assuming here that Tags.name will be unique.*

```
SELECT MIN(ts.tag_id), t.name, COUNT(ts.blog_post_id) as rank
FROM Taggings ts
INNER JOIN Tags t ON ts.tag_id = t.id
GROUP BY t.name
ORDER BY COUNT(ts.blog_post_id) DESC
```

Hope that's what you're looking for.
A difficult SQL query: tag popularity for models with complex associations
[ "sql", "tags" ]
There is a try-catch thing about functions, which I think sometimes may be quite useful: ``` bool function() try { //do something } catch(exception_type & t) { //do something } ``` So the first part of the question: is this style considered bad in general case? And the concrete example I used this approach in: We had project with quite a lot of code in c and c++. And there we had custom exception types (not std::exception derived). I needed to integrate XML library and cast all exception to our types. So, basically, the last step was to catch all exceptions from XML library and convert them. Function before: ``` bool readEntity(...) { while(...) { if(...) { //lot's of code... } } } ``` after: ``` bool readEntity(...) try { while(...) { if(...) { //lot's of code... } } } catch(XMLBaseException & ex) { //create our exception and throw } ``` My thoughts went something like this: I clearly state my intentions to convert all exception derived from one type into custom type AND we keep our screen without horizontal scroll bar (cause horizontal scroll bars are bad). Well, I actually was quite criticized for this approach as for non-clear one during code review. So I'd like to hear you thoughts. **UPDATE**: just to be clear: refactoring the function wasn't an option. And actually it was good written one.
Really, the only reason to use function-level try blocks is for constructors; otherwise it's a somewhat obscure feature that doesn't buy you that much. It's just as easy to do it this way:

```
bool readEntity(...)
{
    try
    {
        while(...)
        {
            if(...)
            {
                //lots of code...
            }
        }
    }
    catch(XMLBaseException & ex)
    {
        //create our exception and throw
    }
}
```

If you are having trouble with horizontal scrolling then the thing to do is to split up your code. try/catches are complexity and this should be represented in the nesting level, not hidden.

In constructors, this is a different issue: there is no other way to catch exceptions thrown from an initializer list (note that the `try` goes before the initializer list):

```
SomeClass::SomeClass(parameter p1, parameter p2)
try
    : Member1(p1), Member2(p2)
{
}
catch(Exception &ex)
{
    // handle and rethrow
}
```

Of course, if you have an exception mid-construction, there's not likely much you can do to recover except log and rethrow (it's going to get rethrown anyway in the constructor case). Your object isn't completely constructed yet and there's nothing you can really do with it. The only thing that you can trust to be valid are the parameters (although if the initialization failed, that will likely be due to bad parameters).

See this [GOTW](http://www.gotw.ca/gotw/066.htm) for a discussion on this.
Just to be clear: rethrowing and repackaging exceptions isn't poor practice, it's a Good Thing since it minimizes exposure of external dependencies. However, function-level try-catch is intended for constructor initialization. Saving a couple of horizontal spaces in your code isn't worth using a relatively obscure language feature for. Better to refactor the nested code if you really are indenting that far!
Function-wide exception handling in C++ - is it bad style?
[ "c++", "exception", "coding-style" ]
From what I have seen and read on blogs, PyPy is a very ambitious project. What are some advantages it will bring to the table over its siblings (CPython, Jython, and IronPython)? Is it speed, cross-platform compatibility (including mobile platforms), the ability to use C extensions without the GIL, or is this more of a technical exercise on what can be done?
PyPy is really two projects: * An interpreter compiler toolchain allowing you to write interpreters in RPython (a static subset of Python) and have cross-platform interpreters compiled standalone, for the JVM, for .NET (etc) * An implementation of Python in RPython These two projects allow for *many* things. * Maintaining Python in Python is much easier than maintaining it in C * From a single codebase you can generate Python interpreters that run on the JVM, .NET and standalone - rather than having multiple slightly incompatible implementations * Part of the compiler toolchain includes an experimental JIT generator (now in its fifth incarnation and starting to work really well) - the *goal* is for a JITed PyPy to run *much* faster than CPython * It is much easier to experiment with fundamental language features - like removing the GIL, better garbage collection, integrating stackless and so on So there are really a lot of reasons for PyPy to be exciting, and it is finally starting to live up to all its promises.
The most important feature is of course the JIT compiler. In CPython, files are compiled to bytecode (`.pyc`) or optimized bytecode (`.pyo`) and then interpreted. With PyPy they will be compiled to native code. PyPy also includes the [Stackless Python](http://www.stackless.com/) patches, including its impressive [features](http://codespeak.net/pypy/dist/pypy/doc/stackless.html) (tasklet serialization, light threads etc.)
What does PyPy have to offer over CPython, Jython, and IronPython?
[ "python", "interpreter", "pypy" ]
I'm developing a BlackBerry app in Java and I have an Options class where all user settings are stored. The problem is I need to check some conditions in order to know how to react. As I keep adding more features, more GUI options are shown to the user, more settings are stored in the Options class, and more conditions need to be checked for. Take the following code for example:

```
private void doCallMonitoring(int callId){
    /* This is the part that I want to avoid: having multiple nested ifs.
       Here are just two conditions, but as I add more, it will get unmaintainable very quickly. */
    if(Options.isActive().booleanValue()){
        callTime = new Timer();
        TimerTask callTimeTask = new TimerTask(){
            public void run(){
                callTimeSeconds++;
                if((callTimeSeconds == Options.getSoftLimit().intValue()) &&
                   (Phone.getActiveCall().getStatus() == PhoneCall.STATUS_CONNECTED)){
                    injectDTMFTone(Phone.getActiveCall());
                }else if((callTimeSeconds >= Options.getHardLimit().intValue()) &&
                         (Phone.getActiveCall().getStatus() == PhoneCall.STATUS_CONNECTED)){
                    injectEndCall();
                }
            }
        };
        callTime.schedule(callTimeTask, 0, 1000);
    }else{
        System.out.println("Service not active");
    }
}
```

How I would want it to work is to verify all options with a single call and from there determine the course of action. How can I achieve such a design?
Another option is to make methods such as `injectDMTFTone()` check to see if they want to handle that condition, and return true or false depending on if it was handled or not. For instance: ``` public void run() { callTimeSeconds++; do { if (handleInjectDMTFTone()) break; if (handleInjectEndCall()) break; } while(false); callTime.schedule(callTimeTask, 0,1000); } boolean handleInjectDMTFTone() { if ((callTimeSeconds != Options.getSoftLimit().intValue()) || (Phone.getActiveCall().getStatus() != PhoneCall.STATUS_CONNECTED)) return false; injectDTMFTone(Phone.getActiveCall()); return true; } boolean handleInjectEndCall() { if ((callTimeSeconds < Options.getHardLimit().intValue()) || (Phone.getActiveCall().getStatus() != PhoneCall.STATUS_CONNECTED)) return false; injectEndCall(); return true; } ``` Of course, instead of calling another `injectDMTFTone()` method or `injectEndCall()` method, you would just inline that logic right in those methods. In that way you've grouped all the logic of how and when to deal with those conditions in the same place. This is one of my favorite patterns; use `if` statements as close to the top of methods as makes sense to eliminate conditions and return. The rest of the method is not indented many levels, and is easy and straightforward to read. You can further extend this by creating objects that all implement the same interface and are in a repository of handlers that your `run` method can iterate over to see which will handle it. That may or may not be overkill to your case.
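For what it's worth, the same handler-chain shape carries over to other languages too. Here is a minimal Python sketch; the handler names and the dict standing in for call state are hypothetical, not part of any BlackBerry API:

```python
def handle_soft_limit(state):
    # Bail out early unless this handler's exact condition holds.
    if state["seconds"] != state["soft_limit"] or not state["connected"]:
        return False
    state["actions"].append("inject_dtmf_tone")
    return True

def handle_hard_limit(state):
    if state["seconds"] < state["hard_limit"] or not state["connected"]:
        return False
    state["actions"].append("inject_end_call")
    return True

# The timer tick walks the chain and stops at the first handler that fires.
HANDLERS = [handle_soft_limit, handle_hard_limit]

def tick(state):
    state["seconds"] += 1
    for handler in HANDLERS:
        if handler(state):
            break
```

Each handler owns both its condition and its action, so adding a new case means adding one function to the list instead of another nested branch.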
You can use the "extract method" refactoring and turn all those checks into one "readable" condition. See this related [answer](https://stackoverflow.com/questions/360063/is-there-an-alternative-to-hyper-indented-code/360251#360251)

It's a bit lengthy, but the point is to replace constructs like this:

```
}else if((callTimeSeconds >= Options.getHardLimit().intValue()) &&
         (Phone.getActiveCall().getStatus() == PhoneCall.STATUS_CONNECTED)){
    injectEndCall();
}
```

With something like this:

```
....
}else if(shouldInjectEndCall()){
    injectEndCall();
}
...
```

Remember, objects have state and may use other objects to help them do their work.
What's the best way to get rid of nested ifs in code while checking for conditions?
[ "java", "design-patterns", "blackberry" ]
I am looking at creating a small class generator for a project. I have been reading about CodeDOM, so the semantics of creating the classes does not appear to be an issue, but I am unsure of how to best integrate the generation into the development and deployment process.

1. How should I trigger the creation of the classes? I have read it should be part of the build process; how should I do this?
2. Where should the classes be created? I read that the files should not be edited by hand, and never checked into source control. Should I even worry about this and just generate the classes into the same directory as the generator engine?
The answers to your question depend partly on the purpose of your generated classes. If the classes are generated as a part of the *development*, they should be generated as text files and checked into your SCM like any other class. If your classes are generated dynamically at *runtime* as a part of the operation of your system, I wouldn't use the CodeDOM at all. I'd use Reflection.
Take a look at T4 templates (they're built into VS2008). They allow you to create "template" classes that generate code for you. [Oleg Sych](http://www.olegsych.com/) is an invaluable resource for this.

[Link for Oleg's tutorial on code generation](http://www.olegsych.com/2008/03/code-generation-with-visual-studio-templates/).
C# Code Generation
[ "c#", "build-process", "code-generation" ]
I'm creating a stored procedure for searching some data in my database according to some criteria input by the user. My SQL code looks like this:

```
Create Procedure mySearchProc
(
@IDCriteria bigint=null,
...
@MaxDateCriteria datetime=null
)
as
select Col1,...,Coln
from MyTable
where (@IDCriteria is null or ID=@IDCriteria)
...
and (@MaxDateCriteria is null or Date<@MaxDateCriteria)
```

**Edit**: I have around 20 possible parameters, and each combination of n non-null parameters can happen.

Is it OK performance-wise to write this kind of code? (I'm using MS SQL Server 2008)

Would generating SQL code containing only the needed where clauses be notably faster?
```
where (@IDCriteria is null or ID=@IDCriteria)
and (@MaxDateCriteria is null or Date<@MaxDateCriteria)
```

If you write this criteria, then SQL Server will not know whether it is better to use the index for IDs or the index for Dates.

For proper optimization, it is far better to write separate queries for each case and use IF to guide you to the correct one.

```
IF @IDCriteria is not null and @MaxDateCriteria is not null
  --query WHERE ID = @IDCriteria and Date < @MaxDateCriteria
ELSE IF @IDCriteria is not null
  --query WHERE ID = @IDCriteria
ELSE IF @MaxDateCriteria is not null
  --query WHERE Date < @MaxDateCriteria
ELSE
  --query WHERE 1 = 1
```

If you expect to need different plans out of the optimizer, you need to write different queries to get them!!

> Would generating SQL code containing only the needed where clauses be notably faster?

Yes - if you expect the optimizer to choose between different plans.

---

Edit:

```
DECLARE @CustomerNumber int, @CustomerName varchar(30)
SET @CustomerNumber = 123
SET @CustomerName = '123'

SELECT * FROM Customers
WHERE (CustomerNumber = @CustomerNumber OR @CustomerNumber is null)
AND (CustomerName = @CustomerName OR @CustomerName is null)
```

CustomerName and CustomerNumber are indexed. Optimizer says: "Clustered Index Scan with parallelization". You can't write a worse single table query.

---

> Edit: I have around 20 possible parameters, and each combination of n non-null parameters can happen.

We had a similar "search" functionality in our database. When we looked at the actual queries issued, 99.9% of them used an AccountIdentifier. In your case, I suspect either one column is *always* supplied, or one of two columns is always supplied. This would lead to 2 or 3 cases respectively.

It's not important to remove ORs from the whole structure. It is important to remove ORs from the column(s) that you expect the optimizer to use to access the indexes.
OR clauses are notorious for causing performance issues mainly because they require table scans. If you can write the query without ORs you'll be better off.
Performance implications of sql 'OR' conditions when one alternative is trivial?
[ "sql", "sql-server", "optimization", "stored-procedures" ]
When setting up some reference-type properties for a project I'm working on, I came across a few properties that needed to be properly initialized to be used and should never be null. I've seen a few ways to handle this and can't really determine if there are any major drawbacks to any of the primary approaches. I'd like to get the community's opinion about the best way to handle this and what the potential drawbacks to each method might be.

Given a simple class, I've seen several ways to make sure a property never holds a null instance of this class:

```
public class MyClass
{
    //Some collection of code
}
```

Option 1 - Initialize the backing store

```
public class OtherClass1
{
    private MyClass _mC = new MyClass();
    public MyClass MC
    {
        get { return _mC; }
        set { _mC = value; }
    }
}
```

Option 2 - Initialize the property in the constructor

```
public class OtherClass2
{
    public MyClass MC { get; set; }

    public OtherClass2()
    {
        MC = new MyClass();
    }
}
```

Option 3 - Handle initialization as needed in the Getter

```
public class OtherClass3
{
    private MyClass _mC;
    public MyClass MC
    {
        get
        {
            if (_mC == null)
                _mC = new MyClass();
            return _mC;
        }
        set { _mC = value; }
    }
}
```

I'm sure there are other ways, but these are the ones that come to mind and I have seen. I'm mostly trying to determine if there's a well-established best practice on this, or if there's a specific concern with any of the above.

Cheers,
Steve
Best option unless you really *can* get away with just creating a new instance yourself: only provide constructors which take all the required values, and validate them at that point.
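The same principle sketched in Python (the class and parameter names are made up for illustration): validate the required collaborator once, in the constructor, and the property can never be observed as None afterwards.

```python
class Engine:
    """Stand-in for a required collaborator."""

class Widget:
    def __init__(self, engine):
        # The only way to build a Widget is with a real Engine;
        # no lazy or default initialization is needed later.
        if engine is None:
            raise ValueError("engine must not be None")
        self._engine = engine

    @property
    def engine(self):
        return self._engine  # guaranteed non-None by construction
```

Because the check happens at the single point of construction, every other method of the class can rely on the invariant instead of re-checking for None.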
As far as I know, there is *not* an established best practice here, for a simple reason: each of your options has a different performance/memory footprint profile.

The first option is appropriate for a reference to an object that you know must be instantiated in a class that you are sure will be used. Honestly, though, I never take this approach because I think that #2 is just more appropriate; just a sense that this is what a constructor is *for*.

The last option is appropriate when you are *not* sure whether an option will be used. It permits you to take up the resource only as needed.

BTW, this question is right "next door" to a number of other issues, such as the appropriate use of the Singleton pattern and the use of abstract classes or interfaces for your deferred object, that might be useful for you to explore to gain greater insight.

Update: It strikes me that there is at least one case where initializing an instance in the class definition is appropriate (your Option #1): if the instance will be static, then this is the only appropriate place *to* initialize it:

```
private static readonly DateTime firstClassDate = DateTime.Parse("1/1/2009 09:00:00 AM");
```

I thought of this when creating the above line of code in some unit tests I was writing today (the *readonly* being optional with respect to my point, but appropriate in my case).
Handling C# Properties that shouldn't be null
[ "c#", "properties" ]
Just wondering what the general consensus is about returning enums from methods to indicate status. The notion of returning codes (ints) is pretty innate in old-school systems programming (C), but I am wondering if the methodology for indicating status has changed.

Thanks

**Edit:** I am aware that enums are basically int values. I'm inquiring about the sheer practice of having methods throw around status codes (be they enums or ints) to indicate status at all. It seems like a good practice, but I was wondering if there are any negative feelings about the idea. I would always choose an enum over an int return for a status code.
If you are returning enums to indicate errors instead of throwing exceptions you are, of course, in a state of sin. Are callers really interested in the state of something? I've generally found that sort of thing in C-style function libraries where there are no objects to represent state. This depends on what you are trying to do, but if you are (say) creating code to manage communication sockets, I'd strongly recommend wrapping them in classes and allow client code to detect their state through properties rather than as the result of an Open() method.
If all you want is a status returned, then an enum is a much better way to go than a bare int, for a number of reasons: it forces consistency on what means what, so you don't have to remember what return code 1 means, making your code much more maintainable.
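The same point holds outside C#; a minimal Python sketch using the standard enum module (the Status members and the fetch function are illustrative only):

```python
from enum import Enum, auto

class Status(Enum):
    OK = auto()
    NOT_FOUND = auto()
    TIMEOUT = auto()

def fetch(key, table):
    # Return an explicit (status, value) pair instead of a magic int.
    if key in table:
        return Status.OK, table[key]
    return Status.NOT_FOUND, None

status, value = fetch("a", {"a": 1})
print(status)  # Status.OK reads far better in a log than a bare 0 or 1
```

The named member documents itself at every call site, and a typo like `Status.NOT_FUND` fails loudly instead of silently comparing equal to some unrelated integer.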
Using enums as returned status indicators in C#
[ "c#", "design-patterns", "enums" ]
Here is a fake code sample

```
vector<Fred> gFred;
{
    // init gFred
    Fred &fred = gFred[0];
    size_t z = 0;
    do
    {
        fred = gFred[z];
        // do odd processing with fred
        z++;
    } while (fred.lastElementInSet == 0);
}
```

The thing that caught my attention was the fact that gFred[0] was being overwritten. This leads me to think that rather than rebinding fred as a reference to a new element,

```
fred = gFred[z];
```

what is actually happening is that gFred[1] is overwriting gFred[0].

I'm thinking the correct thing to do here is whap myself upside the head a few times, turn this into a pointer implementation, and move on with my life.

Have I diagnosed this correctly? Or do I need more education?
Yes, you are getting a structure copy there. References cannot be rebound, i.e., they stay the same once they are initialized. And your solution is also appropriate. Dunno about smacking yourself in the head though.
One way of looking at references is to think of them as implicitly de-referenced pointers. Simply put, they are pointers, but you can use the normal variable access syntax to use them ``` Fred &fred = gFred[0]; ``` This creates a reference to the first element of your vector gFred. (Incidentally, do you have anything inside this?) The compiler will do something like this: ``` Fred *pFred = &gFred[0]; ``` Now, when you do: ``` fred = gFred[z]; ``` the compiler will actually do something like this: ``` *pFred = gFred[z]; ``` which translated stands as: ``` gFred[0] = gFred[z]; ``` And you are doing this `N` times, if you have N elements to start off in your `vector`. If you are trying to initialize all elements of your `vector` try this constructor: ``` vector(size_type n, const T& t) ``` where, ``` n = size of vector t = gFred[0] ```
Am I getting a structure copy here?
[ "c++", "reference", "overwrite" ]
``` <a href="" id="someId" onclick="SomeMethod(self);"></a> ``` Where SomeMethod could have: ``` function SomeMethod(item) { item.setAttribute('name', item.id); } ``` Instead of: ``` function SomeMethod(itemId) { var someItem; someItem = document.getElementById(itemId); someItem .setAttribute('name', someItem .id); } ``` Silly example, but the idea is not to send in the id itself, but the actual control calling the method. I swear this can be done but have had no luck searching... partially because I'm not even sure what to search on. I thought it was self, but self doesn't seem to be what I want when the script I have runs.
Use the `this` Keyword. ``` <a href="" id="someId" onclick="SomeMethod(this);"></a> ```
You actually don't need to pass **this** as an argument to your function, because you've got a **click event object** that you can access. So: ``` <a href="" id="someId" onclick="clickEventHandler()"></a> <script> function clickEventHandler(event) { if (!event) { event = window.event; // Older versions of IE use // a global reference // and not an argument. }; var el = (event.target || event.srcElement); // DOM uses 'target'; // older versions of // IE use 'srcElement' el.setAttribute('name', el.id); } </script> ```
JavaScript: How to make a control send itself to a method
[ "javascript", "onclick" ]
Why is my `X` method below not being called?! ``` static class Program { private static void Main() { X((IEnumerable<int>)null); } public static IEnumerable<T> X<T>(IEnumerable<T> e) { if (e == null) throw new ArgumentNullException(); yield break; } } ``` I tried stepping into the debugger but it doesn't enter `X`! Is the `yield break` keyword causing some side effect I am not aware of? If it's worth anything, I'm using Visual Studio 2008 Express with .NET 3.5 SP1.
X is an iterator method, and its execution is deferred: the body won't actually run until you attempt to get a value from the returned IEnumerable instance. You can fix this, and get the eager argument check you expect, by breaking the function up into two parts:

```
public static IEnumerable<T> X<T>(IEnumerable<T> e)
{
    if (e == null)
        throw new ArgumentNullException();
    return XIterator(e);
}

private static IEnumerable<T> XIterator<T>(IEnumerable<T> e)
{
    yield break;
}
```

Eric has a great blog post on this subject: <http://blogs.msdn.com/ericlippert/archive/2008/09/08/high-maintenance.aspx>
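The same deferred execution shows up in Python generators, which makes the behavior easy to observe outside C#. A small sketch (the function names are invented for the example):

```python
def read_items(items):
    # A generator body runs only when iteration starts, so this
    # argument check is deferred past the call itself.
    if items is None:
        raise ValueError("items must not be None")
    yield from items

gen = read_items(None)  # no exception yet: nothing has executed

# Splitting into an eager wrapper plus a private generator restores
# the fail-fast check, mirroring the two-part split above.
def read_items_eager(items):
    if items is None:
        raise ValueError("items must not be None")
    def _iterate():
        yield from items
    return _iterate()
```

Calling `read_items_eager(None)` raises immediately, while `read_items(None)` only raises once you ask the generator for its first value.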
Yes - the method body doesn't run when you call it; it only executes once you start enumerating the returned IEnumerable (on the first MoveNext call of its enumerator), so the null check never fires here.
Why is my C# method not called?
[ "c#", ".net" ]
I am trying to create an array of pointers. These pointers will point to a Student object that I created. How do I do it? What I have now is: ``` Student * db = new Student[5]; ``` But each element in that array is the student object, not a pointer to the student object. Thanks.
```
Student** db = new Student*[5];

// Or, as an automatic (stack) array:
Student* db[5];
```
```
#include <vector>

std::vector<Student *> db(5);

// in use
db[2] = &someStudent;
```

The advantage of this is that you don't have to worry about deleting the allocated storage - the vector does it for you (note that it manages only its own array of pointers; it will not delete the Student objects they point to).
How do I create an array of pointers?
[ "c++", "arrays", "pointers" ]
Putting aside whether the use of [isinstance is harmful](http://www.canonical.org/~kragen/isinstance/), I have run into the following conundrum when trying to evaluate isinstance after serializing/deserializing an object via Pickle: ``` from __future__ import with_statement import pickle # Simple class definition class myclass(object): def __init__(self, data): self.data = data # Create an instance of the class x = myclass(100) # Pickle the instance to a file with open("c:\\pickletest.dat", "wb") as f: pickle.dump(x, f) # Replace class with exact same definition class myclass(object): def __init__(self, data): self.data = data # Read an object from the pickled file with open("c:\\pickletest.dat", "rb") as f: x2 = pickle.load(f) # The class names appear to match print x.__class__ print x2.__class__ # Uh oh, this fails...(why?) assert isinstance(x2, x.__class__) ``` Can anyone shed some light on why isinstance would fail in this situation? In other words, why does Python think these objects are of two different classes? When I remove the second class definition, `isinstance` works fine.
This is how the unpickler works (from the standard library's pickle.py):

```
def find_class(self, module, name):
    # Subclasses may override this
    __import__(module)
    mod = sys.modules[module]
    klass = getattr(mod, name)
    return klass
```

to find and instantiate a class. So of course if you replace a class with an identically named class, `klass = getattr(mod, name)` will return the new class, the unpickled instance will be of the new class, and isinstance will fail.
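A quick way to see the consequence without going through a file: two classes defined under the same name are distinct objects, and a by-name lookup (which is essentially all find_class does) yields whichever one is currently bound. A minimal sketch:

```python
class MyClass:
    pass

x = MyClass()          # instance of the *first* MyClass
OldClass = MyClass     # keep a handle on the original class object

class MyClass:         # rebinds the name to a brand-new class object
    pass

# find_class boils down to getattr(module, name): a lookup by name,
# which now resolves to the replacement class.
resolved = globals()["MyClass"]

print(resolved is OldClass)     # False: two distinct class objects
print(isinstance(x, resolved))  # False: x belongs to the old class
```

This is exactly the situation the pickled file creates: the unpickled object is built from the class the name resolves to *now*, not the class that existed when the object was pickled.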
The obvious answer: because it's not the same class. It's a similar class, but not the same.

```
class myclass(object): pass
x = myclass()

class myclass(object): pass
y = myclass()

assert id(x.__class__) == id(y.__class__) # Will fail, not the same object

x.__class__.foo = "bar"
assert y.__class__.foo == "bar" # will raise AttributeError
```
Why do I get unexpected behavior in Python isinstance after pickling?
[ "python", "pickle" ]
I have an [adopted](http://staceyw.spaces.live.com/blog/cns!f4a38e96e598161e!482.entry) implementation of a simple (no upgrades or timeouts) ReaderWriterLock for Silverlight, and I was wondering if anyone with the right expertise could validate whether it is good or bad by design. To me it looks pretty alright, and it works as advertised, but I have limited experience with multi-threading code as such.

```
public sealed class ReaderWriterLock
{
    private readonly object syncRoot = new object();  // Internal lock.
    private int i = 0;                 // 0 or greater means readers can pass; -1 is active writer.
    private int readWaiters = 0;       // Readers waiting for writer to exit.
    private int writeWaiters = 0;      // Writers waiting for writer lock.
    private ConditionVariable conditionVar;  // Condition variable.

    public ReaderWriterLock()
    {
        conditionVar = new ConditionVariable(syncRoot);
    }

    /// <summary>
    /// Gets a value indicating if a reader lock is held.
    /// </summary>
    public bool IsReaderLockHeld
    {
        get
        {
            lock ( syncRoot )
            {
                if ( i > 0 )
                    return true;
                return false;
            }
        }
    }

    /// <summary>
    /// Gets a value indicating if the writer lock is held.
    /// </summary>
    public bool IsWriterLockHeld
    {
        get
        {
            lock ( syncRoot )
            {
                if ( i < 0 )
                    return true;
                return false;
            }
        }
    }

    /// <summary>
    /// Acquires the writer lock.
    /// </summary>
    public void AcquireWriterLock()
    {
        lock ( syncRoot )
        {
            writeWaiters++;
            while ( i != 0 )
                conditionVar.Wait();  // Wait until existing writer frees the lock.
            writeWaiters--;
            i = -1;  // Thread has writer lock.
        }
    }

    /// <summary>
    /// Acquires a reader lock.
    /// </summary>
    public void AcquireReaderLock()
    {
        lock ( syncRoot )
        {
            readWaiters++;
            // Defer to a writer (one time only) if one is waiting to prevent writer starvation.
            if ( writeWaiters > 0 )
            {
                conditionVar.Pulse();
                Monitor.Wait(syncRoot);
            }
            while ( i < 0 )
                Monitor.Wait(syncRoot);
            readWaiters--;
            i++;
        }
    }

    /// <summary>
    /// Releases the writer lock.
/// </summary> public void ReleaseWriterLock() { bool doPulse = false; lock ( syncRoot ) { i = 0; // Decide if we pulse a writer or readers. if ( readWaiters > 0 ) { Monitor.PulseAll(syncRoot); // If multiple readers waiting, pulse them all. } else { doPulse = true; } } if ( doPulse ) conditionVar.Pulse(); // Pulse one writer if one waiting. } /// <summary> /// Releases a reader lock. /// </summary> public void ReleaseReaderLock() { bool doPulse = false; lock ( syncRoot ) { i--; if ( i == 0 ) doPulse = true; } if ( doPulse ) conditionVar.Pulse(); // Pulse one writer if one waiting. } /// <summary> /// Condition Variable (CV) class. /// </summary> public class ConditionVariable { private readonly object syncLock = new object(); // Internal lock. private readonly object m; // The lock associated with this CV. public ConditionVariable(object m) { lock (syncLock) { this.m = m; } } public void Wait() { bool enter = false; try { lock (syncLock) { Monitor.Exit(m); enter = true; Monitor.Wait(syncLock); } } finally { if (enter) Monitor.Enter(m); } } public void Pulse() { lock (syncLock) { Monitor.Pulse(syncLock); } } public void PulseAll() { lock (syncLock) { Monitor.PulseAll(syncLock); } } } } ``` If it is good, it might be helpful to others too as Silverlight currently lacks a reader-writer type of lock. Thanks.
I go in depth on explaining [Vance Morrison's ReaderWriterLock](http://blogs.msdn.com/vancem/archive/2006/03/28/563180.aspx) (which became ReaderWriterLockSlim in .NET 3.5) [on my blog](http://www.moserware.com/2008/09/how-do-locks-lock.html) (down to the x86 level). This might be helpful in your design, especially understanding how things really work.
Both of your IsReaderLockHeld and IsWriterLockHeld methods are flawed at a conceptual level. While it is possible to determine that at a given point in time a particular lock is or is not held, there is absolutely nothing you can safely do with this information unless you continue to hold the lock (which is not the case in your code). These methods would be more accurately named WasReaderLockHeldInThePast and WasWriterLockHeldInThePast. Once you rename the methods to a more accurate representation of what they do, it becomes clearer that they are not very useful.
Silverlight ReaderWriterLock Implementation Good/Bad?
[ "", "c#", "silverlight", "multithreading", "" ]
This might sound like a noob question, but are: ``` string var; if (var == null) ``` and ``` string var; if (var == string.Empty) ``` The same? > ### **Duplicate** > > [What's the Difference between String.Empty and Null?](https://stackoverflow.com/questions/151472/what-is-the-difference-between-string-empty-and) and [In C#, should I use String.Empty or Null?](https://stackoverflow.com/questions/263191/in-c-should-i-use-string-empty-or-string-empty-or)
@Jay is correct, they are not the same. `String.IsNullOrEmpty()` is a convenient method to check for both null and "".
No, they are not the same. `string.Empty` is the same as `""`, which is an actual object: a string of 0 length. `null` means there is no object.
Checking null and string.Empty in C#
[ "", "c#", ".net", "" ]
I have a cell in a spreadsheet that is a date object in Excel but becomes a double (something like 39820.0 for 1/7/2009) when it comes out of C1's xls class. I read this is a Julian date format. Can someone tell me how to parse it back into a DateTime in C#? Update: It looks like I might not have a Julian date, but instead the number of days since Dec 30, 1899.
I think Excel is just using the standard OLE Automation [DATE](http://msdn.microsoft.com/en-us/library/cc237601.aspx) type which can be converted with the `DateTime.FromOADate` method. This block of code, ``` using System; namespace DateFromDouble { class Program { static void Main(string[] args) { Console.WriteLine(DateTime.FromOADate(39820.0)); } } } ``` outputs: ``` 1/7/2009 12:00:00 AM ```
There's a JulianCalendar class in System.Globalization; Here's how you would use it: ``` JulianCalendar c = new JulianCalendar(); DateTime time = c.ToDateTime(2009, 1, 7, 0, 0, 0, 0); Console.WriteLine(time.ToShortDateString()); ``` EDIT: If it is in fact days since "1900" here's how you can do it: ``` public static DateTime DaysSince1900(int days) { return new DateTime(1900, 1, 1).AddDays(days); } DateTime time = DaysSince1900(39820); Console.WriteLine(time.ToShortDateString()); //will result in "1/9/2009" ```
C# Julian Date Parser
[ "", "c#", ".net", "julian-date", "" ]
> **Possible Duplicate:** > [Can anyone explain IEnumerable and IEnumerator to me?](https://stackoverflow.com/questions/558304/can-anyone-explain-ienumerable-and-ienumerator-to-me) What are the differences between IEnumerator and IEnumerable?
[IEnumerable](http://msdn.microsoft.com/en-us/library/system.collections.ienumerable.aspx) is an interface that defines one method, **GetEnumerator**, which returns an **[IEnumerator](http://msdn.microsoft.com/en-us/library/system.collections.ienumerator.aspx)** interface; this in turn allows read-only access to a collection. A collection that implements IEnumerable can be used with a foreach statement. Definition ``` IEnumerable public IEnumerator GetEnumerator(); IEnumerator public object Current; public void Reset(); public bool MoveNext(); ```
An `IEnumerator` is a thing that can enumerate: it has the `Current` property and the `MoveNext` and `Reset` methods (which in .NET code you probably won't call explicitly, though you could). An `IEnumerable` is a thing that can be enumerated...which simply means that it has a GetEnumerator method that returns an `IEnumerator`. Which do you use? The only reason to use `IEnumerator` is if you have something that has a nonstandard way of enumerating (that is, of returning its various elements one-by-one), and you need to define how that works. You'd create a new class implementing `IEnumerator`. But you'd still need to return that `IEnumerator` in an `IEnumerable` class. For a look at what an enumerator (implementing `IEnumerator<T>`) looks like, see any `Enumerator<T>` class, such as the ones contained in `List<T>`, `Queue<T>`, or `Stack<T>`. For a look at a class implementing `IEnumerable`, see any standard collection class.
What is the difference between IEnumerator and IEnumerable?
[ "", "c#", ".net", "ienumerable", "ienumerator", "" ]
I want to start writing an HTTP proxy that will modify responses according to some rules/filters I will configure. However, before I start coding it, I want to make sure I'm making the right choice in going with Python. Later, this tool would have to be able to process a lot of requests, so I would like to know that I can count on it to perform when "push comes to shove".
Yes, I think you will find Python to be perfectly adequate for your needs. There's a huge number of web frameworks, WSGI libraries, etc. to choose from, or learn from when building your own. There's an interesting [post](http://python-history.blogspot.com/2009/01/microsoft-ships-python-code-in-1996.html) on the [Python History blog](http://python-history.blogspot.com/) about how Python was supporting high performance websites in 1996.
As long as the bulk of the processing uses Python's built-in modules, it should be fine as far as performance goes. The biggest strength of Python is its clear syntax and its ease of testing/maintainability. If you find that one section of your code is slowing down the process, you can rewrite that section as a C module, while keeping the bulk of your control code in Python. However, if you're looking to write the most optimized Python code, you may want to check out [this SO post](https://stackoverflow.com/questions/172720/speeding-up-python).
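As a sketch of the "find the slow section first" advice, the standard-library `cProfile` and `pstats` modules can point at the hot spot before any C rewrite is considered; the function names below are invented for illustration and are not from the question:

```python
# A minimal profiling sketch (hypothetical functions, not from the question):
# run a call under cProfile and inspect cumulative times to find the hot spot.
import cProfile
import io
import pstats


def slow_join(n):
    # Deliberately quadratic string building -- a typical profiling target.
    s = ""
    for i in range(n):
        s += str(i)
    return s


def fast_join(n):
    # The idiomatic linear-time alternative.
    return "".join(str(i) for i in range(n))


def profile_call(func, *args):
    """Run func under cProfile and return the stats report as a string."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return buf.getvalue()


report = profile_call(slow_join, 10_000)
print(report)
```

Only once the report singles out a function as the bottleneck does it make sense to push that one piece down into C.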
Will python provide enough performance for a proxy?
[ "", "python", "performance", "proxy", "" ]
I have a WPF `ListView` (`GridView`) and the cell template contains a `TextBlock`. If I add: `TextTrimming="CharacterEllipsis" TextWrapping="NoWrap"` on the `TextBlock`, an ellipsis will appear at the end of my string when the column gets smaller than the length of the string. What I need is to have the ellipsis at the beginning of the string. I.e. if I have the string `Hello World!`, I would like `...lo World!`, instead of `Hello W...`. Any ideas?
You could try to use a ValueConverter (cf. the [*IValueConverter* interface](http://msdn.microsoft.com/en-us/library/system.windows.data.ivalueconverter.aspx)) to change the strings that should be displayed in the list box yourself. That is, in the implementation of the Convert method, you would test whether the strings are longer than the available space, and then change them to "..." plus the right side of the string.
I was facing the same problem and wrote an attached property to solve it (or rather, to provide this feature). Here's my code: ## USAGE ``` <controls:TextBlockTrimmer EllipsisPosition="Start"> <TextBlock Text="Excuse me but can I be you for a while" TextTrimming="CharacterEllipsis" /> </controls:TextBlockTrimmer> ``` Don't forget to add a namespace declaration at your Page/Window/UserControl root: ``` xmlns:controls="clr-namespace:Hillinworks.Wpf.Controls" ``` `TextBlockTrimmer.EllipsisPosition` can be `Start`, `Middle` (Mac style) or `End`. Pretty sure you can figure out which is which from their names. ## CODE ### TextBlockTrimmer.cs ``` using System; using System.Collections.Generic; using System.ComponentModel; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Markup; namespace Hillinworks.Wpf.Controls { enum EllipsisPosition { Start, Middle, End } [DefaultProperty("Content")] [ContentProperty("Content")] internal class TextBlockTrimmer : ContentControl { private class TextChangedEventScreener : IDisposable { private readonly TextBlockTrimmer _textBlockTrimmer; public TextChangedEventScreener(TextBlockTrimmer textBlockTrimmer) { _textBlockTrimmer = textBlockTrimmer; s_textPropertyDescriptor.RemoveValueChanged(textBlockTrimmer.Content, textBlockTrimmer.TextBlock_TextChanged); } public void Dispose() { s_textPropertyDescriptor.AddValueChanged(_textBlockTrimmer.Content, _textBlockTrimmer.TextBlock_TextChanged); } } private static readonly DependencyPropertyDescriptor s_textPropertyDescriptor = DependencyPropertyDescriptor.FromProperty(TextBlock.TextProperty, typeof(TextBlock)); private const string ELLIPSIS = "..."; private static readonly Size s_inifinitySize = new Size(double.PositiveInfinity, double.PositiveInfinity); public EllipsisPosition EllipsisPosition { get { return (EllipsisPosition)GetValue(EllipsisPositionProperty); } set { SetValue(EllipsisPositionProperty, value); } } public static 
readonly DependencyProperty EllipsisPositionProperty = DependencyProperty.Register("EllipsisPosition", typeof(EllipsisPosition), typeof(TextBlockTrimmer), new PropertyMetadata(EllipsisPosition.End, TextBlockTrimmer.OnEllipsisPositionChanged)); private static void OnEllipsisPositionChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { ((TextBlockTrimmer)d).OnEllipsisPositionChanged((EllipsisPosition)e.OldValue, (EllipsisPosition)e.NewValue); } private string _originalText; private Size _constraint; protected override void OnContentChanged(object oldContent, object newContent) { var oldTextBlock = oldContent as TextBlock; if (oldTextBlock != null) { s_textPropertyDescriptor.RemoveValueChanged(oldTextBlock, TextBlock_TextChanged); } if (newContent != null && !(newContent is TextBlock)) // ReSharper disable once LocalizableElement throw new ArgumentException("TextBlockTrimmer access only TextBlock content", nameof(newContent)); var newTextBlock = (TextBlock)newContent; if (newTextBlock != null) { s_textPropertyDescriptor.AddValueChanged(newTextBlock, TextBlock_TextChanged); _originalText = newTextBlock.Text; } else _originalText = null; base.OnContentChanged(oldContent, newContent); } private void TextBlock_TextChanged(object sender, EventArgs e) { _originalText = ((TextBlock)sender).Text; this.TrimText(); } protected override Size MeasureOverride(Size constraint) { _constraint = constraint; return base.MeasureOverride(constraint); } protected override Size ArrangeOverride(Size arrangeBounds) { var result = base.ArrangeOverride(arrangeBounds); this.TrimText(); return result; } private void OnEllipsisPositionChanged(EllipsisPosition oldValue, EllipsisPosition newValue) { this.TrimText(); } private IDisposable BlockTextChangedEvent() { return new TextChangedEventScreener(this); } private static double MeasureString(TextBlock textBlock, string text) { textBlock.Text = text; textBlock.Measure(s_inifinitySize); return textBlock.DesiredSize.Width; } private 
void TrimText() { var textBlock = (TextBlock)this.Content; if (textBlock == null) return; if (DesignerProperties.GetIsInDesignMode(textBlock)) return; var freeSize = _constraint.Width - this.Padding.Left - this.Padding.Right - textBlock.Margin.Left - textBlock.Margin.Right; // ReSharper disable once CompareOfFloatsByEqualityOperator if (freeSize <= 0) return; using (this.BlockTextChangedEvent()) { // this actually sets textBlock's text back to its original value var desiredSize = TextBlockTrimmer.MeasureString(textBlock, _originalText); if (desiredSize <= freeSize) return; var ellipsisSize = TextBlockTrimmer.MeasureString(textBlock, ELLIPSIS); freeSize -= ellipsisSize; var epsilon = ellipsisSize / 3; if (freeSize < epsilon) { textBlock.Text = _originalText; return; } var segments = new List<string>(); var builder = new StringBuilder(); switch (this.EllipsisPosition) { case EllipsisPosition.End: TextBlockTrimmer.TrimText(textBlock, _originalText, freeSize, segments, epsilon, false); foreach (var segment in segments) builder.Append(segment); builder.Append(ELLIPSIS); break; case EllipsisPosition.Start: TextBlockTrimmer.TrimText(textBlock, _originalText, freeSize, segments, epsilon, true); builder.Append(ELLIPSIS); foreach (var segment in ((IEnumerable<string>)segments).Reverse()) builder.Append(segment); break; case EllipsisPosition.Middle: var textLength = _originalText.Length / 2; var firstHalf = _originalText.Substring(0, textLength); var secondHalf = _originalText.Substring(textLength); freeSize /= 2; TextBlockTrimmer.TrimText(textBlock, firstHalf, freeSize, segments, epsilon, false); foreach (var segment in segments) builder.Append(segment); builder.Append(ELLIPSIS); segments.Clear(); TextBlockTrimmer.TrimText(textBlock, secondHalf, freeSize, segments, epsilon, true); foreach (var segment in ((IEnumerable<string>)segments).Reverse()) builder.Append(segment); break; default: throw new NotSupportedException(); } textBlock.Text = builder.ToString(); } } private 
static void TrimText(TextBlock textBlock, string text, double size, ICollection<string> segments, double epsilon, bool reversed) { while (true) { if (text.Length == 1) { var textSize = TextBlockTrimmer.MeasureString(textBlock, text); if (textSize <= size) segments.Add(text); return; } var halfLength = Math.Max(1, text.Length / 2); var firstHalf = reversed ? text.Substring(halfLength) : text.Substring(0, halfLength); var remainingSize = size - TextBlockTrimmer.MeasureString(textBlock, firstHalf); if (remainingSize < 0) { // only one character and it's still too large for the room, skip it if (firstHalf.Length == 1) return; text = firstHalf; continue; } segments.Add(firstHalf); if (remainingSize > epsilon) { var secondHalf = reversed ? text.Substring(0, halfLength) : text.Substring(halfLength); text = secondHalf; size = remainingSize; continue; } break; } } } } ```
Ellipsis at start of string in WPF ListView
[ "", "c#", "wpf", "string", "listview", "ellipsis", "" ]
I'm writing a piece of software over on github. It's basically a tray icon with some extra features. I want to provide a working piece of code without actually having to make the user install what are essentially dependencies for optional features, and I don't actually want to import things I'm not going to use, so I thought code like this would be a "good solution": ``` ---- IN LOADING FUNCTION ---- features = [] for path in sys.path: if os.path.exists(os.path.join(path, 'pynotify')): features.append('pynotify') if os.path.exists(os.path.join(path, 'gnomekeyring.so')): features.append('gnome-keyring') #user dialog to ask for stuff #notifications available, do you want them enabled? dlg = ConfigDialog(features) if not dlg.get_notifications(): features.remove('pynotify') service_start(features ...) ---- SOMEWHERE ELSE ------ def service_start(features, other_config): if 'pynotify' in features: import pynotify #use pynotify... ``` There are some issues, however. If a user formats his machine and installs the newest version of his OS and redeploys this application, features suddenly disappear without warning. The solution is to present this on the configuration window: ``` if 'pynotify' in features: #gtk checkbox else: #gtk label reading "Get pynotify and enjoy notification pop ups!" ``` But if this is, say, a Mac, how do I know I'm not sending the user on a wild goose chase looking for a dependency they can never fill? The second problem is the: ``` if os.path.exists(os.path.join(path, 'gnomekeyring.so')): ``` issue. Can I be sure that the file is always called gnomekeyring.so across all the Linux distros? How do other people test for these features? The problem with the basic ``` try: import pynotify except: pynotify = disabled ``` is that the code is global; these might be littered around, and even if the user doesn't want pynotify... it's loaded anyway. So what do people think is the best way to solve this problem?
You might want to have a look at the [imp module](http://docs.python.org/library/imp.html), which basically does what you do manually above: you can first look for a module with `find_module()` and then load it via `load_module()` or by simply importing it (after checking the config). And by the way, when using `except:` I would always add the specific exception to it (here `ImportError`) so as not to accidentally catch unrelated errors.
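A minimal sketch of the look-up-then-import idea — with the caveat that it substitutes `importlib` (the modern replacement; `imp` is deprecated in current Python), and that `pynotify` merely stands in for any optional dependency:

```python
# Probe for an optional module without importing it, then import it only
# if the feature is actually enabled. importlib is used here instead of
# the deprecated imp module; 'pynotify' is just an example dependency.
import importlib
import importlib.util


def feature_available(module_name):
    """Return True if module_name could be imported, without importing it."""
    return importlib.util.find_spec(module_name) is not None


def load_feature(module_name):
    """Actually import the module once we know the user wants the feature."""
    if not feature_available(module_name):
        return None
    return importlib.import_module(module_name)


# Build the feature list the same way the question does, but without
# guessing at file names on sys.path.
features = [name for name in ("json", "pynotify") if feature_available(name)]
print(features)
```

This avoids the `gnomekeyring.so` file-name guessing entirely, since `find_spec` asks the import machinery itself whether the module is resolvable.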
The `try:` method does not need to be global — it can be used in any scope and so modules can be "lazy-loaded" at runtime. For example: ``` def foo(): try: import external_module except ImportError: external_module = None if external_module: external_module.some_whizzy_feature() else: print("You could be using a whizzy feature right now, if you had external_module.") ``` When your script is run, no attempt will be made to load `external_module`. The first time `foo()` is called, `external_module` is (if available) loaded and inserted into the function's local scope. Subsequent calls to `foo()` reinsert `external_module` into its scope without needing to reload the module. In general, it's best to let Python handle import logic — it's been doing it for a while. :-)
What's Python good practice for importing and offering optional features?
[ "", "python", "python-import", "" ]
The standard PHP way to test whether a string `$str` ends with a substring `$test` is: ``` $endsWith = substr( $str, -strlen( $test ) ) == $test ``` Is this the fastest way?
What Assaf said is correct. There is a built-in function in PHP to do exactly that. ``` substr_compare($str, $test, strlen($str)-strlen($test), strlen($test)) === 0; ``` If `$test` is longer than `$str`, PHP will give a warning, so you need to check for that first. ``` function endswith($string, $test) { $strlen = strlen($string); $testlen = strlen($test); if ($testlen > $strlen) return false; return substr_compare($string, $test, $strlen - $testlen, $testlen) === 0; } ```
This method is a tiny bit more memory-expensive, but it is faster: ``` stripos(strrev($haystack), $reversed_needle) === 0; ``` This is best when you know exactly what the needle is, so you can hard-code it reversed. If you reverse the needle programmatically, it becomes slower than the earlier method. ***Edit (12 years later)***: LOL, this is a super-old answer that I wrote when I didn't know what I was actually talking about. I'd like to think I've grown since then. @DavidHarkness is right, it is not very efficient in the negative case. Probably much faster to just iterate in reverse and bail early if you really need as much perf as possible. Also, PHP probably has better ways to do this now. Honestly, I haven't written PHP in nearly a decade, so I'll leave it up to others now.
What's the most efficient test of whether a PHP string ends with another string?
[ "", "php", "string", "performance", "" ]
Using C# 3 and .NET Framework 3.5, I have a Person object ``` public class Person { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int SSN { get; set; } } ``` and I've got a List of them: ``` List<Person> persons = GetPersons(); ``` How can I get all the Person objects in persons where SSN is not unique in the list and remove them from the persons list and ideally add them to another list called "`List<Person> dupes`"? The original list might look something like this: ``` persons = new List<Person>(); persons.Add(new Person { Id = 1, FirstName = "Chris", LastName="Columbus", SSN=111223333 }); // Is a dupe persons.Add(new Person { Id = 1, FirstName = "E.E.", LastName="Cummings", SSN=987654321 }); persons.Add(new Person { Id = 1, FirstName = "John", LastName="Steinbeck", SSN=111223333 }); // Is a dupe persons.Add(new Person { Id = 1, FirstName = "Yogi", LastName="Berra", SSN=123456789 }); ``` And the end result would have Cummings and Berra in the original persons list and would have Columbus and Steinbeck in a list called dupes. Many thanks!
This gets you the duplicated SSNs: ``` var duplicatedSSN = from p in persons group p by p.SSN into g where g.Count() > 1 select g.Key; ``` The list of duplicates would then be: ``` var duplicated = persons.FindAll( p => duplicatedSSN.Contains(p.SSN) ); ``` And then just iterate over the duplicates and remove them. ``` duplicated.ForEach( dup => persons.Remove(dup) ); ```
Thanks to gcores for getting me started down a correct path. Here's what I ended up doing: ``` var duplicatedSSN = from p in persons group p by p.SSN into g where g.Count() > 1 select g.Key; var duplicates = new List<Person>(); foreach (var dupeSSN in duplicatedSSN) { foreach (var person in persons.FindAll(p => p.SSN == dupeSSN)) duplicates.Add(person); } duplicates.ForEach(dup => persons.Remove(dup)); ```
Find object data duplicates in List of objects
[ "", "c#", ".net", "" ]
Can I create an empty iframe as a placeholder to later insert HTML into it? In other words, suppose I have an empty iframe with an id. How do I insert HTML into it? I'm using jQuery, if that makes it easier.
You can do it without jQuery also: ``` var iframe = document.getElementById('iframeID'); iframe = iframe.contentWindow || ( iframe.contentDocument.document || iframe.contentDocument); iframe.document.open(); iframe.document.write('Hello World!'); iframe.document.close(); ``` jQuery's html strips body, html and head tags from the inserted HTML.
View the source of this page: <http://mg.to/test/dynaframe.html> It appears to do exactly what you want to do. ``` $(function() { var $frame = $('<iframe style="width:200px; height:100px;">'); $('body').html( $frame ); setTimeout( function() { var doc = $frame[0].contentWindow.document; var $body = $('body',doc); $body.html('<h1>Test</h1>'); }, 1 ); }); ```
putting html inside an iframe (using javascript)
[ "", "javascript", "jquery", "html", "iframe", "" ]
``` <noscript><div id="example">I want to get this innerHTML</div></noscript> <script type="text/javascript"> alert($('example').innerHTML);</script> ``` This javascript snippet just returns an empty string. Is there a way of getting the contents of a noscript node? p.s. I'm using prototype on this particular project.
If scripting is enabled, [the noscript element is defined as containing only text](https://html.spec.whatwg.org/multipage/scripting.html#the-noscript-element) - though it must be *parsable* text, with some restrictions on content. With that in mind, you should be able to extract the text, parse it, and then find your desired element. A rudimentary example of this follows: ``` var nos = document.getElementsByTagName("noscript")[0]; // in some browsers, contents of noscript hang around in one form or another var nosHtml = nos.textContent||nos.innerHTML; if ( nosHtml ) { var temp = document.createElement("div"); temp.innerHTML = nosHtml; // lazy man's query library: add it, find it, remove it document.body.appendChild(temp); var ex = document.getElementById("example"); document.body.removeChild(temp); alert(ex.innerHTML); } ``` Note that when I originally wrote this answer, the above failed in Google Chrome; access to noscript content appears to be somewhat better-supported these days, but it still strikes me as an edge-case that is perhaps somewhat more likely than other elements to exhibit bugs - I would avoid it if you've other options.
I'm not sure about prototype, but this works in Chrome with jQuery: ``` $('noscript').before($('noscript').text()); ```
Access Contents of <noscript> with Javascript
[ "", "javascript", "innerhtml", "noscript", "" ]
I like instantiating my WCF service clients within a `using` block as it's pretty much the standard way to use resources that implement `IDisposable`: ``` using (var client = new SomeWCFServiceClient()) { //Do something with the client } ``` But, as noted in [this MSDN article](http://msdn.microsoft.com/en-us/library/aa355056.aspx), wrapping a WCF client in a `using` block could mask any errors that result in the client being left in a faulted state (like a timeout or communication problem). Long story short, when `Dispose()` is called, the client's `Close()` method fires, but throws an error because it's in a faulted state. The original exception is then masked by the second exception. Not good. The suggested workaround in the MSDN article is to completely avoid using a `using` block, and to instead instantiate your clients and use them something like this: ``` try { ... client.Close(); } catch (CommunicationException e) { ... client.Abort(); } catch (TimeoutException e) { ... client.Abort(); } catch (Exception e) { ... client.Abort(); throw; } ``` Compared to the `using` block, I think that's ugly. And a lot of code to write each time you need a client. Luckily, I found a few other workarounds, such as this one on the (now defunct) IServiceOriented blog. You start with: ``` public delegate void UseServiceDelegate<T>(T proxy); public static class Service<T> { public static ChannelFactory<T> _channelFactory = new ChannelFactory<T>(""); public static void Use(UseServiceDelegate<T> codeBlock) { IClientChannel proxy = (IClientChannel)_channelFactory.CreateChannel(); bool success = false; try { codeBlock((T)proxy); proxy.Close(); success = true; } finally { if (!success) { proxy.Abort(); } } } } ``` Which then allows: ``` Service<IOrderService>.Use(orderService => { orderService.PlaceOrder(request); }); ``` That's not bad, but I don't think it's as expressive and easily understandable as the `using` block. 
The workaround I'm currently trying is one I first read about on [blog.davidbarret.net](http://blog.davidbarrett.net/archive/2007/11.aspx). Basically, you override the client's `Dispose()` method wherever you use it. Something like: ``` public partial class SomeWCFServiceClient : IDisposable { void IDisposable.Dispose() { if (this.State == CommunicationState.Faulted) { this.Abort(); } else { this.Close(); } } } ``` This appears to allow the `using` block again without the danger of masking a faulted-state exception. So, are there any other gotchas I have to look out for with these workarounds? Has anybody come up with anything better?
Actually, although I [blogged](http://marcgravell.blogspot.com/2008/11/dontdontuse-using.html) (see [Luke's answer](https://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue/573917#573917)), I think [this](http://web.archive.org/web/20100703123454/http://old.iserviceoriented.com/blog/post/Indisposable+-+WCF+Gotcha+1.aspx) is better than my IDisposable wrapper. Typical code: ``` Service<IOrderService>.Use(orderService=> { orderService.PlaceOrder(request); }); ``` --- (edit per comments) Since `Use` returns void, the easiest way to handle return values is via a captured variable: ``` int newOrderId = 0; // need a value for definite assignment Service<IOrderService>.Use(orderService=> { newOrderId = orderService.PlaceOrder(request); }); Console.WriteLine(newOrderId); // should be updated ```
Given a choice between the solution advocated by IServiceOriented.com and the solution advocated by [David Barret's blog](http://blog.davidbarrett.net/archive/2007/11.aspx), I prefer the simplicity offered by overriding the client's Dispose() method. This allows me to continue to use the using() statement as one would expect with a disposable object. However, as @Brian pointed out, this solution contains a race condition in that the State might not be faulted when it is checked but could be by the time Close() is called, in which case the CommunicationException still occurs. So, to get around this, I've employed a solution that mixes the best of both worlds. ``` void IDisposable.Dispose() { bool success = false; try { if (State != CommunicationState.Faulted) { Close(); success = true; } } finally { if (!success) Abort(); } } ```
What is the best workaround for the WCF client `using` block issue?
[ "", "c#", "vb.net", "wcf", "using", "wcf-client", "" ]
What is the relationship between using virtual functions and C++ inheritance mechanisms versus using templates and something like Boost concepts? It seems like there is quite an overlap of what is possible. Namely, it appears to be possible to achieve polymorphic behavior with either approach. So, when does it make sense to favor one over the other? The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfies certain concepts (Comparable, for example). So, on one hand, I want containers to behave polymorphically. On the other, I still have to use concepts if I want to correctly implement some algorithms. What is a junior developer to do?
I think of concepts as a kind of meta-interface. They categorize types by their abilities. The next C++ version supplies native concepts. I hadn't understood them until I came across C++1x's concepts and saw how they allow putting different yet unrelated types together. Imagine you have a `Range` interface. You can model that in two ways. One is a *subtype relationship*: ``` class Range { virtual Iterator * begin() = 0; virtual Iterator * end() = 0; virtual size_t size() = 0; }; ``` Of course, every class that derives from that implements the Range interface and can be used with your functions. But now you see it is limited. What about an array? It's a range too! ``` T t[N]; begin() => t end() => t + size() size() => N ``` Sadly, you cannot derive an array from that Range class implementing that interface. You need an extra method (*overloading*). And what about third-party containers? A user of your library might want to use their containers together with your functions. But he can't change the definition of their containers. Here, concepts come into play: ``` auto concept Range<typename T> { typename iterator; iterator T::begin(); iterator T::end(); size_t T::size(); } ``` Now, you say something about the supported operations of some type, which can be fulfilled if `T` has the appropriate member functions. In your library, you would write the function generically. This allows you to accept any type *so long as* it supports the required operations: ``` template<Range R> void assign(R const& r) { ... iterate from r.begin() to r.end(). } ``` It's a great kind of *substitutability*. *Any* type that adheres to the concept will fit the bill, not only those types that actively implement some interface. The next C++ Standard goes further: it defines a `Container` concept that will be satisfied by plain arrays (via something called a *concept map*, which defines how a type fits a concept) and other, *existing* standard containers.
> The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfied certain concepts (Comparable, for example).

You can actually do both with templates. You can keep your hierarchical relationship to share code, and then write the algorithms in a generic fashion, for example to communicate that your container is comparable. That's how the standard random-access/forward/output/input iterator categories are implemented:

```
// tag types for the comparator category
struct not_comparable { };
struct basic_comparable : not_comparable { };

template<typename T>
class MyVector : public BasicContainer<T> {
public:
    typedef basic_comparable comparator_kind;
};

/* Container concept */
T::comparator_kind: comparator category
```

It's a reasonably simple way to do it, actually. Now you can call a function and it will forward to the correct implementation.

```
// implementation for basic_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, basic_comparable) { ... }

// implementation for not_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, not_comparable) { ... }

// dispatcher: picks the right overload from the container's tag
template<typename Container>
void takesAdvantage(Container const& c) {
    takesAdvantage(c, typename Container::comparator_kind());
}
```

There are actually different techniques that can be used to implement that. Another way is to use `boost::enable_if` to enable or disable different implementations each time.
Yes, polymorphic behavior is possible with both mechanisms. In fact, both are *called* polymorphism too. Virtual functions give you dynamic polymorphism (because it's decided at runtime), while templates give you static polymorphism (everything is decided at compile time). And that should answer the question of which to prefer as well. Whenever possible, prefer to move work to compile time. So when you can get away with it, use templates to solve your polymorphism needs. And when that's not possible (because the exact types aren't known until runtime and you need runtime type information), fall back to dynamic polymorphism. (Of course there may be other reasons to prefer one or the other. In particular, templates require you to move a lot of code to header files, which may or may not be a problem, and compilation speed tends to suffer, which also may or may not be a problem.)
C++ concept check vs inheritance
[ "c++", "templates", "polymorphism" ]
I am currently trying to implement a Swing component, inheriting from `JLabel`, which should simply represent a label that can be oriented vertically. Beginning with this:

```
public class RotatedLabel extends JLabel {

    public enum Direction {
        HORIZONTAL, VERTICAL_UP, VERTICAL_DOWN
    }

    private Direction direction;
```

I thought it'd be a nice idea to just alter the results from `getPreferredSize()`:

```
@Override
public Dimension getPreferredSize() {
    // swap size for vertical alignments
    switch (getDirection()) {
    case VERTICAL_UP:
    case VERTICAL_DOWN:
        return new Dimension(super.getPreferredSize().height,
                super.getPreferredSize().width);
    default:
        return super.getPreferredSize();
    }
}
```

and then simply transform the `Graphics` object before I offload painting to the original `JLabel`:

```
@Override
protected void paintComponent(Graphics g) {
    Graphics2D gr = (Graphics2D) g.create();
    switch (getDirection()) {
    case VERTICAL_UP:
        gr.translate(0, getPreferredSize().getHeight());
        gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
        break;
    case VERTICAL_DOWN:
        // TODO
        break;
    default:
    }
    super.paintComponent(gr);
}
```

It seems to work, somehow, in that the text is now displayed vertically. However, placement and size are off. Actually, the width of the background (orange in this case) is identical to the height of the surrounding `JFrame`, which is ... not quite what I had in mind. Any ideas how to solve that in a proper way? Is delegating rendering to superclasses even encouraged?
I got it to work now with a little help of a coworker. Basically I now have a field that indicates whether to swap height/width which is only active for the time when the original `JLabel` does its painting. ``` private boolean needsRotate; @Override public Dimension getSize() { if (!needsRotate) { return super.getSize(); } Dimension size = super.getSize(); switch (getDirection()) { case VERTICAL_DOWN: case VERTICAL_UP: return new Dimension(size.height, size.width); default: return super.getSize(); } } @Override public int getHeight() { return getSize().height; } @Override public int getWidth() { return getSize().width; } @Override protected void paintComponent(Graphics g) { Graphics2D gr = (Graphics2D) g.create(); switch (getDirection()) { case VERTICAL_UP: gr.translate(0, getSize().getHeight()); gr.transform(AffineTransform.getQuadrantRotateInstance(-1)); break; case VERTICAL_DOWN: gr.transform(AffineTransform.getQuadrantRotateInstance(1)); gr.translate(0, -getSize().getWidth()); break; default: } needsRotate = true; super.paintComponent(gr); needsRotate = false; } ```
I don't know if it is relevant by now, but while searching for the same thing, I found a very good implementation on the web: <http://macdevcenter.com/pub/a/mac/2002/03/22/vertical_text.html> Check it out; it is an implementation over the TabbedPane with vertical text. See if it suits your purposes.
Rotate a Swing JLabel
[ "java", "swing", "jlabel" ]
Are there particular instances where I should (or shouldn't?) be using "using" blocks: ``` using(SomeType t = new SomeType()){ ... } ```
When the `SomeType` class implements [`IDisposable`](http://msdn.microsoft.com/en-us/library/system.idisposable.aspx).
Some objects need some action to be taken when you have finished with them. Usually this is because the object uses some kind of resource that needs to be disposed of. For example, if you have a file object of class File, and this object opens a file from the file system, the file in the file system will need to be closed again. If you just left the file object and forgot to call file.Close(), it wouldn't be cleaned up until the garbage collector (GC) ran and worked out that nothing was still using the file object. When the garbage collector runs is up to the Common Language Runtime (CLR) to decide. If the GC doesn't run for quite a while after you have finished with the file, the file could remain open for a long time. This can pose a big problem if there are many file objects, or if something wants to open a file but can't because a file object you left around is still holding it open. To solve this problem, C# has the IDisposable interface, which has a single method called Dispose. Classes that require cleanup implement this Dispose method, which gives you a standard way to clean up any objects that use resources. There are a lot of classes on which Dispose needs to be called. The problem with this is that code gets littered with calls to Dispose, and they are tricky to follow because the place where you new'ed the object and the place where you call Dispose to clean it up are different. So you had to look around the code a lot and be very careful to check that there were calls to Dispose in the right places. To solve this problem, C# introduced the 'using' keyword. You can put a 'using' statement around where you new an object, and this ensures Dispose will be called on it for you. It guarantees that Dispose will be called whatever happens... even if an exception is thrown within the body of the using statement. So, you should use 'using' when you want to be sure an object that allocates resources will be cleaned up.
---

'using' can only be applied to objects declared locally, i.e. inside a method. It doesn't work for objects that are declared as members of a class. For them, you have to call Dispose yourself. You may have to implement Dispose in your class so that it can call Dispose on any member objects it has that require it.

---

Common objects that need using called on them are: files, database connections, and graphics objects such as Pen and Brush.

---

Sometimes it is also used when you want two operations to happen together. For example, if you want to write a log statement when a block of code is entered and when it exits, you could write a log class that you use like this:

```
using( Log log = new Log("Doing stuff") )
{
   // Stuff
}
```

The constructor for the log class could be made to write out the message, and the Dispose method could also write it out. Implement the finalizer (~Log) to assert if the Dispose method doesn't get called, to ensure the 'using' is remembered around the 'new Log'.
When should I use "using" blocks in C#?
[ "c#", ".net", "using" ]
Are there any easy-to-use Python components that could be used in a GUI? It would be great to have something like JSyntaxPane for Python. I would like to know of Python-only versions (not interested in Jython).
If you're using gtk+, there's a binding of gtksourceview for Python in [gnome-python-extras](http://ftp.gnome.org/pub/GNOME/sources/gnome-python-extras/2.25/). It seems to work well in my experience. The downside: the documentation is less than perfect. There's also a binding of [QScintilla](http://www.riverbankcomputing.co.uk/software/qscintilla/intro) for Python if PyQt is your thing.
Other than pygments? <http://pygments.org/>
do you know of any python component(s) for syntax highlighting?
[ "python", "components", "syntax-highlighting" ]
C++ newbie question. Please verify I'm doing it right. I have a global application class spawning its little kids, and I need to give the kids access to some of the application's facilities. So I decided to pass them to the children by reference. I tested the idea as shown below. It seems to work fine. I just wanted to make sure I'm not doing something dangerous. Might there be any pitfalls I overlooked? *Dad creates children and gives them his car keys:*

```
#include <iostream>
using namespace std;

class CCarKeys {
public:
    CCarKeys(const string& Name) : _Name(Name) {}
    string _Name;
};

class CChild {
public:
    CChild(CCarKeys& CarKeys) : _Name("Child"), _CarKeys(CarKeys) {}
    string _Name;
    CCarKeys& _CarKeys;
    void TestHasKeys() {cout << "I got " << _CarKeys._Name << endl;}
};

class CDad {
public:
    CDad() : _Name("Dad"), _HondaCarKeys("Honda keys"), _ChevyCarKeys("Chevy keys") {}
    string _Name;
    CCarKeys _HondaCarKeys;
    CCarKeys _ChevyCarKeys;
    CChild *_Boy;
    CChild *_Girl;
    void MakeBoy() {_Boy = new CChild(_HondaCarKeys);}
    void MakeGirl() {_Girl = new CChild(_ChevyCarKeys);}
};

int main () {
    CDad Dad;
    Dad.MakeBoy();
    Dad.MakeGirl();
    Dad._Boy->TestHasKeys();
    Dad._Girl->TestHasKeys();
}
```
Looks good to me (if keys is all they need). They might need some other services from Dad which are requested later - like:

```
Wallet += MyDad.GasMoney(REQUEST_MAX_AND_PROMISE_TO_BE_HOME_BY_10PM);
```

But they don't have a reference to Dad, so they won't be able to do that. So I would have the CChild constructor take a `this` reference, too.

```
class ICashProvider {
public:
    virtual money Request(IPerson, CashRequestFlags);
};

class IChauffeur {
public:
    virtual void Drive(IPerson[]);
};
```

etc. And then the `CChild` constructor would need to take `ICashProvider` and `IChauffeur`, as would `CWife` and `CGirlfriend` (and `CBoyfriend`, perhaps). At this point, I think you might realize that this level of granularity is pointless in the face of `Dad`'s responsibilities, and you just give everyone `this` and have `Dad` authenticate requests by forcing callers to send their own `this` on some methods, so you don't have `Dad` performing incest or changing the `CWife`'s diaper.
Passing by reference is much the same as passing by pointer, except for the syntax and the fact that you cannot manipulate the reference itself (for example, reseat it) the way you can a pointer. Your code is OK.
Giving to child access to parent's member by reference - is it OK?
[ "c++", "class", "class-design", "pass-by-reference" ]
I have what amounts to a multi-dimensional array.

```
int[][][] MyValues;
```

What I want is to access the indexes via a strongly typed equivalent, such as an enumeration. I'm aware that you can get the enumeration values from the Enum type, but it's a bit long-winded for my tastes. I'd rather have a way to strongly type the indexes. For example:

```
int CarNumber = MyValues[Racetrack.Daytona][Race.Daytona500][Driver.JGordon];
```

This would, by virtue of it being enum-like, prevent any out-of-bounds exceptions from being thrown, plus it gives all the indexes a nice human-readable meaning. I've implemented this using a dictionary approach, but it seems kind of heavy-handed:

```
Dictionary<Racetrack, Dictionary<Race, Dictionary<Driver, int>>> myValues =
    new Dictionary<Racetrack, Dictionary<Race, Dictionary<Driver, int>>>();
```

which I can then access via enums, but I don't really like this approach. It seems "ugly". **I'm looking for some alternate methods to represent what is essentially a multi-dimensional array while using human-readable indexers, while maintaining type safety (can't accidentally use a Driver for a Race, for instance, so simply using consts is not a good approach).** Any suggestions? This will be a compile-time array (the example above is not real, just an illustration) so I don't have to worry about inserts or deletes or other manipulations of the array. It will stay immutable, in values, size and layout. Using a static class with const values is not a good approach either, since it doesn't enforce that only the set of values defined can be passed as indexers.
It sounds to me that you want to use [indexers](http://www.csharp-station.com/Tutorials/Lesson11.aspx) rather than an array. Assuming the following enums (Formula 1 based!):

```
public enum Track {
    Spielberg, Adelaide, Casablanca, Liverpool, Melbourne, Berlin, Sakhir,
}

public enum Constructor {
    BMW, Ferrari, McLaren, Toyota, Williams
}

public enum Driver {
    Hamilton, Kovalainen, Raikkonen, Nakajima, Glock
}
```

the basic structure is as follows:

```
public class Race {
    int Year { get; set; }
    Track Track { get; set; }
    Driver[] Placings { get; set; }

    public int this[Driver driver] { } // placing by driver
}

public class Results {
    YearResults this[int index] { }
    DriverResults this[Driver index] { }
    TrackResults this[Track index] { }
    ConstructorResults this[Constructor index] { }
}

public class YearResults {
    YearDriverResults this[Driver index] { }
}
```

This of course is a partial implementation (the empty indexer bodies are placeholders), but you can do some pretty cool things with indexers this way. You can access your information with any combination of values **in any order** (assuming you set up all the intermediate classes). It's wordier than a multidimensional array or a tuple-keyed Dictionary, but I think it will give you far more elegant code.
How about using a triple `<Racetrack,Race,Driver>` as the key (define your own class) in the `Dictionary`? If you really need to use an array, I don't think you can do better than wrapping it in a custom class that allows access only using `Racetrack`, `Race`, `Driver` enums.
Best hybrid approach to a multi-dimensional array with strong typed indexing
[ "c#", "data-structures", "collections" ]
How can I insert a blog (not created yet) into an already existing 'static' webpage? The webpage is written mostly in PHP. I'm considering using something like WordPress.org (the self-hosted install version) and using it to update the website's news page. From what I've read, it sounds like I would need to do a lot of theme tweaking to get WordPress to display correctly with our website's template. This sounds a bit daunting to me.
I did the exact same thing on [my site](http://echoreply.us). I had about 20 static pages, wanted to add a blog, and wanted to add content from the WP pages to the static pages. It was not hard to find a theme that (almost) matched my static pages. Everything outside of /tech/ is a static page. You can also get a very minimalistic theme and then make it match your design. It's one big heaping cut-and-paste of CSS, re-labeling elements to match what WP wants, then a little tweaking. I've done it in under 8 hours on other sites. Read up on using [the Wordpress loop](http://codex.wordpress.org/The_Loop). This is so much easier than you think it's going to be, especially if your stuff is already done in PHP.

**Edit:** Here's a snippet of the code that I use in my static pages, which allows me to then use all of the other WP functions in the existing code:

```
<?php
if ( empty( $wp ) ) require_once( "tech/wp-config.php" );
wp();
?>
```

Then, getting a list of recent posts is as easy as:

```
<?php get_archives( 'postbypost', 8 ); ?>
```

Just look out for using deprecated functions; I've still got a few left to clean out from when I integrated WP 2 years ago.
Greg is right, an iframe is an easy way to do this. However, I've run into situations where the iframe will throw off session variables in IE; I'm not sure if this impacts WordPress or not. If you're going to create a page to house a WordPress install in an iframe, why not just have the link that would show the iframe page point instead to a separate sub-domain where the WordPress install resides? My guess is you're not wanting to do a lot of theme development if you're considering throwing WordPress into an iframe. If this is the case you have a few choices: (a) google for a blank WordPress theme, (b) develop a theme that looks like your current site so that when a user clicks on a link, they won't know they're on a different platform, or (c) don't hide anything and let the WordPress install show up with a different theme. Consider American Express and their OPEN Forum site (<http://www.openforum.com/>), with their blog at <http://blogs.openforum.com/> - same header, slightly different body and layout. An issue with going the iframe route is that a WordPress site will grow in height, whereas you have to set the height of an iframe. You can work around this by setting the height to something very large, but then your page will be very long, or you can control the number of posts that show up via the WordPress admin. My suggestion: scrap the iframe, install WordPress on a sub-domain, and then link to that sub-domain instead of linking to your iframe page.
Add a blog to an existing webpage
[ "php", "wordpress" ]
Namely, how would you tell an archive (jar/rar/etc.) file from a textual (xml/txt, encoding-independent) one?
There's no guaranteed way, but here are a few possibilities:

1. Look for a header on the file. Unfortunately, headers are file-specific, so while you might be able to find out that it's a RAR file, you won't get the more generic answer of whether it's text or binary.
2. Count the number of character vs. non-character bytes. Text files will be mostly alphabetic characters, while binary files - especially compressed ones like rar, zip, and such - will tend to have byte values more evenly distributed.
3. Look for a regularly repeating pattern of newlines.
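The second heuristic can be sketched like this (the class and method names are invented, and the 95% cutoff is an arbitrary assumption, not a standard):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TextSniffer {

    // Heuristic: treat a byte run as text if almost every byte is a
    // printable ASCII character or common whitespace.
    public static boolean looksLikeText(byte[] data) {
        if (data.length == 0) return true;
        int printable = 0;
        for (byte b : data) {
            int c = b & 0xff;
            if (c == '\t' || c == '\n' || c == '\r'
                    || (c >= 0x20 && c < 0x7f)) {
                printable++;
            }
        }
        // 95% is an arbitrary cutoff; tune it for your data. Note this
        // misclassifies non-ASCII encodings such as UTF-16 as binary.
        return printable >= data.length * 0.95;
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            byte[] head = Files.readAllBytes(Paths.get(args[0]));
            System.out.println(looksLikeText(head) ? "text" : "binary");
        }
    }
}
```

In practice you would only sniff the first few kilobytes of the file rather than reading it all.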
Using Java 7 Files class <http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#probeContentType(java.nio.file.Path)> ``` boolean isBinaryFile(File f) throws IOException { String type = Files.probeContentType(f.toPath()); if (type == null) { //type couldn't be determined, assume binary return true; } else if (type.startsWith("text")) { return false; } else { //type isn't text return true; } } ```
Determining binary/text file type in Java?
[ "java", "file", "text", "binary" ]
I usually prefer to code with a black background and white/coloured text, but I had never taken the time to change my syntax highlighting in Visual Studio. Yesterday, when I finally got around to it, one of my changes was to change User Types and User Types (Value Types) to different colors. Without realizing it, I had been using a struct type to pass and return data from methods more than I would have liked. This change in syntax highlighting made it very apparent what was going on. So it made me wonder if there were other settings which could provide similar help. I also usually set my documentation and comment colours to something more washed out and passive so that actual code jumps out at you more, which makes quickly skimming through code faster. Do you have any other tips like this which can help spot issues or make things more readable? Note: (I've seen [this post](https://stackoverflow.com/questions/174121/syntax-highlighting-what-colors-do-you-like), but I'm looking more for tips which are functional and provide help rather than purely cosmetic preferences.)
I make strings look horrible. Yellow background. Bold. Red foreground. To remind me that hardcoding strings is generally bad and to try as much as possible to minimize it!
* For readability - I recommend a dark (but not black) background and light (but not white) text. The higher contrast is easy on the eyes, but too much contrast gives me (personally) a headache over time. I also 100% agree with your comment about using a washed-out color for docs and comments.
* For code understanding - Definitely use distinct (even if only slightly different) colors for user types, delegates, and value types. This makes them pop, and really helps when you're trying to understand other people's code quickly. Also, I second the comment about making string literals stand out. I don't necessarily use horrible colors, but I give them a color separate from all of my other colors so they are very noticeable. I also like having numbers be a separate color. Many times, numbers have some of the same issues as string literals, and having them colored separately makes them stand out for me and helps clue me into places where I should replace a numeric literal with a constant, property, etc.
* For refactoring - I've found that it helps me to have a separate theme available where the colors are all exaggerated (bright blue instead of muted blue for interfaces, for example) for refactoring. I use the same basic color scheme I normally use, but with "brighter" colors for each of the types. (I don't like changing color schemes - my brain is kind of wired to my standard set of colors - switching defeats the purpose for me.) Having brighter versions makes everything "pop" a bit more, though, so it's nice for refactoring or for trying to understand somebody else's code.
Know any C# syntax-highlighting tricks?
[ "c#", "visual-studio", "syntax-highlighting" ]
I am working on an idea wherein I have to identify lines in a JPG or PNG file. The PNG file contains one graph of a single value - one x,y combination. For example, the graph looks like Y = mx+c. I would like to identify the line in the graph - if I can identify the position of the pixel in the frame, I believe I can get back the values (x,y) which were used to plot the graph. The assumption here is - I know the scale, i.e. 1 pixel = what unit of Y? Can somebody please help me to write code which will identify the pixels of a specific color in one PNG file?

EDIT

Let's take an example to make it clear. Let's say I have a set of data values X and Y like this -

```
X = 1, Y = 10
X = 2, Y = 20
X = 3, Y = 30
X = 4, Y = 40
X = 5, Y = 50
X = 6, Y = 60
```

In this case, if I use a jfreechart type of charting tool and make a chart, it looks like a straight line. So the input is the dataset and the output is the .PNG file (using jchart or jfreechart) containing this line graph of the Y values. The question is - if we reverse the flow, can we develop a program which will take the PNG file (that contains the graph) and give me back the dataset? The context of the problem is -- I want to store the PNG files in my application and not the raw dataset. But given a PNG file, I should be able to get back my dataset.
I'm a bit confused as to whether your problem is simply *determining the colour* of pixels in an image, or if the problem is the *mathematics* of what you're trying to do. For the former, do something such as the following: ``` BufferedImage bimg = ImageIO.read(new File("whatever.png")); // get the colour of the pixel at position (x, y) int col = bimg.getRGB(x, y); // decode red, green and blue components of colour if necessary int r = (col >> 16) & 0xff; int g = (col >> 8) & 0xff; int b = col & 0xff; ``` If from the graph you just want to get back the *dataset* (i.e. not derive an equation from that data), then you essentially loop through each X position and find the Y position where there's a pixel of the colour that the graph plotting program uses. If the graph has axes, antialiasing etc, then the task will be more complex. The task of deriving an *equation* from the data is potentially much more complex, but you can start by checking for certain suspected formulae such as *y = mx + c* as you mention. For example, you can loop through checking the difference between each Y position for the last; if that difference is always the same, then you've got a straight line graph (and at that point, deriving the formula should be trivial). For testing for other equations, it helps to know a bit of calculus. Then, a starting point is to see if the *differences in the differences* match the derivative for the equation in question. (Just as an example, if the equation is *y = ax^2 + c*, then for every increase in X, the increase in Y will itself increase by *2a*.)
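The "constant differences" check described above can be sketched in a few lines (class name invented; this assumes the y-values have already been extracted from the image at equally spaced x positions):

```java
public class LineCheck {

    // A sampled graph is a straight line iff the consecutive
    // y-differences are all equal.
    public static boolean isStraight(int[] ys) {
        if (ys.length < 3) return true;   // too few points to disagree
        int d = ys[1] - ys[0];
        for (int i = 2; i < ys.length; i++) {
            if (ys[i] - ys[i - 1] != d) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isStraight(new int[]{10, 20, 30, 40}));
        System.out.println(isStraight(new int[]{1, 2, 4, 8}));
    }
}
```

With antialiased plots you would compare against a small tolerance instead of exact equality.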
Generally, on a computer monitor you don't have a central `(0,0)` point, as that point is defined to be the top-left corner of the screen. So if we are looking at a function `f(x)=ax+b`, then the `b` parameter is defined to be the `y` value of the function at `x=0`, meaning the left border of the screen. So it's important to define exactly what you are offsetting from. To find the slope, just take some point on the screen where you know the function passes, go `x` pixels to the left or the right, find the `y` height delta, and `y/x` is the function slope, or parameter `a` in the aforementioned function.
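As a sketch of that arithmetic (names invented; it assumes you already have two pixel positions on the line and know the units-per-pixel scale, and that screen y grows downward, so the sign of the pixel delta is flipped):

```java
public class SlopeFromPixels {

    // Convert a pixel delta between two sampled points on the line
    // into a slope in data units.
    public static double slope(int x0, int y0, int x1, int y1,
                               double unitsPerPixelX, double unitsPerPixelY) {
        double dy = -(y1 - y0) * unitsPerPixelY; // flip: screen y is inverted
        double dx =  (x1 - x0) * unitsPerPixelX;
        return dy / dx;
    }

    public static void main(String[] args) {
        // Line rises 30 pixels over 60 pixels, 1 unit per pixel on both axes.
        System.out.println(slope(10, 100, 70, 70, 1.0, 1.0)); // 0.5
    }
}
```

The intercept then falls out as `y0` (in data units) minus `slope * x0`, relative to whichever pixel you chose as the data origin.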
Use Java to identify lines in a graph
[ "java", "image" ]
My dilemma: I'm passing my function a string that I need to then perform numerous regex manipulations on. The logic is if there's a match in the first regex, do one thing. If no match, check for a match with the second and do something else, if not check the third, and so forth. I could do something like this: ``` if re.match('regex1', string): match = re.match('regex1', string) # Manipulate match.group(n) and return elif re.match('regex2', string): match = re.match('regex2', string) # Do second manipulation [etc.] ``` However, this feels unnecessarily verbose, and usually when that's the case it means there's a better way that I'm either overlooking or don't yet know about. Does anyone have a suggestion for a better way to do this (better from a code-appearance standpoint, a memory usage standpoint, or both)?
Similar question from back in September: [How do you translate this regular-expression idiom from Perl into Python?](https://stackoverflow.com/questions/122277)

Using global variables in a module may not be the best way to do it, but converting it into a class:

```
import re

class Re(object):
  def __init__(self):
    self.last_match = None
  def match(self,pattern,text):
    self.last_match = re.match(pattern,text)
    return self.last_match
  def search(self,pattern,text):
    self.last_match = re.search(pattern,text)
    return self.last_match

gre = Re()
if gre.match(r'foo',text):
  # do something with gre.last_match
elif gre.match(r'bar',text):
  # do something with gre.last_match
else:
  # do something else
```
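On Python 3.8+, the assignment expression (`:=`) gets the same effect without a helper class: the match object is bound and tested in one step, so each pattern runs only once (the patterns below are made up for illustration):

```python
import re

def classify(text):
    # := binds the match object and tests it in the same expression,
    # so no pattern is evaluated twice.
    if m := re.match(r'(\d+)-(\d+)', text):
        return ('range', m.group(1), m.group(2))
    elif m := re.match(r'(\d+)', text):
        return ('number', m.group(1))
    else:
        return ('other', text)
```

For example, `classify('12-34')` takes the first branch and `classify('7')` falls through to the second.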
Generally speaking, in these sorts of situations, you want to make the code "data driven". That is, put the important information in a container, and loop through it. In your case, the important information is (string, function) pairs. ``` import re def fun1(): print('fun1') def fun2(): print('fun2') def fun3(): print('fun3') regex_handlers = [ (r'regex1', fun1), (r'regex2', fun2), (r'regex3', fun3) ] def example(string): for regex, fun in regex_handlers: if re.match(regex, string): fun() # call the function break example('regex2') ```
How to concisely cascade through multiple regex statements in Python
[ "python", "regex", "coding-style" ]
To clarify the title question, I have code such as: ``` <img src='gallery/main.php?g2_view=core.DownloadItem&g2_itemId=8161&g2_serialNumber=2&g2_GALLERYSID=5b24df90ee45f0e01795a3c01a1d634b'> ``` with the file actually residing in the file system under my webroot. Is there any way in PHP to retrieve the images real path being served such as: ``` <img src='images/gallery/album1/file1.jpg'> ``` Eg someFunction(*longURL*) ==> images/gallery/album1/file1.jpg Thank you,
I'm quite sure Gallery2 (which you are apparently using) has an internal method for this - at least it does this resolving somewhere. You'd have to find that piece of code and could either use it directly (if it's e.g. a static method) or extract it from there. You may want to ask on the Gallery2 forums - it may even have been answered there already.

Using Google Code Search, I've found that main.php appears to have the code to do this already:

```
$path = GalleryDataCache::getCachePath(
    array('type' => 'fast-download', 'itemId' => $itemId));
```

`$path` appears to be a file which, when included, may provide the vars you'll need.
Given that URL, it's quite easy to see that it's using `g2_itemId=8161`, which means it's probably getting the path of the image from a database. With that logic in mind, you can query the database and get the path you want programmatically.

```
function someFunction($id) {
    // select path_name from g2_items where item_id = $id;
}

someFunction("8161");
```
How can I programmatically obtain an image's REAL URL/Path?
[ "php", "url", "image" ]
This post is a follow-up of this answered question: [Best method for storing a list of user IDs](https://stackoverflow.com/questions/620645/best-method-for-storing-a-list-of-user-ids). I took cletus and Mehrdad Afshari's epic advice of using a normalized database approach. Are the following tables properly set up for proper optimization? I'm kind of new to MySQL efficiency, so I want to make sure this is effective. Also, when it comes to finding the average rating for a game and the total number of votes should I use the following two queries, respectively? ``` SELECT avg(vote) FROM votes WHERE uid = $uid AND gid = $gid; SELECT count(uid) FROM votes WHERE uid = $uid AND gid = $gid; CREATE TABLE IF NOT EXISTS `games` ( `id` int(8) NOT NULL auto_increment, `title` varchar(50) NOT NULL, PRIMARY KEY (`id`) ) AUTO_INCREMENT=1 ; CREATE TABLE IF NOT EXISTS `users` ( `id` int(8) NOT NULL auto_increment, `username` varchar(20) NOT NULL, PRIMARY KEY (`id`) ) AUTO_INCREMENT=1 ; CREATE TABLE IF NOT EXISTS `votes` ( `uid` int(8) NOT NULL, `gid` int(8) NOT NULL, `vote` int(1) NOT NULL, KEY `uid` (`uid`,`gid`) ) ; ```
average votes for a game: `SELECT avg(vote) FROM votes WHERE gid = $gid;` number of votes for a game: `SELECT count(uid) FROM votes WHERE gid = $gid;` As you will not have any user or game ids smaller then `0` you could make them unsigned integers (`int(8) unsigned NOT NULL`). If you want to enforce that a user can only make a single vote for a game, then create a primary key over `uid` and `gid` in the `votes` table instead of just a normal index. ``` CREATE TABLE IF NOT EXISTS `votes` ( `uid` int(8) unsigned NOT NULL, `gid` int(8) unsigned NOT NULL, `vote` int(1) NOT NULL, PRIMARY KEY (`gid`, `uid`) ) ; ``` The order of the primary key's fields (first `gid`, then `uid`) is important so the index is sorted by `gid` first. That makes the index especially useful for selects with a given `gid`. If you want to select all the votes a given user has made then add another index with just `uid`. I would recommend InnoDB for storage engine because especially in high load settings the table locks will kill your performance. For read performance you can implement a caching system using APC, Memcached or others.
Looks good. I would have used `users_id` & `games_id` instead of `gid` and `uid`, which sound like global id and unique id.
Efficient MySQL table structure for rating system
[ "php", "mysql", "optimization", "performance", "project-planning" ]
I have a c# windows form app in which we did a rapid prototype for the client. The extent of my c# experience has been with ASP.NET/C#, so it's a bit different when it comes to the actual structure of the software. Time is a factor in this project since it is a school project and we have a little under 2 months to be done. *The software will be mainly database driven.*

**Current Project Design**

Currently there is 1 main windows form and a bunch of class files. One idea we had is that since most of the actual code of the program will be in classes, should we just split the windows form up into a bunch of tiny windows forms with one main driver, so to speak? Or just leave it as one large form and try to have as little code as possible in the main win form?

**What are your suggestions on the design of a c# database driven windows form app?**

Please give pros and cons for each.
Personally I favor an MDI (Multiple Document Interface) over a single form because a single form can become cluttered without careful planning. Also most clients like the idea of an MDI for database driven apps because it reminds them of Microsoft Access. I also like the idea of breaking everything into classes. The rule of thumb that I use for deciding whether or not to make a class is "Will I ever in my programming career need to do X again?". In this case X is what you would put in that class (e.g. Determine Leap Year). If you think that your form won't become cluttered then totally go for a single form. Hope this helps you decide.
While I agree that you almost certainly want more than one form, I'm not at all a fan of MDI. Rather, I prefer to think about the common tasks that users will perform and design the application around the user's expected work-flow, such that there is a natural progression from piece to piece. Of course, that also means providing a way for power users to get past this expected flow when they really want to.
Design of a C# Windows Form App
[ "c#", "winforms" ]
I have this code: ``` public IEnumerable<int> Iterator { get { if (false) yield return -1; } } ``` It is fairly ugly, but when you try to refactor it to: ``` public IEnumerable<int> Iterator { get { return null; } } ``` The following code breaks: ``` foreach (var item in obj.Iterator) { } ``` How would you go about cleaning this up?
The .NET framework already has a method to do exactly this, by the way (making Jared's code redundant): [`System.Linq.Enumerable.Empty<T>`](http://msdn.microsoft.com/en-us/library/bb341042.aspx).
``` public IEnumerable<int> Iterator { get { yield break; } } ```
Implementing empty iterators
[ "c#", "refactoring", "iterator" ]
I have a string that is equal to a date, represented as the number of milliseconds since the Unix epoch. I am trying to output it in d-m-Y format. The string I was given was "1227643821310", and I am told that the result should be equal to *2-12-2008*, but I keep getting a result of *25-11-2008*.

My code is as follows:

```
$mil = 1227643821310;
$seconds = $mil / 1000;

echo date("d-m-Y", $seconds);
```

Any ideas as to why this might be?
You are already doing it right, 1227643821 is simply not 02-12-2008, it is indeed 25-11-2008.
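The epoch arithmetic is language-independent, so the claim is easy to cross-check outside PHP. A quick sketch in Python (note that PHP's `date()` uses the server's local timezone, so very near midnight UTC the displayed day can shift by one):

```python
import datetime

ms = 1227643821310
dt = datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)
print(dt.strftime("%d-%m-%Y"))  # 25-11-2008, not 2-12-2008
```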
I just added H:i:s like in the below example: ``` $mil = 1227643821310; $seconds = $mil / 1000; echo date("d/m/Y H:i:s", $seconds); ```
php: convert milliseconds to date
[ "php", "datetime", "time" ]
If you don't control the invocation of your code, but would like to know if it's being executed by an event handler, is this possible?
You could maybe do this if you had a way to look up the stack, but that's not a very good way to do it. This article shows how to access the stack. [A Javascript stacktrace in any browser](http://eriwen.com/javascript/js-stack-trace/)
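Building on the stack-trace idea: engines that expose the non-standard `Error.prototype.stack` let you at least check whether a given function name appears in the current call chain. This is a fragile sketch (frame formats differ between browsers, and anonymous or minified handlers won't have useful names), not a reliable detection mechanism:

```javascript
// Returns true if a function with the given name appears on the call stack.
function stackContains(fnName) {
    var stack = new Error().stack || "";
    return stack.indexOf(fnName) !== -1;
}

// Stands in for a real event handler wired up with addEventListener:
function onClickHandler() {
    return stackContains("onClickHandler");
}

console.log(onClickHandler());                // true: called via the handler
console.log(stackContains("onClickHandler")); // false: called directly
```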
JavaScript has an [event](http://www.w3schools.com/htmldom/dom_obj_event.asp) object that can help you identify the source of the event.
Is it possible to detect that your javascript code is executing because of an event?
[ "javascript", "events" ]
Instead of me spending the next day (or year) reading about them all, are there any suggestions for templating engines that I should look into in more detail?
Best suggestion: try them all. It won't take long. My favourite: [Jinja2](https://jinja.palletsprojects.com) (by a mile) It has decent syntax, can trace errors through it, and is sandboxable.
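While trying the engines out, it can help to have a baseline for comparison: Python's standard library ships a deliberately minimal `string.Template` (substitution only, no logic), which is roughly the floor the third-party engines improve on. A quick sketch:

```python
from string import Template

t = Template("Hello, $name! You have $count new messages.")
print(t.substitute(name="world", count=3))  # Hello, world! You have 3 new messages.

# safe_substitute leaves unknown placeholders intact instead of raising KeyError
print(Template("Hi $who").safe_substitute())  # Hi $who
```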
If you're doing code generation, you might find [Cog](http://nedbatchelder.com/code/cog/) useful - it's specifically for code generation, rather than being a generally applicable templating language.
Best Python templating library to facilitate code generation
[ "python", "templates", "code-generation" ]
I have a class (Foo) which lazy loads a property named (Bar). What is your preferred way to protect against mistaken use (due to intellisense or inexperienced staff) of the uninitialized backing field? I can think of 3 options: ``` class Foo { // option 1 - Easy to use this.bar by mistake. string bar; string Bar { get { // logic to lazy load bar return bar; } } // option 2 - Harder to use this._bar by mistake. It is more obscure. string _bar2; string Bar2 { get { // logic to lazy load bar2 return _bar2; } } //option 3 - Very hard to use the backing field by mistake. class BackingFields { public string bar; } BackingFields fields = new BackingFields(); string Bar3 { get { // logic to lazy load bar return fields.bar; } } } ``` Keep in mind, the only place I want people mucking around with the backing field bar is in setter and getter of the property. Everywhere else in the class they should always use this.Bar **Update** I am currently using [the following Lazy implementation](http://code.google.com/p/videobrowser/source/browse/branches/big_refactor/MediaBrowser/Library/Util/Lazy.cs) (not for all properties with backing fields, but for select ones that require lazy loading, synchronization and notification). It could be extended to support futures as well (force evaluation in a separate thread in a later time) **Note** My implementation locks on read, cause it supports an external set. Also, I would like to mention that I think this is a language limitation which can be overcome in [Ruby for example](http://innig.net/software/ruby/closures-in-ruby.rb). You can [implement lazy](http://moonbase.rydia.net/software/lazy.rb/0.9.5/doc/) in this way. ``` x = lazy do puts "<<< Evaluating lazy value >>>" "lazy value" end puts x # <<< Evaluating lazy value >>> # lazy value ```
Option 5 [`Lazy<T>`](http://msdn.microsoft.com/en-us/vcsharp/bb870976.aspx) works quite nicely in several situations, though option 1 should really be just fine for most projects so long as the developers aren't idiots. Adding [EditorBrowsable(EditorBrowsableState.Never)] to the field won't help if it is private, since this logic only kicks in for intellisense generated from metadata rather than the current code (current project and anything done via project references rather than dlls). Note: `Lazy<T>` is not thread safe (this is good, there's no point locking if you don't need to); if you require thread safety, either use one of the [thread safe ones from Joe Duffy](http://www.bluebytesoftware.com/blog/2007/06/09/ALazyInitializationPrimitiveForNET.aspx) or the [Parallel Extensions CTP](http://www.microsoft.com/downloads/details.aspx?FamilyId=348F73FD-593D-4B3C-B055-694C50D2B0F3&displaylang=en)
How about use of `ObsoleteAttribute` and `#pragma` - hard to miss it then! ``` void Test1() { _prop = ""; // warning given } public string Prop { #pragma warning disable 0618 get { return _prop; } set { _prop = value; } #pragma warning restore 0618 } [Obsolete("This is the backing field for lazy data; do not use!!")] private string _prop; void Test2() { _prop = ""; // warning given } ```
Best way to protect a backing field from mistaken use in C#
[ "c#", "design-guidelines" ]
For some reason [FXCop seems to think](http://msdn.microsoft.com/en-au/library/ms182269.aspx#) I should be calling GC.SuppressFinalize in Dispose, regardless of whether I have a finalizer or not. Am I missing something? Is there a reason to call GC.SuppressFinalize on objects that have no finalizer defined?
There is always a finalizer in IL - System.Object.Finalize() exists in every class, so if you make a custom class, it has a finalizer you want to suppress. That being said, not all objects are put on the finalization queue, so technically you only need to suppress finalization if you implement your own finalizer. If you're implementing `IDisposable` to wrap unmanaged resources, you should include a finalizer, and you should prevent this from running, since in theory you're doing the cleanup already when `Dispose` is called.
There's no need to call `GC.SuppressFinalize(this)` in Dispose, unless: * You are the base class that implements virtual Dispose methods intended for overriding (again, it might not be your responsibility even here, but you might want to do it in that case) * You have a finalizer yourself. Technically, every class in .NET has a finalizer, but if the only finalizer present is the one in `Object`, then the object is not considered to need finalizing and isn't put on the finalization list upon GC I would say, assuming you don't have any of the above cases, that you can safely ignore that message.
Should GC.SuppressFinalize be called on objects that do not have a finalizer?
[ "c#", "fxcop", "finalizer" ]
I have a method, that perfectly works in Firefox, with which I can determine the name of an instance of a particular javascript object (please don't ask why I need it...). For example:

```
var temp = new String("hello!");
var theName = getVarName(temp); //returns "temp"
```

This method uses "window.hasOwnProperty()", which doesn't work in Internet Explorer: any suggestions?
> I have a method, that perfectly works in Firefox, with which I can determine the name of an instance of a particular javascript object

I don't think you do, because that's not possible in JavaScript. JS is a call-by-value language; when you write:

```
var temp= 'hello';
getVarName(temp);
```

that's exactly the same as saying:

```
getVarName('hello');
```

At which point the reference to ‘temp’ as a variable is lost. What I'm guessing your getVarName function does is basically this:

```
function getVarName(value) {
    for (var name in window) {
        if (window[name]===value)
            return name;
    }
}
```

This will work on IE and other browsers without Object.hasOwnProperty(); it will simply return the name of any global variable that matches the argument. The hasOwnProperty() call can be added to this function to refine it a little by only allowing direct properties of window (which act as global variables, including the ones you set explicitly), and not of any of its prototypes. I'm guessing that's what your version of the function is doing, but in practice it has very little effect since almost nothing inherits into ‘window’ by prototype.

You're confusing things a little bit by boxing your 'hello' in an explicit String object (which is very unusual and rarely a good idea), which makes it possible to have two different 'hello' objects that are distinguishable using the === identity comparator, so this will work:

```
var a= new String('hello!');
var b= new String('hello!');
getVarName(a); // 'a'
getVarName(b); // 'b' - distinguishable object from a
```

But that still doesn't stop you from doing:

```
var a= new String('hello!');
var b= a;
getVarName(a); // 'a' or 'b', depending on implementation
getVarName(b); // the same 'a' or 'b' as in the previous call
```

So, whilst you can fairly harmlessly lose the hasOwnProperty() call as above, what you're doing can't really work properly and you should probably look at a better way of achieving whatever it is you're up to.
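For illustration, here is the lookup idea as a self-contained sketch over a plain object standing in for `window`, using the portable `Object.prototype.hasOwnProperty.call` form (which also works on objects that lack their own `hasOwnProperty` method):

```javascript
function getVarName(scope, value) {
    for (var name in scope) {
        if (Object.prototype.hasOwnProperty.call(scope, name) &&
                scope[name] === value) {
            return name;
        }
    }
    return null;
}

// A plain object standing in for the global window object:
var scope = { greeting: new String("hello!"), other: new String("hello!") };

console.log(getVarName(scope, scope.greeting)); // "greeting"
console.log(getVarName(scope, scope.other));    // "other" - distinct String object
console.log(getVarName(scope, "missing"));      // null
```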
If for whatever reason you do need to use window, use: ``` Object.prototype.hasOwnProperty.call(obj, p) ```
Is there a way to use window.hasOwnProperty() in Internet Explorer?
[ "javascript", "internet-explorer" ]
Which is the more efficient way to create the following, hardcoded HTML or Javascript manipulation: A pre-defined grid, i.e. constant number of rows and columns, with borders and styles and event handlers. Is it better to have many *div*s written in HTML?

```
<div class='tl' id='c00' style='top:0px; left:0px;'>A</div>
<div class='tl' id='c01' style='top:0px; left:20px;'>A</div>
<div class='tl r' id='c02' style='top:0px; left:40px;'>A</div>
```

Or use a Javascript to generate the *div*s.

```
var d1 = "<div class='";
var d2 = "' id='";
var d3 = "' style='";
var d4 = "'>";
var d5 = "</div>";
var sHTML = "";

for (var j = 1; j < 5; ++j)
  for (var i = 1; i < 5; ++i) {
    var sClass = "tl ";
    if ((i + 1) % 3 == 0) sClass = sClass + "r ";
    if ((j + 1) % 3 == 0) sClass = sClass + "b ";

    var sLine = d1 + sClass;
    sLine = sLine + d2 + "c" + j + i;
    sLine = sLine + d3 + "top:" + (j * 20) + "px; left:" + (i * 20) + "px;";
    sLine = sLine + d4 + "A";
    sLine = sLine + d5;
    document.writeln(sLine);
  }
```

The time it takes to create the HTML is not an issue, but as it will be run locally and probably on a mobile device the page's performance matters more. Thanks for the help.
Hard coding it will be faster. The browser needs to parse the html either way. All you are doing by using Javascript is making it very confusing and adding overhead.
Maintenance will also be significantly easier if you just write the markup out rather than using JavaScript. If you *were* going to do it in JS... at least do it via the [HTML DOM](http://www.w3schools.com/js/js_obj_htmldom.asp)
Table/Grid Layout - hardcoded HTML vs generated by Javascript
[ "javascript", "html" ]
I'm using VB.NET I need same control (ListBox) to be displayed on 2 different tabs. Is it mandatory to create 2 different ListBox instances?
If you don't need design-time support you can simply, at runtime, change the ListBox instance's Parent from one tab to the other (making sure to set the positioning appropriately, of course). Essentially, it's: `listBox1.Parent = tabControl1.TabPages[1];` In the end though, you'll probably find it easier to just have two ListBox's with the same data source.
Yes, I think you'll need a ListBox control on each tab. If they have the same data you can use the same DataSource for both though.
How to display the same control on two different tabs?
[ "c#", "vb.net", "winforms", "controls", "tabcontrol" ]
I am using the `GridView` control in [asp.net](/questions/tagged/asp.net "show questions tagged 'asp.net'") 2005 with [c#](/questions/tagged/c%23 "show questions tagged 'c#'"). How can I delete a particular row from the `GridView`? I have written the following code, but it's not working...

```
DataRow dr = dtPrf_Mstr.NewRow();
dtPrf_Mstr.Rows.Add(dr);
GVGLCode.DataSource = dtPrf_Mstr;
GVGLCode.DataBind();

int iCount = GVGLCode.Rows.Count;
for (int i = 0; i <= iCount; i++)
{
    GVGLCode.DeleteRow(i);
}
GVGLCode.DataBind();
```
You are deleting the row from the gridview but you are then calling databind again, which just refreshes the gridview to the same state the original datasource is in. Either remove it from the datasource and then databind, or databind and remove it from the gridview without rebinding.
You're deleting the row from the gridview and then rebinding it to the datasource (which still contains the row). Either delete the row from the datasource, or don't rebind the gridview afterwards.
How to delete a row from GridView?
[ "c#", "asp.net", "gridview", "delete-row" ]
How can I configure JPA/Hibernate to store a date/time in the database as UTC (GMT) time zone? Consider this annotated JPA entity: ``` public class Event { @Id public int id; @Temporal(TemporalType.TIMESTAMP) public java.util.Date date; } ``` If the date is 2008-Feb-03 9:30am Pacific Standard Time (PST), then I want the UTC time of 2008-Feb-03 5:30pm stored in the database. Likewise, when the date is retrieved from the database, I want it interpreted as UTC. So in this case 530pm is 530pm UTC. When it's displayed it will be formatted as 9:30am PST.
Since Hibernate 5.2, you can now force the UTC time zone by adding the following configuration property into the `properties.xml` JPA configuration file: ``` <property name="hibernate.jdbc.time_zone" value="UTC"/> ``` If you're using Spring Boot, then add this property to your `application.properties` file: ``` spring.jpa.properties.hibernate.jdbc.time_zone=UTC ```
To the best of my knowledge, you need to put your entire Java app in UTC timezone (so that Hibernate will store dates in UTC), and you'll need to convert to whatever timezone desired when you display stuff (at least we do it this way). At startup, we do: ``` TimeZone.setDefault(TimeZone.getTimeZone("Etc/UTC")); ``` And set the desired timezone to the DateFormat: ``` fmt.setTimeZone(TimeZone.getTimeZone("Europe/Budapest")) ```
How to store date/time and timestamps in UTC time zone with JPA and Hibernate
[ "java", "hibernate", "datetime", "jpa", "timezone" ]
Is it possible on a web server to determine the user's local time and timezone when they sent the request? Could it be done using javascript to capture it and post it back to the server? My company wants to track how many users use our site outside of office hours (no, I don't really know why either!). Thanks.
You need to put the client's JavaScript time into a hidden field, and assume their clock is set properly. ``` myTime = new Date() >> Tue Feb 03 2009 12:24:28 GMT-0500 (Eastern Standard Time) ```
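A sketch of what that hidden field could carry. Besides the raw date string, `getTimezoneOffset()` gives the client's offset from UTC in minutes, which is usually easier to process server-side (the field id here is hypothetical):

```javascript
var now = new Date();
var payload = {
    clientTime: now.toString(),               // e.g. "Tue Feb 03 2009 12:24:28 GMT-0500 ..."
    utcOffsetMinutes: now.getTimezoneOffset() // minutes behind UTC, e.g. 300 for EST
};

// In a browser you would copy this into the hidden field before submit, e.g.:
// document.getElementById("clientTime").value = JSON.stringify(payload);
console.log(payload.utcOffsetMinutes);
```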
You should consider reverse DNS resolution of your existing web server logs. You can then use geocoding to guess user's time zone (and their local time relative to your web server's time). You would not need to write new JavaScript code, write custom server-side code to log user times, and then deploy an updated website. This Perl script might be a helpful example: [Perl Script that Does Bulk Reverse-DNS Lookups](http://blog.unixlore.net/2006/03/perl-script-that-does-bulk-reverse-dns.html)
Is it possible to determine the users time from a web request?
[ "javascript", "time" ]
In Jesse Liberty's Programming C# (p.142) he provides an example where he casts an object to an interface.

```
interface IStorable
{
    ...
}

public class Document : IStorable
{
    ...
}

...
IStorable isDoc = (IStorable) doc;
...
```

What is the point of this, particularly if the object's class implements the interface anyway?

EDIT1: To clarify, I'm interested in the **reason for the cast (if any)**, *not* the reason for implementing interfaces. Also, the book is his 2001 First Edition (based on C#1 so the example may not be germane for later versions of C#).

EDIT2: I added some context to the code
There is only one reason when you actually need a cast: when `doc` is declared as a base type of the actual object that implements IStorable. Let me explain:

```
public class DocBase
{
    public virtual void DoSomething()
    {

    }
}

public class Document : DocBase, IStorable
{
    public override void DoSomething()
    {
        // Some implementation
        base.DoSomething();
    }

    #region IStorable Members

    public void Store()
    {
        // Implement this one as well..
        throw new NotImplementedException();
    }

    #endregion
}

public class Program
{
    static void Main()
    {
        DocBase doc = new Document();
        // Now you will need a cast to reach IStorable members
        IStorable storable = (IStorable)doc;
    }
}

public interface IStorable
{
    void Store();
}
```
Because you want to restrict yourself to only methods provided by the interface. If you use the class, you run the risk of calling a method (inadvertently) that's not part of the interface.
Why cast to an interface?
[ "c#", "interface", "casting" ]
**EDIT** *Solution Found*: See my post below.

We are writing a library that reads in a TIF file from a scanner. Basically, it's a scantron. We are examining the form and reading values from it. Currently we have a windows test app, and we give it a filepath as a string ("c:\testing\image.tif"). It loads up the tif, reads the document correctly and parses the values.

We also have an ASP.NET web application. We have a test page that does exactly what the windows app does, we hand it an identical string, and it calls the same function on the same class from the same library. It however does NOT read the form correctly. We have verified that it does in fact load up the tif file, and it is actually filled with data (pixels we expect to be white/black are white/black when we examine the Bitmap object in the immediate window of Visual Studio).

The specific problem is in a library called **DataMatrix** we use to scan a bar-code off the document. This function is supposed to return a `List<string>`, each of which is a barcode the library found on the document. In the windows app, this function (DataMatrixDecoder.DecodeBarcode(bitmap)) correctly returns with a Count=1. When using the asp.net app, this returns with Count=0.

Because it's the exact same image file, I cannot imagine the problem is in DataMatrix. I can only assume it's something with ASP.NET or something. This isn't even my project, but another guy and I are helping our coworker figure this out, and we are just pulling our hair out. All signs indicate that ASP.NET is correctly loading and handing the image off disk to the "processor" class (which is a class library that uses the DataMatrix stuff, we are not doing ANY code in ASP.NET except for opening/handing the file to the function.).

Does anyone have any ideas as to what it might be, or different things we can check? 
I'm not even sure what kind of information to give so I tried to say it all, if you have any questions please ask I'd be more than happy to elaborate on anything. Thanks. edit: this is the code on the ascx.cs code-behind, in a button-click event: ``` if (formReader.ReadTIFF(@"c:\testing\image.tif")) { messages.Controls.Add(HtmlHelper.DivSuccess("Read successful.")); } ``` The `formReader` class then open the file with a `FileStream`, and uses that to create a `Bitmap`. The ASP.NET application is not actually opening the file at all (we were uploading it through a `FormUpload` control, but after experiencing problems we dummied it down to this). This is the most perplexing thing, that it works in the windows app but not from this web site. ASP.NET has full permissions on that folder to do whatever it wants. It can open the image fine, and the bitmap it creates from the FileStream is the actual image. edit: Also, the ReadTIFF function right now copies the FileStream into a MemoryStream, ensuring its a not a problem streaming from disk (the entire file is in memory).
**EDIT** *Solution Found*: The problem was that the open file dialog was changing the CurrentWorkingDirectory. The reason the website never worked, was because the `Environment.CurrentDirectory` was set incorrectly. When I manually set the CurrentDirectory to the websites' `bin` folder, parsing works correctly. --- Small update. Using the Windows App, and selecting the file via `OpenFileDialog`, will cause the barcode decoder to fail. Technically, I am using the *exact* same string to hand to the parser ("c:\testing\image.tif"), yet when I use the `OpenFileDialog` to get the string, the decoder fails. Is there a clue in this? update: In fact, even if I don't use the string the `OpenFileDialog` gives me, if I just open the file dialog *at all*, it will fail. I don't get this. It's something simple. I need to debug the C++ DataMatrix library, really.
How are you passing the filepath to the web application? It is possible that the function which Decodes might be swallowing some exception. Use reflector to examine the library (if you have not written it).
Reading a Bitmap from a TIF file - Works in WinForms not in ASP.NET
[ "c#", "asp.net", "image" ]
I have 4 GB of physical memory, but I get an out-of-memory exception even though I only create about 1.5 GB of objects. Any ideas why? (At the same time, the Performance tab of Task Manager showed that memory was not fully occupied, and I could still type here, so memory is not actually low. I think I'm hitting some other memory limitation.)

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace TestBigMemoryv1
{
    class MemoryHolderFoo
    {
        static Random seed = new Random();
        public Int32 holder1;
        public Int32 holder2;
        public Int64 holder3;

        public MemoryHolderFoo()
        {
            // prevent from optimized out
            holder1 = (Int32)seed.NextDouble();
            holder2 = (Int32)seed.NextDouble();
            holder3 = (Int64)seed.NextDouble();
        }
    }

    class Program
    {
        static int MemoryThreshold = 1500; //M

        static void Main(string[] args)
        {
            int persize = 16;
            int number = MemoryThreshold * 1000 * 1000 / persize;

            MemoryHolderFoo[] pool = new MemoryHolderFoo[number];
            for (int i = 0; i < number; i++)
            {
                pool[i] = new MemoryHolderFoo();
                if (i % 10000 == 0)
                {
                    Console.Write(".");
                }
            }

            return;
        }
    }
}
```
In a normal 32 bit windows app, the process only has 2GB of addressable memory. This is irrelevant to the amount of physical memory that is available. So 2GB available but 1.5 is the max you can allocate. The key is that your code is not the only code running in the process. The other .5 GB is probably the CLR plus fragmentation in the process. **Update:** in .Net 4.5 in 64 bit process you can have large arrays if [gcAllowVeryLargeObjects](http://msdn.microsoft.com/en-us/library/hh285054%28v=vs.110%29.aspx) setting is enabled: > On 64-bit platforms, enables arrays that are greater than 2 gigabytes (GB) in total size. > The maximum number of elements in an array is UInt32.MaxValue. ``` <configuration> <runtime> <gcAllowVeryLargeObjects enabled="true" /> </runtime> </configuration> ```
Just additional to the other points; if you want access to a dirty amount of memory, consider x64 - but be aware that the maximum **single** object size is still 2GB. And because references are larger in x64, this means that you actually get a *smaller* maximum array/list size for reference-types. Of course, by the time you hit that limit you are probably doing things wrong anyway! Other options: * use files * use a database (obviously both has a performance difference compared to in-process memory) --- Update: In versions of .NET prior to 4.5, the maximum object size is 2GB. From 4.5 onwards you can allocate larger objects if [gcAllowVeryLargeObjects](http://msdn.microsoft.com/en-us/library/hh285054%28v=vs.110%29.aspx) is enabled. Note that the limit for `string` is not affected, but "arrays" should cover "lists" too, since lists are backed by arrays.
Why am I getting an Out Of Memory Exception in my C# application?
[ "c#", "memory-management", "out-of-memory" ]
Is it possible in hibernate to have an entity where some IDs are assigned and some are generated? For instance: Some objects have an ID between 1-10000 that are generated outside of the database; while some entities come in with no ID and need an ID generated by the database.
You could use 'assigned' as the Id generation strategy, but you would have to give the entity its id before you saved it to the database. Alternatively you could build your own implementation of org.hibernate.id.IdentifierGenerator to provide the Id in the manner you've suggested. I have to agree with Cade Roux though; doing so seems like it would be much more difficult than using the built-in increment, uuid, or other forms of id generation.
I would avoid this and simply have an auxiliary column for the information about the source of the object and a column for the external identifier (assuming the external identifier was an important value you wanted to keep track of). It's generally a bad idea to use columns for mixed purposes - in this case to infer from the nature of a surrogate key the source of an object.
Hibernate and IDs
[ "sql", "hibernate", "jpa" ]
I'm using JAXB to read and write XML. What I want is to use a base JAXB class for marshalling and an inherited JAXB class for unmarshalling. This is to allow a sender Java application to send XML to another receiver Java application. The sender and receiver will share a common JAXB library. I want the receiver to unmarshall the XML into a receiver specific JAXB class which extends the generic JAXB class. Example: This is the common JAXB class which is used by the sender. ``` @XmlRootElement(name="person") public class Person { public String name; public int age; } ``` This is the receiver specific JAXB class used when unmarshalling the XML. The receiver class has logic specific to the receiver application. ``` @XmlRootElement(name="person") public class ReceiverPerson extends Person { public doReceiverSpecificStuff() ... } ``` Marshalling works as expected. The problem is with unmarshalling, it still unmarshals to `Person` despite the JAXBContext using the package name of the subclassed `ReceiverPerson`. ``` JAXBContext jaxbContext = JAXBContext.newInstance(package name of ReceiverPerson); ``` What I want is to unmarshall to `ReceiverPerson`. The only way I've been able to do this is to remove `@XmlRootElement` from `Person`. Unfortunately doing this prevents `Person` from being marshaled. It's as if JAXB starts at the base class and works its way down until it finds the first `@XmlRootElement` with the appropriate name. I've tried adding a `createPerson()` method which returns `ReceiverPerson` to `ObjectFactory` but that doesn't help.
You're using JAXB 2.0, right? (since JDK 6)

There is a class:

```
javax.xml.bind.annotation.adapters.XmlAdapter<ValueType,BoundType>
```

which one can subclass and override the following methods:

```
public abstract BoundType unmarshal(ValueType v) throws Exception;
public abstract ValueType marshal(BoundType v) throws Exception;
```

Example:

```
public class YourNiceAdapter
        extends XmlAdapter<ReceiverPerson,Person>{

    @Override
    public Person unmarshal(ReceiverPerson v){
        return v;
    }

    @Override
    public ReceiverPerson marshal(Person v){
        return new ReceiverPerson(v); // you must provide such c-tor
    }
}
```

Usage is done as follows:

```
@Your_favorite_JAXB_Annotations_Go_Here
class SomeClass{
    @XmlJavaTypeAdapter(YourNiceAdapter.class)
    Person hello; // field to unmarshal
}
```

I'm pretty sure that by using this concept you can control the marshalling/unmarshalling process yourself (including the choice of the correct [sub|super]type to construct).
The following snippet is a method of a Junit 4 test with a green light: ``` @Test public void testUnmarshallFromParentToChild() throws JAXBException { Person person = new Person(); int age = 30; String name = "Foo"; person.name = name; person.age= age; // Marshalling JAXBContext context = JAXBContext.newInstance(person.getClass()); Marshaller marshaller = context.createMarshaller(); StringWriter writer = new StringWriter(); marshaller.marshal(person, writer); String outString = writer.toString(); assertTrue(outString.contains("</person")); // Unmarshalling context = JAXBContext.newInstance(Person.class, RecieverPerson.class); Unmarshaller unmarshaller = context.createUnmarshaller(); StringReader reader = new StringReader(outString); RecieverPerson reciever = (RecieverPerson)unmarshaller.unmarshal(reader); assertEquals(name, reciever.name); assertEquals(age, reciever.age); } ``` The important part is the use of the `JAXBContext.newInstance(Class... classesToBeBound)` method for the unmarshalling context: ``` context = JAXBContext.newInstance(Person.class, RecieverPerson.class); ``` With this call, JAXB will compute a reference closure on the classes specified and will recognize `RecieverPerson`. The test passes. And if you change the parameters order, you'll get a `java.lang.ClassCastException` (so they **must** be passed in this order).
JAXB inheritance, unmarshal to subclass of marshaled class
[ "java", "xml", "jaxb" ]
I need to access the DOM tree and get the elements just 1 level below the current element. Read the following code: ``` <div id="node"> <div id="a"> <div id="aa"> <div id="ab"> <div id="aba"></div> </div> </div> </div> <div id="b"> <div id="ba"> <div id="bb"> <div id="bba"></div> </div> </div> </div> <div id="c"> <div id="ca"> <div id="cb"> <div id="cba"></div> </div> </div> </div> </div> ``` I want to get the 3 elements "a", "b", "c" under "node". What should I do? var nodes = node.getElementsByTagName("div") <---- I get all the divs but not the 3 divs I need. var nodes = node.childNodes; <---- works in IE, but FF contains Text Node Does anyone know how to solve the problem?
You could use a function that rules out all non-element nodes: ``` function getChildNodes(node) { var children = []; // Use an index loop: for..in over a NodeList can yield non-numeric keys, // and we want to collect the child node itself, not its index for (var i = 0; i < node.childNodes.length; i++) { if (node.childNodes[i].nodeType == 1) { // 1 == ELEMENT_NODE children.push(node.childNodes[i]); } } return children; } ```
Try this (late answer, but can be useful for others): ``` var list; list=document.getElementById("node").querySelectorAll("#node>div"); ```
Get elements just 1 level below the current element by javascript
[ "", "javascript", "dom", "" ]
I have two methods that I'm using as custom tags in a template engine: ``` # Renders a <select> form field def select_field(options, selected_item, field_name): options = [(str(v),str(v)) for v in options] html = ['<select name="%s">' % field_name] for k,v in options: tmp = '<option ' if k == selected_item: tmp += 'selected ' tmp += 'value="%s">%s</option>' % (k,v) html.append(tmp) html.append('</select>') return '\n'.join(html) # Renders a collection of <select> fields for datetime values def datetime_field(current_dt, field_name): if current_dt == None: current_dt = datetime.datetime.now() day = select_field(range(1, 32), current_dt.day, field_name + "_day") month = select_field(range(1, 13), current_dt.month, field_name + "_month") year = select_field(range(datetime.datetime.now().year, datetime.datetime.now().year + 10), current_dt.year, field_name + "_year") hour = select_field(range(1, 13), current_dt.hour, field_name + "_hour") minute = select_field(range(1, 60), current_dt.minute, field_name + "_minute") period = select_field(['AM', 'PM'], 'AM', field_name + "_period") return "\n".join([day, '/', month, '/', year, ' at ', hour, ':', minute, period]) ``` As you can see in the comments, I'm attempting to generate select fields for building a date and time value. My first question is, how do I make the day, month, hour, and minute ranges use two digits? For example, the code generates "1, 2, 3..." while I would like "01, 02, 03". Next, when the fields get generated, the values of the supplied datetime are not selected automatically, even though I'm telling the select\_field method to add a "selected" attribute when the value is equal to the supplied datetime's value. Finally, how do I get the 12-hour period identifier (AM/PM) for the supplied datetime? For the moment, I'm simply selecting 'AM' by default, but I would like to supply this value from the datetime itself.
To turn an integer range into two digit strings: ``` >>> range(13) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] >>> [ '%02d' % i for i in range(13) ] ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12'] ``` Then to get the AM/PM indicator: ``` >>> import datetime >>> current_dt = datetime.datetime.now() >>> current_dt datetime.datetime(2009, 2, 4, 22, 2, 14, 390000) >>> ['AM','PM'][current_dt.hour>=12] 'PM' ``` Voila!
Question 1: ``` >>> '%02d' % 2 '02' >>> '%02d' % 59 '59' ```
Various Python datetime issues
[ "", "python", "datetime", "" ]
I know I can get an imagebutton's X&Y, but how do I get its ID? I'd like to start by printing it to a label at first. Later I would like to use it in a switch case - any given case would change the imagebutton.imageurl to a different image, but specifically do it for the imagebutton I just clicked on. I tried ``` Label1.Text = Convert.ToString((ImageButton)sender); ``` But this is the result ``` System.Web.UI.WebControls.ImageButton ``` which isn't a lot of help, because I need the specific control's ID. Thanks!
``` ImageButton b = sender as ImageButton; if (b != null) { string theId = b.ClientID; // Change the URL b.ImageUrl = "/whatever/you/like.png"; } ```
Do you mean this? ``` Label1.Text = ((ImageButton)sender).ID ``` **Update** (as per your comment): To change the ImageURL, you would use this: ``` ((ImageButton)sender).ImageUrl ="correct_quiz.gif"; ``` Or if you'd want to combine the two things, I'd recommend this: ``` ImageButton button = ((ImageButton)sender); Label1.Text = button.ID; button.ImageUrl = "correct_quiz.gif"; ```
How do I get an imagebutton ID in C#?
[ "", "c#", "imagebutton", "" ]
Is there a way in C# where I can use reflection to set an object property? Ex: ``` MyObject obj = new MyObject(); obj.Name = "Value"; ``` I want to set `obj.Name` with reflection. Something like: ``` Reflection.SetProperty(obj, "Name") = "Value"; ``` Is there a way of doing this?
Yes, you can use `Type.InvokeMember()`: ``` using System.Reflection; MyObject obj = new MyObject(); obj.GetType().InvokeMember("Name", BindingFlags.Instance | BindingFlags.Public | BindingFlags.SetProperty, Type.DefaultBinder, obj, "Value"); ``` This will throw an exception if `obj` doesn't have a property called `Name`, or it can't be set. Another approach is to get the metadata for the property, and then set it. This will allow you to check for the existence of the property, and verify that it can be set: ``` using System.Reflection; MyObject obj = new MyObject(); PropertyInfo prop = obj.GetType().GetProperty("Name", BindingFlags.Public | BindingFlags.Instance); if(null != prop && prop.CanWrite) { prop.SetValue(obj, "Value", null); } ```
You can also do: ``` Type type = target.GetType(); PropertyInfo prop = type.GetProperty("propertyName"); prop.SetValue (target, propertyValue, null); ``` where target is the object that will have its property set.
Set object property using reflection
[ "", "c#", ".net", "reflection", "properties", "" ]
I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories. I am using svn version control. There are two ways to refactor: one preserving svn history (using svn move command) and the other without preserving. I think refactoring preserving svn history is a lot easier using eclipse CDT and SVN plugin (visual studio does not fit at all for directory restructuring). But right now since the code is not released, we have the option to not preserve history. Still there remains the task of changing the include directives of header files wherever they are included. I am thinking of writing a small script using python - receives a map from current filename to new filename, and makes the rename wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
If you're having to rewrite the `#include`s to do this, you did it wrong. Change all your `#includes` to use a very simple directory structure, at most two levels deep and only using a second level to organize around architecture or OS dependencies (like `sys/types.h`). Then change your make files to use `-I` include paths. Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong. As far as the history part, I personally find it easier to make a clean start when doing this sort of thing; archive the old one, make a new repository v2, go from there. The counterargument is when there is a whole lot of history of changes, or lots of open issues against the existing code. Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong. The argument against preserving the history can be made in the following cases: * your code might have embarrassing things, like profanity and fighting among developers * you don't care about the commit history of your code, because it's not going to change or be maintained in the future I did some directory refactoring like this last year on our code base. If your code is reasonably structured at the beginning, you can do about 75-90% of the work using scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory, to a series of nested directories depending on namespaces. So, a file that declared the class protocols::serialization::SerializerBase was located in src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so that doing a find and replace on #includes in every source file in the tree was trivial, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either having to do everything by hand or having to write our own C++ parser.
C++ Directory Restructuring
[ "", "c++", "refactoring", "" ]
In C++, whenever a function creates many (hundreds or thousands of) values, I used to have the caller pass an array that my function then fills with the output values: ``` void computeValues(int input, std::vector<int>& output); ``` So, the function will fill the vector `output` with the values it computes. But this is not really good C++ style, as I'm realizing now. The following function signature is better because it doesn't commit to using a `std::vector`, but could use any container: ``` void computeValues(int input, std::insert_iterator<int> outputInserter); ``` Now, the caller can call with some `inserter`: ``` std::vector<int> values; // or could use deque, list, map, ... computeValues(input, std::back_inserter(values)); ``` Again, we don't commit to using `std::vector` specifically, which is nice, because the user might just need the values in a `std::set` etc. (Should I pass the `iterator` by value or by reference?) My question is: Is the `insert_iterator` the right or standard way to do it? Or is there something even better? **EDIT:** I edited the question to make it clear that I'm not talking about returning two or three values, but rather hundreds or thousands. (Imagine you have to return all the files you find in a certain directory, or all the edges in a graph, etc.)
**Response to Edit:** Well, if you need to return hundreds and thousands if values, a tuple of course would not be the way to go. Best pick the solution with the iterator then, but it's best not use any specific iterator type. --- If you use iterators, you should use them as generic as possible. In your function you have used an insert iterator like `insert_iterator< vector<int> >`. You lost any genericity. Do it like this: ``` template<typename OutputIterator> void computeValues(int input, OutputIterator output) { ... } ``` Whatever you give it, it will work now. But it will not work if you have different types in the return set. You can use a tuple then. Also available as `std::tuple` in the next C++ Standard: ``` boost::tuple<int, bool, char> computeValues(int input) { .... } ``` If the amount of values is variadic and the type of the values is from a fixed set, like (int, bool, char), you can look into a container of `boost::variant`. This however implies changes only on the call-side. You can keep the iterator style of above: ``` std::vector< boost::variant<int, bool, char> > data; computeValues(42, std::back_inserter(data)); ```
You could return a smart pointer to a vector. That should work and no copy of the vector will be made. If you don't want to keep the smart pointer for the rest of your program, you could simply create a vector before calling the function, and swap both vectors.
How do I return hundreds of values from a C++ function?
[ "", "c++", "stl", "vector", "iterator", "return-value", "" ]
I frequently encounter situations, especially with sorting in C++, where I am comparing a series of fields in order to compare a larger structure. A simplified example: ``` struct Car{ Manufacturer make; ModelName model; Year year; }; bool carLessThanComparator( const Car & car1, const Car & car2 ){ if( car1.make < car2.make ){ return true; }else if( car1.make == car2.make ){ if( car1.model < car2.model ){ return true; }else if( car1.model == car2.model ){ if( car1.year < car2.year ){ return true; } } } return false; } ``` My instinctive approach seems cumbersome, especially for more than 3 fields. How would you structure this series of comparisons in C++? Do other languages provide a more succinct or elegant syntax?
Well, if your function hits a return in the **if** clause, there's no need for an explicit **else**, since it would have already bailed out. That can save on the "indent valley": ``` bool carLessThanComparator( const Car & car1, const Car & car2 ) { if( car1.make < car2.make ) return true; if ( car1.make != car2.make ) return false; if( car1.model < car2.model ) return true; if( car1.model != car2.model ) return false; if( car1.year < car2.year ) return true; return false; } ``` I like MarkusQ's LISPish short-circuiting approach as well.
If this happens a lot you could put a template like this into a common header: ``` template<typename T, typename A1, typename A2, typename A3> bool do_less_than(const T& t1, const T& t2, A1 T::* a1, A2 T::* a2, A3 T::* a3) { if ((t1.*a1) < (t2.*a1)) return true; if ((t1.*a1) != (t2.*a1)) return false; if ((t1.*a2) < (t2.*a2)) return true; if ((t1.*a2) != (t2.*a2)) return false; return (t1.*a3) < (t2.*a3); } ``` Add other templates for different numbers of arguments as required. For each less-than function, you can then do something like this: ``` bool carLessThanComparator(const Car& car1, const Car& car2) { return do_less_than(car1, car2, &Car::make, &Car::model, &Car::year); } ```
How do you structure your comparison functions?
[ "", "c++", "coding-style", "language-features", "lexicographic", "stdtuple", "" ]
In every ASP.NET application I have written, I would make number of request to the database before outputting the information onto the web page. For example: ``` var DataTable1 = GetDataTable("Select * From Customers"); var DataTable2 = GetDataTable("Select * From Products"); var DataTable3 = GetDataTable("Select * From Orders"); ``` As far as I'm aware, the code above would make 3 separate trips to the database and would do them one after the other. Is there anyway I can gather together my parameterized SQL statements and make only 1 trip to the database server?
``` var SqlString = "SELECT * FROM Customers; SELECT * FROM Products; SELECT * FROM Orders;"; var ds = GetDataSet(SqlString); var DataTable1 = ds.Tables[0]; var DataTable2 = ds.Tables[1]; var DataTable3 = ds.Tables[2]; ```
My solution: ``` SqlConnection con = new SqlConnection("Server=CLASS-III-WKS10\\SQLEXPRESS;Initial Catalog=wind;Integrated Security=True"); int[] id = new int[9]; int i = 0; page_load() { con.Open(); SqlCommand cmd = new SqlCommand("select * from windmill", con); SqlDataReader rd = cmd.ExecuteReader(); while (rd.Read()) { id[i] = rd.GetInt32(9); i++; //MessageBox.Show(id.ToString()); } rd.Close(); SqlCommand cmd1 = new SqlCommand("Update windmill set State='Stopped', PmState='Not Available' where Id=0", con); cmd1.ExecuteNonQuery(); } ```
How can I execute multiple database requests in ASP.NET C#?
[ "", "asp.net", "sql", "" ]
I know it lets visual studio to separate WinForms UI code from the UI events, etc. But are there any practical uses for it?
The partial keyword is typically used in code generation utilities to allow developers to add additional functionality to the generated code without the fear of that code being erased if the code is generated again. With C# 3 the partial keyword can be applied to methods to allow users of generated code to fill in blanks left by the generator. The Linq To Sql designer for example provides partial methods that allow you to add logic to the classes that the framework will call if implemented. The benefit here is that the C# compiler will completely remove unimplemented partial methods, so there is no performance hit *at all* for not implementing them. Partial classes can also be used to organize very large classes into separate code files, although this sort of usage is usually a sign that your classes are too large and are taking on too many responsibilities.
The best use for the partial keyword that I can think of is nested classes. Without the partial keyword all nested classes must go in the same code file as the containing class.
Practical usage of partial keyword in C#
[ "", "c#", ".net", "class", "" ]
I am currently working on a project for school which will be a webapp where the GUI will be programmed in Adobe Flex and the backend will be programmed in Java, probably as servlets running in Tomcat... I am working primarily on the backend things, with another guy in my group heading up the Flex stuff. He is convinced that to communicate with the Java code he'll need to jump through all sorts of hoops. I was under the impression that you could probably just query the servlet and render the response into the application? I haven't really been able to find any good documentation on Flex (I haven't looked that hard, either), and I just wondered if this is as daunting as he is making it out to be. Any resources / comments would be greatly appreciated. Thanks!
Blaze Data Services is the way to go. You register a "MessageBroker" servlet in web.xml which acts as a front controller to your other services. <http://opensource.adobe.com/wiki/display/blazeds/BlazeDS/> <http://livedocs.adobe.com/blazeds/1/blazeds_devguide/> <http://livedocs.adobe.com/blazeds/1/javadoc/>
As cliff.meyers stated, BlazeDS is definitely what you want for a Java backend. It integrates very nicely with Flex. As far as Flex documentation goes, Adobe has a [ton of free video tutorials](http://www.adobe.com/devnet/flex/videotraining/) online, and the [official Flex language reference](http://livedocs.adobe.com/flex/3/langref/index.html) will be helpful, too.
What is the best way for Adobe flex to communicate with Java?
[ "", "java", "apache-flex", "blazeds", "" ]
I'm making an ASP.NET Web page that generates pages from SQL when they're not cached. Their loading time could be anywhere between 300 ms and 1.5 seconds (no "fix your database" replies, please). I personally find these values to be too long, and was looking for a solution that allows me to inform the user that the page he is visiting will require a bit of time to load. I was looking for a solution that could work via the Page\_Load function, if that's possible. The perfect solution for me in this case is showing the user either an animated GIF or text saying the page is being generated. On a side note, I come mostly from programming Windows applications.
Here is an example of how to use the Response object to flush content to the browser and continue processing: ``` using System; using System.Web.UI; using System.Threading; public partial class _Default : Page { protected override void OnLoad(EventArgs e) { base.OnLoad(e); Response.Write("<h1>please wait...</h1>"); Response.Flush(); // simulate load time Thread.Sleep(2000); Response.Write("<h1>finished</h1>"); } } ```
You're going to want to first output the loading graphic and then flush the output buffer so the content so far is sent to the user's browser by using `Response.Flush()`. When you output the rest of the content, you will need to have a bit of javascript in there to remove the first page elements sent so the loading graphic goes away.
Page loading picture
[ "", "c#", "asp.net", "pageload", "" ]
What is the difference between the following set of pointers? When do you use each pointer in production code, if at all? Examples would be appreciated! 1. `scoped_ptr` 2. `shared_ptr` 3. `weak_ptr` 4. `intrusive_ptr` Do you use boost in production code?
### Basic properties of smart pointers It's easy when you have properties that you can assign each smart pointer. There are three important properties. * **no ownership at all** * **transfer of ownership** * **share of ownership** The first means that a smart pointer cannot delete the object, because it doesn't own it. The second means that only one smart pointer can ever point to the same object at the same time. If the smart pointer is to be returned from functions, the ownership is transferred to the returned smart pointer, for example. The third means that multiple smart pointers can point to the same object at the same time. This applies to a *raw pointer* too, however raw pointers lack an important feature: They do not define whether they are *owning* or not. A share of ownership smart pointer will delete the object if every owner gives up the object. This behavior happens to be needed often, so shared owning smart pointers are widely spread. Some owning smart pointers support neither the second nor the third. They can therefore not be returned from functions or passed somewhere else. Which is most suitable for `RAII` purposes where the smart pointer is kept local and is just created so it frees an object after it goes out of scope. Share of ownership can be implemented by having a copy constructor. This naturally copies a smart pointer and both the copy and the original will reference the same object. Transfer of ownership cannot really be implemented in C++ currently, because there are no means to transfer something from one object to another supported by the language: If you try to return an object from a function, what is happening is that the object is copied. So a smart pointer that implements transfer of ownership has to use the copy constructor to implement that transfer of ownership. 
However, this in turn breaks its usage in containers, because requirements state a certain behavior of the copy constructor of elements of containers which is incompatible with this so-called "moving constructor" behavior of these smart pointers. C++1x provides native support for transfer-of-ownership by introducing so-called "move constructors" and "move assignment operators". It also comes with such a transfer-of-ownership smart pointer called `unique_ptr`. ### Categorizing smart pointers `scoped_ptr` is a smart pointer that is neither transferable nor sharable. It's just usable if you locally need to allocate memory, but be sure it's freed again when it goes out of scope. But it can still be swapped with another scoped\_ptr, if you wish to do so. `shared_ptr` is a smart pointer that shares ownership (third kind above). It is reference counted so it can see when the last copy of it goes out of scope and then it frees the object managed. `weak_ptr` is a non-owning smart pointer. It is used to reference a managed object (managed by a shared\_ptr) without adding a reference count. Normally, you would need to get the raw pointer out of the shared\_ptr and copy that around. But that would not be safe, as you would not have a way to check when the object was actually deleted. So, weak\_ptr provides means by referencing an object managed by shared\_ptr. If you need to access the object, you can lock the management of it (to avoid that in another thread a shared\_ptr frees it while you use the object) and then use it. If the weak\_ptr points to an object already deleted, it will notice you by throwing an exception. Using weak\_ptr is most beneficial when you have a cyclic reference: Reference counting cannot easily cope with such a situation. `intrusive_ptr` is like a shared\_ptr but it does not keep the reference count in a shared\_ptr but leaves incrementing/decrementing the count to some helper functions that need to be defined by the object that is managed. 
This has the advantage that an already referenced object (which has a reference count incremented by an external reference counting mechanism) can be stuffed into an intrusive\_ptr - because the reference count is not anymore internal to the smart pointer, but the smart pointer uses an existing reference counting mechanism. `unique_ptr` is a transfer of ownership pointer. You cannot copy it, but you can move it by using C++1x's move constructors: ``` unique_ptr<type> p(new type); unique_ptr<type> q(p); // not legal! unique_ptr<type> r(move(p)); // legal. p is now empty, but r owns the object unique_ptr<type> s(function_returning_a_unique_ptr()); // legal! ``` This is the semantic that std::auto\_ptr obeys, but because of missing native support for moving, it fails to provide them without pitfalls. unique\_ptr will automatically steal resources from a temporary other unique\_ptr which is one of the key features of move semantics. auto\_ptr will be deprecated in the next C++ Standard release in favor of unique\_ptr. C++1x will also allow stuffing objects that are only movable but not copyable into containers. So you can stuff unique\_ptr's into a vector for example. I'll stop here and reference you to [a fine article](https://devblogs.microsoft.com/cppblog/rvalue-references-c0x-features-in-vc10-part-2/) about this if you want to read more about this.
**scoped\_ptr** is the simplest. When it goes out of scope, it is destroyed. The following code is illegal (scoped\_ptrs are non-copyable) but will illustrate a point: ``` std::vector< scoped_ptr<T> > tPtrVec; { scoped_ptr<T> tPtr(new T()); tPtrVec.push_back(tPtr); // raw T* is freed } tPtrVec[0]->DoSomething(); // accessing freed memory ``` **shared\_ptr** is reference counted. Every time a copy or assignment occurs, the reference count is incremented. Every time an instance's destructor is fired, the reference count for the raw T\* is decremented. Once it is 0, the pointer is freed. ``` std::vector< shared_ptr<T> > tPtrVec; { shared_ptr<T> tPtr(new T()); // This copy to tPtrVec.push_back and ultimately to the vector storage // causes the reference count to go from 1->2 tPtrVec.push_back(tPtr); // num references to T goes from 2->1 on the destruction of tPtr } tPtrVec[0]->DoSomething(); // raw T* still exists, so this is safe ``` **weak\_ptr** is a weak-reference to a shared pointer that requires you to check to see if the pointed-to shared\_ptr is still around ``` std::vector< weak_ptr<T> > tPtrVec; { shared_ptr<T> tPtr(new T()); tPtrVec.push_back(tPtr); // num references to T goes from 1->0 } shared_ptr<T> tPtrAccessed = tPtrVec[0].lock(); if (tPtrAccessed.get() == 0) { cout << "Raw T* was freed, can't access it"; } else { tPtrAccessed->DoSomething(); // raw T* still alive, safe through the locked ptr } ``` **intrusive\_ptr** is typically used when there is a 3rd party smart ptr you must use. It will call a free function to add and decrement the reference count. See the [link](http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/smart_ptr.htm) to boost documentation for more info.
smart pointers (boost) explained
[ "", "c++", "boost", "smart-pointers", "" ]
I'm trying to determine the relative performance of two different queries and have two ways of measuring this available to me: 1. Run both and time each query 2. Run both and get "Query Cost" from the actual execution plan Here is the code I run to time the queries... ``` DBCC FREEPROCCACHE GO DBCC DROPCLEANBUFFERS GO DECLARE @start DATETIME SET @start = getDate() EXEC test_1a SELECT getDate() - @start AS Execution_Time GO DBCC FREEPROCCACHE GO DBCC DROPCLEANBUFFERS GO DECLARE @start DATETIME SET @start = getDate() EXEC test_1b SELECT getDate() - @start AS Execution_Time GO ``` What I get is the following: ``` Stored_Proc Execution_Time Query Cost (Relative To Batch) test_1a 1.673 seconds 17% test_1b 1.033 seconds 83% ``` The results of the execution time directly contradict the results of the Query Cost, but I'm having difficulty determining what "Query Cost" actually means. My best guess is that it is an aggregate of Reads/Writes/CPU_Time/etc, so I guess I have a couple of questions: 1. Is there a definitive source to explain what this measure means? 2. What other "Query Performance" metrics do people use, and what are their relative merits? It may be important to note that this is a medium-sized SQL Server, running MS SQL Server 2005 on MS Server 2003 Enterprise Edition with multiple processors and 100+ concurrent users. EDIT: After some bother I managed to get Profiler access on that SQL Server, and can give extra info (which supports Query Cost being related to system resources, not Execution Time itself...) ``` Stored_Proc CPU Reads Writes Duration test_1a 1313 3975 93 1386 test_1b 2297 49839 93 1207 ``` Impressive that taking more CPU with MANY more Reads takes less time :)
The profiler trace puts it into perspective. * Query A: 1.3 secs CPU, 1.4 secs duration * Query B: 2.3 secs CPU, 1.2 secs duration Query B is using parallelism: CPU > duration eg the query uses 2 CPUs, average 1.15 secs each Query A is probably not: CPU < duration This explains cost relative to batch: 17% of the for the simpler, non-parallel query plan. The optimiser works out that query B is more expensive and will benefit from parallelism, even though it takes extra effort to do so. Remember though, that query B uses 100% of 2 CPUS (so 50% for 4 CPUs) for one second or so. Query A uses 100% of a single CPU for 1.5 seconds. The peak for query A is lower, at the expense of increased duration. With one user, who cares? With 100, perhaps it makes a difference...
``` SET STATISTICS TIME ON SELECT * FROM Production.ProductCostHistory WHERE StandardCost < 500.00; SET STATISTICS TIME OFF; ``` And see the message tab it will look like this: ``` SQL Server Execution Times: CPU time = 0 ms, elapsed time = 10 ms. (778 row(s) affected) SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. ```
Measuring Query Performance : "Execution Plan Query Cost" vs "Time Taken"
[ "", "sql", "sql-server", "sql-server-2005", "optimization", "sql-execution-plan", "" ]
My form: ``` class PlanForm(forms.ModelForm): owner = forms.ModelChoiceField(label="", queryset=Profile.objects.all(), widget=forms.HiddenInput()) etc... class Meta: model = Plan ``` Owner, in the model, is a ForeignKey to a Profile. When I set this form, I set the value of "owner" to be a Profile object. But when this comes out on the form, it seems to contain the *name* of the Profile like this: ``` <input type="hidden" name="owner" value="phil" id="id_owner" /> ``` When the form is submitted and gets back to my views.py I try to handle it like this: ``` form = PlanForm(request.POST) ... if form.is_valid(): plan = form.save() return HttpResponseRedirect('/plans/%s'%plan.id) # Redirect after POST ``` However, what I get is a type-conversion error as it fails to turn the string "phil" (the user's name that was saved into the "owner" field) into an Int to turn it into the ForeignKey. So what is going on here. Should a ModelForm represent a foreign key as a number and transparently handle it? Or do I need to extract the id myself into the owner field of the form? And if so, how and when do I map it back BEFORE I try to validate the form?
I suspect that the `__unicode__` method for the Profile model instance, or the `repr` thereof is set to return a value other than `self.id`. For example, I just set this up: ``` # models.py class Profile(models.Model): name = models.CharField('profile name', max_length=10) def __unicode__(self): return u'%d' % self.id class Plan(models.Model): name = models.CharField('plan name', max_length=10) profile = models.ForeignKey(Profile, related_name='profiles') def __unicode__(self): return self.name # forms.py class PlanForm(forms.ModelForm): profile = forms.ModelChoiceField(queryset=Profile.objects.all(), widget=forms.HiddenInput()) class Meta: model = Plan # views.py def add_plan(request): if request.method == 'POST': return HttpResponse(request.POST['profile']) profile = Profile.objects.all()[0] form = PlanForm(initial={'profile':profile}) return render_to_response('add_plan.html', { 'form':form, }, context_instance=RequestContext(request)) ``` With that, I see PlanForm.profile rendered thus in the template: ``` <input type="hidden" name="profile" value="1" id="id_profile" /> ```
Hmm... This might actually be a security hole. Suppose a malicious attacker crafted a POST (say, by using XmlHttpRequest from FireBug) and set the profile term to some wacky value, like, your profile ID. Probably not what you wanted? If possible, you may want to get the profile from the request object itself, rather than what's being submitted from the POST values. ``` form = PlanForm(request.POST) if form.is_valid(): plan = form.save(commit=False) plan.owner = request.user.get_profile() plan.save() form.save_m2m() # if neccesary ```
Django Forms: Foreign Key in Hidden Field
[ "", "python", "django", "forms", "" ]