Given a series of URLs ``` http://www.anydotcom.com/myfolder/some-url.html http://www.anydotcom.com/myfolder2/index.html# http://www.anydotcom.com/myfolder3/index.html?someParam=aValue http://www.anydotcom.com/foldername/index.html?someParam=anotherValue ``` First, how could I strip anything off the end of the URL so that I end up with ``` http://www.anydotcom.com/myfolder/some-url.html http://www.anydotcom.com/myfolder2/index.html http://www.anydotcom.com/myfolder3/index.html http://www.anydotcom.com/foldername/index.html ``` or, ideally, I would like it to return ``` /myfolder/some-url.html /myfolder2/index.html /myfolder3/index.html /foldername/index.html ``` I've tried ``` var thisUrl = "" + window.location; var myRegExp = new RegExp("([^(\?#)]*)"); thisUrl = myRegExp.exec(thisUrl); ``` but this returns ``` http://www.anydotcom.com/foldername/index.html,http://www.anydotcom.com/foldername/index.html ``` and I don't quite understand why. I appreciate any help here!
If you are using `window.location`, you can simply access the wanted data by using: ``` var thisUrl = window.location.pathname; ``` If you are extracting stuff from links, the following regular expression will get you what you need: ``` // Supports all protocols (file, ftp, http, https, whatever) var pathExtract = /^[a-z]+:\/\/\/?[^\/]+(\/[^?]*)/i; var thisUrl = (pathExtract.exec(someUrl))[1]; ```
Well, to answer your question directly, here's the regular expression to do that. ``` thisUrl = thisUrl.replace( /^https?:\/\/[^\/]+|\?.*$/g, '' ); ``` However, since you mention window.location in your code, you can actually get this data straight from the [location object](http://www.w3schools.com/HTMLDOM/dom_obj_location.asp). ``` thisUrl = top.location.pathname; ```
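Both answers can be checked against the sample URLs from the question. A minimal sketch in plain JavaScript, runnable outside a browser, with the question's URLs hard-coded for illustration:

```javascript
// Strip scheme/host and anything from "?" or "#" onward, leaving just
// the path -- the same idea as the regex answers above.
function extractPath(url) {
  return url
    .replace(/^[a-z]+:\/\/[^\/]+/i, "") // drop scheme + host
    .replace(/[?#].*$/, "");            // drop query string / fragment
}

var urls = [
  "http://www.anydotcom.com/myfolder/some-url.html",
  "http://www.anydotcom.com/myfolder2/index.html#",
  "http://www.anydotcom.com/myfolder3/index.html?someParam=aValue",
  "http://www.anydotcom.com/foldername/index.html?someParam=anotherValue"
];

var paths = urls.map(extractPath);
console.log(paths);
// -> ["/myfolder/some-url.html", "/myfolder2/index.html",
//     "/myfolder3/index.html", "/foldername/index.html"]
```

Inside a browser, `window.location.pathname` gives the same result for the current page without any regex at all.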
How to strip all parameters and the domain name from a URL using javascript?
[ "", "javascript", "regex", "url", "" ]
Is there a way to find out what the ContentType of an image is from only the original bytes? At the moment I have a database column that stores only the byte[], which I use to display an image on a web page. ``` MemoryStream ms = new MemoryStream(imageBytes); Image image = Image.FromStream(ms); image.Save(context.HttpContext.Response.OutputStream, <--ContentType-->); ``` I could of course just save the ContentType in another column in the table, but just wondered if there was another way e.g. maybe .Net has a way to interrogate the data to get the type.
Check out this [file signatures table](http://www.garykessler.net/library/file_sigs.html).
File/magic signatures was the way to go. Below is the working version of the code. Ref: [Stackoverflow - Getting image dimensions without reading the entire file](https://stackoverflow.com/questions/111345/getting-image-dimensions-without-reading-the-entire-file)

```
ImageFormat contentType = ImageHelper.GetContentType(this.imageBytes);
MemoryStream ms = new MemoryStream(this.imageBytes);
Image image = Image.FromStream(ms);
image.Save(context.HttpContext.Response.OutputStream, contentType);
```

And then the helper class:

```
public static class ImageHelper
{
    public static ImageFormat GetContentType(byte[] imageBytes)
    {
        MemoryStream ms = new MemoryStream(imageBytes);

        using (BinaryReader br = new BinaryReader(ms))
        {
            int maxMagicBytesLength = imageFormatDecoders.Keys.OrderByDescending(x => x.Length).First().Length;

            byte[] magicBytes = new byte[maxMagicBytesLength];

            for (int i = 0; i < maxMagicBytesLength; i += 1)
            {
                magicBytes[i] = br.ReadByte();

                foreach (var kvPair in imageFormatDecoders)
                {
                    if (magicBytes.StartsWith(kvPair.Key))
                    {
                        return kvPair.Value;
                    }
                }
            }

            throw new ArgumentException("Could not recognise image format", "imageBytes");
        }
    }

    private static bool StartsWith(this byte[] thisBytes, byte[] thatBytes)
    {
        for (int i = 0; i < thatBytes.Length; i += 1)
        {
            if (thisBytes[i] != thatBytes[i])
            {
                return false;
            }
        }

        return true;
    }

    private static Dictionary<byte[], ImageFormat> imageFormatDecoders = new Dictionary<byte[], ImageFormat>()
    {
        { new byte[] { 0x42, 0x4D }, ImageFormat.Bmp },
        { new byte[] { 0x47, 0x49, 0x46, 0x38, 0x37, 0x61 }, ImageFormat.Gif },
        { new byte[] { 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 }, ImageFormat.Gif },
        { new byte[] { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A }, ImageFormat.Png },
        { new byte[] { 0xFF, 0xD8 }, ImageFormat.Jpeg },
    };
}
```
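The signature-table idea is language-agnostic; the same lookup can be sketched in a few lines of JavaScript. The table below covers only the PNG/JPEG/GIF/BMP prefixes used in the answer, and the MIME strings are the standard ones for those formats:

```javascript
// Map of magic-byte prefixes to content types (same signatures as above).
var signatures = [
  { bytes: [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A], type: "image/png" },
  { bytes: [0x47, 0x49, 0x46, 0x38, 0x37, 0x61], type: "image/gif" },
  { bytes: [0x47, 0x49, 0x46, 0x38, 0x39, 0x61], type: "image/gif" },
  { bytes: [0xFF, 0xD8], type: "image/jpeg" },
  { bytes: [0x42, 0x4D], type: "image/bmp" }
];

// Compare the first bytes of the data against each known prefix.
function sniffContentType(bytes) {
  for (var i = 0; i < signatures.length; i++) {
    var sig = signatures[i].bytes;
    var match = bytes.length >= sig.length;
    for (var j = 0; match && j < sig.length; j++) {
      if (bytes[j] !== sig[j]) match = false;
    }
    if (match) return signatures[i].type;
  }
  return null; // unknown format
}

console.log(sniffContentType([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A, 0x00]));
// -> "image/png"
```

Because longer signatures are listed first, a PNG is never misread as one of the two-byte formats.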
Finding out the ContentType of an Image from the byte[]
[ "", "c#", "asp.net", "asp.net-mvc", "" ]
I would like to simplify my JSP's even further by transparently including them. For instance, this is the line I would like to remove: ``` <%@ include file="/jsp/common/include.jsp"%> ``` The include.jsp file basically declares all the tag libraries I am using. I am running this on WebSphere 6.0.2 I believe and have already tried this configuration: ``` <!-- Include this for every JSP page so we can strip an extra line from the JSP --> <jsp-config> <jsp-property-group> <url-pattern>*.htm</url-pattern> <!--<include-prelude>/jsp/common/include.jsp</include-prelude>--> <include-coda>/jsp/common/include.jsp</include-coda> </jsp-property-group> </jsp-config> ``` Both the `include-prelude` and `include-coda` did not work. I was reading that other WebSphere users were not able to get this up and running; however, tomcat users were able to.
The `jsp-property-group` was introduced in JSP 2.0 (i.o.w. Servlet 2.4). Websphere 6.0 is Servlet 2.3. So you have 3 options: 1. Forget it. 2. Upgrade Websphere. 3. Replace Websphere.
I'm not sure in which version of the Servlet spec this was introduced... is it possible that Websphere's servlet container doesn't support it? Either way, for this sort of task there's a much nicer 3rd-party tool called [SiteMesh](http://www.opensymphony.com/sitemesh/). It allows you to compose pages in exactly the sort of way you describe, but in a very flexible way. Recommended.
Automatically include a JSP in every JSP
[ "", "java", "jsp", "websphere", "" ]
I have a large xml file and want to get a defined number of `<Cooperation>` nodes from it. What's the best way to handle this? Currently, I'm using this code

```
public string FullCooperationListChunkGet(int part, int chunksize)
{
    StringBuilder output_xml = new StringBuilder();

    IEnumerable<XElement> childList = from el in xml.Elements("Cooperations").Skip(part * chunksize).Take(chunksize)
                                      select el;

    foreach (XElement x in childList.Elements())
    {
        output_xml.Append(x.ToString());
    }

    return output_xml.ToString();
}
```

`Skip(part * chunksize).Take(chunksize)` doesn't work (it seems to apply only to the Cooperations tag and not the Cooperation tags). Can somebody point me in the right direction? Thanks, rAyt

Edit: The background is this: I'm pushing these xml parts via a webservice to a Blackberry. Unfortunately, the http request size on a Blackberry Enterprise Server is limited to 256 kb by default.

**Part of the XML File:**

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<Cooperations>
  <Cooperation>
    <CooperationId>xxx</CooperationId>
    <CooperationName>xxx</CooperationName>
    <LogicalCustomers>
      <LogicalCustomer>
        <LogicalCustomerId>xxx</LogicalCustomerId>
        <LogicalCustomerName>xxx</LogicalCustomerName>
        <Customers>
          <Customer>
            <CustomerId>xxx</CustomerId>
            <CustomerName>xxx</CustomerName>
          </Customer>
          <Customer>
            <CustomerId>xxx</CustomerId>
            <CustomerName>xxx</CustomerName>
          </Customer>
        </Customers>
      </LogicalCustomer>
      <LogicalCustomer>
        <LogicalCustomerId>xxx</LogicalCustomerId>
        <LogicalCustomerName>xxx</LogicalCustomerName>
        <Customers>
          <Customer>
            <CustomerId>xxx</CustomerId>
            <CustomerName>xxx</CustomerName>
          </Customer>
          <Customer>
            <CustomerId>xxx</CustomerId>
            <CustomerName>xxx</CustomerName>
          </Customer>
        </Customers>
      </LogicalCustomer>
      <LogicalCustomer>
        <LogicalCustomerId>xxx</LogicalCustomerId>
        <LogicalCustomerName>xxx</LogicalCustomerName>
        <Customers>
          <Customer>
            <CustomerId>xxx</CustomerId>
            <CustomerName>xxx</CustomerName>
          </Customer>
        </Customers>
      </LogicalCustomer>
    </LogicalCustomers>
  </Cooperation>
  <Cooperation>
  ...
```
For `XDocument`, I expect you want something like:

```
var qry = doc.Root.Elements("Cooperation").Skip(part * chunksize).Take(chunksize);
```

however, if the data is *large*, you might have to drop down to `XmlReader` instead... I'll try to do an example... (update; 512kb probably isn't worth it...)

The problem with your code is that you are using `.Elements()` here:

```
foreach (XElement x in childList.Elements())
{
    output_xml.Append(x.ToString());
}
```

Just remove that:

```
foreach (XElement x in childList)
{
    output_xml.Append(x.ToString());
}
```

For info - you are also using query syntax unnecessarily:

```
IEnumerable<XElement> childList = from el in xml.Elements("Cooperations")
                                              .Skip(part * chunksize).Take(chunksize)
                                  select el;
```

is 100% identical to:

```
IEnumerable<XElement> childList = xml.Elements("Cooperations")
                                     .Skip(part * chunksize).Take(chunksize);
```

(since the compiler ignores an obvious `select`, without mapping it to the `Select` LINQ method)
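The Skip/Take paging arithmetic is the same in any language; for intuition, here is the identical chunking logic sketched in JavaScript, where `Skip(part * chunksize).Take(chunksize)` corresponds to a single `slice` call:

```javascript
// Return the `part`-th chunk of `chunksize` items -- the same arithmetic
// as Skip(part * chunksize).Take(chunksize) in the LINQ answer.
function getChunk(items, part, chunksize) {
  var start = part * chunksize;
  return items.slice(start, start + chunksize);
}

var cooperations = ["c0", "c1", "c2", "c3", "c4"];
console.log(getChunk(cooperations, 1, 2)); // -> ["c2", "c3"]
console.log(getChunk(cooperations, 2, 2)); // -> ["c4"]  (last chunk is short)
```

As with Take, asking for a chunk past the end simply yields an empty result rather than an error.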
Do you have an xml document or a fragment, i.e. do you have more than one "Cooperations" node? If you have more, which Cooperation nodes are you expecting to get: from just one Cooperations element or across multiple? The reason for asking is that you have written xml.Element**s**("Cooperations"). Wouldn't this do the trick: ``` xml.Element("Cooperations").Elements("Cooperation").Skip(...).Take(...) ```
XDocument Get Part of XML File
[ "", "c#", "xml", "" ]
I'd like to initialize an SD card with the FAT16 file system. Assuming that I have my SD reader on drive G:, how can I easily format it to FAT16? **UPDATE:** To clarify, I'd like to do that on the .NET platform using C#, in a way that I can detect errors and that would work on Windows XP and above.
I tried the answers above; unfortunately it was not as simple as it seems... The first answer, using the management object, looks like the correct way of doing so, but unfortunately the "Format" method is not supported in Windows XP. The second and the third answers work but require the user to confirm the operation. In order to do that without any intervention from the user, I used the second option, redirecting the input and output streams of the process. When I redirected only the input stream, the process failed. The following is an example:

```
DriveInfo[] allDrives = DriveInfo.GetDrives();

foreach (DriveInfo d in allDrives)
{
    if (d.IsReady && (d.DriveType == DriveType.Removable))
    {
        ProcessStartInfo startInfo = new ProcessStartInfo();
        startInfo.FileName = "format";
        startInfo.Arguments = "/fs:FAT /v:MyVolume /q " + d.Name.Remove(2);
        startInfo.UseShellExecute = false;
        startInfo.CreateNoWindow = true;
        startInfo.RedirectStandardOutput = true;
        startInfo.RedirectStandardInput = true;

        Process p = Process.Start(startInfo);
        StreamWriter processInputStream = p.StandardInput;
        processInputStream.Write("\r\n");
        p.WaitForExit();
    }
}
```
You could use [pinvoke to call SHFormatDrive](http://www.pinvoke.net/default.aspx/shell32/SHFormatDrive.html).

```
[DllImport("shell32.dll")]
static extern uint SHFormatDrive(IntPtr hwnd, uint drive, uint fmtID, uint options);

public enum SHFormatFlags : uint
{
    SHFMT_ID_DEFAULT = 0xFFFF,
    SHFMT_OPT_FULL = 0x1,
    SHFMT_OPT_SYSONLY = 0x2,
    SHFMT_ERROR = 0xFFFFFFFF,
    SHFMT_CANCEL = 0xFFFFFFFE,
    SHFMT_NOFORMAT = 0xFFFFFFFD,
}

// Drive letter: A is 0, Z is 25, so 6 is G:
uint result = SHFormatDrive(
    this.Handle,
    6,
    (uint)SHFormatFlags.SHFMT_ID_DEFAULT,
    0);

if (result == (uint)SHFormatFlags.SHFMT_ERROR)
    MessageBox.Show("Unable to format the drive");
```
How to programmatically format an SD card with FAT16?
[ "", "c#", ".net", "windows", "" ]
(SQL 2005) Is it possible for a raiserror to terminate a stored proc? For example, in a large system we've got a value that wasn't expected being entered into a specific column. In an update trigger, if you write:

```
if exists (select * from inserted where testcol = 7)
begin
    raiserror('My Custom Error', 16, 1)
end
```

the update is still applied. However, if you run

```
if exists (select * from inserted where testcol = 7)
begin
    select 1/0
end
```

a divide-by-zero error is thrown that actually terminates the update. Is there any way I can do this with a raiserror so I can get custom error messages back?
In a trigger, issue a ROLLBACK, RAISERROR and then RETURN. see [Error Handling in SQL Server - Trigger Context by Erland Sommarskog](http://www.sommarskog.se/error-handling-I.html#triggercontext)
Can you not just add a **CHECK** constraint to the column to prevent it from being inserted in the first place? ``` ALTER TABLE YourTable ADD CONSTRAINT CK_No_Nasties CHECK (testcol <> 7) ``` Alternatively you could start a transaction in your insert sproc (if you have one) and roll it back if an error occurs. This can be implemented with **TRY**, **CATCH** in SQL Server 2005 and avoids having to use a trigger.
SQL Statement Termination using RAISERROR
[ "", "sql", "sql-server-2005", "triggers", "raiserror", "" ]
I want to validate a string containing only numbers. Easy validation? I added a RegularExpressionValidator with ValidationExpression="\d+". Looks okay - but nothing is flagged when only a space is entered! Even many spaces are validated okay. I don't need this field to be mandatory. I can trim on the server, but can't the regular expression do everything?
This is by design and tends to throw many people off. The RegularExpressionValidator does not make a field mandatory and allows it to be blank and accepts whitespaces. The \d+ format is correct. Even using ^\d+$ will result in the same problem of allowing whitespace. The only way to force this to disallow whitespace is to also include a RequiredFieldValidator to operate on the same control. This is per the [RegularExpressionValidator documentation](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.regularexpressionvalidator.aspx), which states: > Validation succeeds if the input > control is empty. If a value is > required for the associated input > control, use a RequiredFieldValidator > control in addition to the > RegularExpressionValidator control. A regular expression check of the field in the code-behind would work as expected; this is only an issue with the RegularExpressionValidator. So you could conceivably use a CustomValidator instead and say `args.IsValid = Regex.IsMatch(txtInput.Text, @"^\d+$")` and if it contained whitespace then it would return false. But if that's the case why not just use the RequiredFieldValidator per the documentation and avoid writing custom code? Also a CustomValidator means a mandatory postback (unless you specify a client validation script with equivalent javascript regex).
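The anchored pattern itself behaves as the answer describes; regex semantics here are the same across languages, so a quick JavaScript sketch shows why `^\d+$` rejects whitespace once the pattern is actually applied:

```javascript
var digitsOnly = /^\d+$/;

console.log(digitsOnly.test("12345")); // -> true
console.log(digitsOnly.test(" "));     // -> false (whitespace is not \d)
console.log(digitsOnly.test("12 34")); // -> false (anchors forbid inner spaces)
console.log(digitsOnly.test(""));      // -> false when applied directly --
// the "empty input passes" behaviour is the validator control's doing,
// not the pattern's
```

This is exactly why a code-behind or CustomValidator check behaves differently from the RegularExpressionValidator: the control short-circuits empty input before the regex ever runs.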
Try the Ajax FilteredTextBox; this will not allow spaces. <http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/FilteredTextBox/FilteredTextBox.aspx>
RegularExpressionValidator not firing on white-space entry
[ "", "c#", "asp.net", "validation", "" ]
Can a J2ME app be triggered by a message from a remote web server? I want to perform a task on the client mobile phone as soon as the J2ME app running on it receives this message. I have read about HTTP connections; however, as I understand it, HTTP is a client-initiated protocol and the server will only reply to client requests. Any idea if there is any protocol where the server can send a command to the client without the client initiating any request? How about Socket/Stream based (TCP) or UDP interfaces?
If the mobile device doesn't allow you to make TCP connections and you are limited to HTTP requests, then you're looking at implementing "long polling". You POST an HTTP request and the web server will wait as long as possible (before things time out) to answer. If something arrives while the connection is idling it can be received directly; if something arrives between long-polling requests it is queued until a request comes in. If you can make TCP connections, then just set up a connection and let it stay idle. I have ICQ and IRC applications that essentially just sit there waiting for the server to send them something.
You should look at the PushRegistry feature, where you can send an SMS to a specific number, have the application started when the phone receives that SMS, and then make the required HTTP connection or whatever. However, the downside is that you might have to sign the application to have it working on devices, and you also need an SMS aggregator like [SMSLib](http://www.smslib.org) or [Kannel](http://kannel.org)
web server sending command to a J2ME app
[ "", "java", "http", "java-me", "mobile", "sockets", "" ]
I have a div that needs to be moved from one place to another in the DOM. So at the moment I am doing it like so: ``` flex.utils.get('oPopup_About').appendChild(flex.utils.get('oUpdater_About')); ``` But IE, being, well, IE, doesn't work. It works in all other browsers, just not in IE. I need to do it this way as the element (div) **'oUpdater\_About'** needs to be reused as it is populated over and over. So I just need to be able to move the div around the DOM; appendChild will let this happen in all browsers but IE. Thanks in advance!
You have to remove the node first, before you can append it anywhere else. One node cannot be at two places at the same time. ``` var node = flex.utils.get('oUpdater_About') node.parentNode.removeChild(node); flex.utils.get('oPopup_About').appendChild(node); ```
Make sure to clone oUpdater\_About (with `node.cloneNode(true)`). This way you get a copy and can reuse the dom-snippet as often as you want (in any browser).
appendChild in IE6/IE7 does not work with existing elements
[ "", "javascript", "internet-explorer", "dom", "appendchild", "" ]
I read somewhere that the `isset()` function treats an empty string as `TRUE`, therefore `isset()` is not an effective way to validate text inputs and text boxes from an HTML form. So you can use `empty()` to check that a user typed something. 1. Is it true that the `isset()` function treats an empty string as `TRUE`? 2. Then in which situations should I use `isset()`? Should I always use `!empty()` to check if there is something? For example instead of ``` if(isset($_GET['gender']))... ``` Using this ``` if(!empty($_GET['gender']))... ```
isset() checks if a variable has a value, including `False`, `0` or an empty string, but not NULL. It returns TRUE if the variable exists and is not NULL; FALSE otherwise. empty() does the reverse of what `isset` does (i.e. `!isset()`), plus an additional check for whether the value is "empty", which includes an empty string, 0, NULL, false, or an empty array. It returns FALSE if the variable is set and has a non-empty, non-zero value; TRUE otherwise.
In the most general way : * [`isset`](http://php.net/isset) tests if a variable (or an element of an array, or a property of an object) **exists** (and is not null) * [`empty`](http://php.net/empty) tests if a variable is either not set or contains an empty-like value. To answer **question 1** : ``` $str = ''; var_dump(isset($str)); ``` gives ``` boolean true ``` Because the variable `$str` exists. And **question 2** : You should use `isset` to determine whether a variable **exists**; for instance, if you are getting some data as an array, you might need to check if a key is set in that array (and its value is not null). Think about `$_GET` / `$_POST`, for instance. If you want to know whether a variable exists **and** not "empty", that is the job of `empty`.
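For readers coming from other languages, the set of values PHP's `empty()` treats as "empty" can be mirrored to build intuition. A rough JavaScript approximation (note that PHP also counts the string "0" as empty, which often surprises people; this stub is an illustration of the rules, not a PHP implementation):

```javascript
// Rough JS approximation of PHP's empty(): true for the values PHP
// considers "empty" -- "", "0", 0, null, undefined, false, [].
function phpEmpty(value) {
  if (value === undefined || value === null || value === false) return true;
  if (value === 0 || value === "" || value === "0") return true;
  if (Array.isArray(value) && value.length === 0) return true;
  return false;
}

console.log(phpEmpty(""));     // -> true
console.log(phpEmpty("0"));    // -> true  (the PHP quirk)
console.log(phpEmpty("male")); // -> false
console.log(phpEmpty(0));      // -> true
```

So for a form field like `$_GET['gender']`, `!empty(...)` rejects both a missing key and a submitted-but-blank value, which is usually what validation wants.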
Where should I use isset() and !empty()?
[ "", "php", "isset", "" ]
I'm trying to parse XML returned from the YouTube API. The API calls work correctly and create an XmlDocument. I can get an XmlNodeList of the "entry" tags, but I'm not sure how to get the elements inside each entry.

```
XmlDocument xmlDoc = youtubeService.GetSearchResults(search.Term, "published", 1, 50);
XmlNodeList listNodes = xmlDoc.GetElementsByTagName("entry");

foreach (XmlNode node in listNodes)
{
    //not sure how to get elements in here
}
```

The XML document schema is shown here: <http://code.google.com/apis/youtube/2.0/developers_guide_protocol_understanding_video_feeds.html> I know that node.Attributes is the wrong call, but am not sure what the correct one is. By the way, if there is a better way (faster, less memory) to do this by serializing it or using linq, I'd be happy to use that instead. Thanks for any help!
Here are some examples of reading the XmlDocument. I don't know what's fastest or what needs less memory, but I would prefer LINQ to XML because of its clearness.

```
XmlDocument xmlDoc = youtubeService.GetSearchResults(search.Term, "published", 1, 50);
XmlNodeList listNodes = xmlDoc.GetElementsByTagName("entry");

foreach (XmlNode node in listNodes)
{
    // get child nodes
    foreach (XmlNode childNode in node.ChildNodes)
    {
    }

    // get specific child nodes
    XPathNavigator navigator = node.CreateNavigator();
    XPathNodeIterator iterator = navigator.Select(/* xpath selector according to the elements/attributes you need */);
    while (iterator.MoveNext())
    {
        // f.e. iterator.Current.GetAttribute(), iterator.Current.Name and iterator.Current.Value available here
    }
}
```

and the LINQ to XML one:

```
XmlDocument xmlDoc = youtubeService.GetSearchResults(search.Term, "published", 1, 50);
XDocument xDoc = XDocument.Parse(xmlDoc.OuterXml);

var entries = from entry in xDoc.Descendants("entry")
              select new
              {
                  Id = entry.Element("id").Value,
                  Categories = entry.Elements("category").Select(c => c.Value)
              };

foreach (var entry in entries)
{
    // entry.Id and entry.Categories available here
}
```
I realise this has been answered and LINQ to XML is what I'd go with, but another option would be XPathNavigator. Something like:

```
XPathNavigator xmlNav = xmlDoc.CreateNavigator();
XPathNodeIterator xmlItr = xmlNav.Select("/XPath/expression/here");
while (xmlItr.MoveNext())
    ...
```

The code is off the top of my head so it may be wrong, and there may be a better way with XPathNavigator, but it should give you the general idea.
How to Parse XML file in c# (youtube api result)?
[ "", "c#", "asp.net", "xml", "xsd", "" ]
When defining the markup for an asp gridview and the tag **Columns**, one can only choose from a predefined set of controls to add within it (asp:BoundField, asp:ButtonField etc). I'm curious whether I can add the same type of behavior, say restricting the content to a custom control with the properties "Text" and "ImageUrl", to a TemplateContainer defined in a standard usercontrol, and then handle the rendering of each element within the container from code behind somehow?
Alright, I finally solved it, which means I can do the following:

```
<%@ Register src="~/Controls/Core/ContextMenu.ascx" tagname="ContextMenu" tagprefix="uc" %>
<%@ Register Assembly="App_Code" Namespace="Core.Controls.ContextMenu" TagPrefix="cc" %>

<uc:ContextMenu ID="ContextMenuMain" runat="server">
    <Items>
        <cc:ContextMenuItem Text="New" ImageUrl="..." />
        <cc:ContextMenuItem Text="Save" ImageUrl="..." />
    </Items>
</uc:ContextMenu>
```

Where each ContextMenuItem is a custom class in App\_Code; notice that I have to register the App\_Code assembly in order for the markup to recognize the class. The namespace points to the location of the class. For the code behind of the usercontrol we just add this:

```
private List<ContextMenuItem> items = new List<ContextMenuItem>();

[PersistenceMode(PersistenceMode.InnerProperty), DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public List<ContextMenuItem> Items
{
    get
    {
        if (items == null)
        {
            items = new List<ContextMenuItem>();
        }
        return items;
    }
    set
    {
        items = value;
    }
}
```

Which can be processed by the usercontrol when it's time to render :)
FYI: the fields (asp:BoundField, asp:ButtonField etc) are not actually controls but are instead derived from the DataControlField class. Likewise, the Columns property is not an ITemplate but a DataControlFieldCollection. Something like that should be possible if your controls all derive from the same class or implement the same interface.
How to add same behaviour to a usercontrol template as the gridview <Columns>
[ "", "c#", "asp.net", "user-controls", "" ]
For correctness, I would like to know if there is a way to instantiate a date so that it contains the lowest date value possible in C# .NET v2.
Try DateTime.MinValue. This is the lowest possible value for a DateTime instance in the CLR. It is a language independent value.
A newly constructed DateTime object also handily defaults to MinValue.
absolute min date
[ "", "c#", ".net", "datetime", ".net-2.0", "" ]
I have a perl script I'd like to filter my cpp/h files through before gcc processes them normally -- basically as an extra preprocessing step. Is there an easy way to do this? I realize I can feed the cpp files to the script and have gcc read the output from stdin, but this doesn't help with the header files.
The classic way to handle such a process is to treat the source code (input to the Perl filter) as a new language, with a new file suffix. You then tell `make` that the way to compile a C++ source file from this new file type is with the Perl script. For example: * New suffix: `.ccp` * New rule (assuming `.cc` suffix): ``` .ccp.cc: ${FILTERSCRIPT} $< ``` * Add the new suffix to the suffix list - with priority over the normal C++ rules. The last point is the trickiest. If you just add the `.ccp` suffix to the list, then `make` won't really pay attention to changes in the `.ccp` file when the `.cc` file exists. You either have to remove the intermediate `.cc` file or ensure that `.ccp` appears before `.cc` in the suffixes list. (Note: if you write a '`.ccp.o`' rule without a '`.ccp.cc`' rule and don't ensure that that the '`.cc`' intermediate is cleaned up, then a rebuild after a compilation failure may mean that `make` only compiles the '`.cc`' file, which can be frustrating and confusing.) If changing the suffix is not an option, then write a compilation script that does the filtering and invokes the C++ compiler directly.
The C and C++ preprocessor does not have any support for this kind of thing. The only way to handle this is to have your makefile (or whatever) process all the files through the perl script before calling the compiler. This is obviously very difficult, and is one very good reason for not designing architectures that need such a step. What are you doing that makes you think you need such a facility? There is probably a better solution that you are not aware of.
Filter C++ through a perl script?
[ "", "c++", "perl", "gcc", "preprocessor", "" ]
I am using ASP.NET with C#. I have an HtmlSelect element and I am attempting to call a javascript function when the index of the select is changed. riskFrequencyDropDown is dynamically created in the C# code behind. I tried:

```
riskFrequencyDropDown.Attributes.Add("onchange", "updateRiskFrequencyOfPointOnMap(" + riskFrequencyDropDown.ID.Substring(8) + "," + riskFrequencyDropDown.SelectedValue + ");");
```

but it does not call the javascript function on my page. When I remove the parameters it works fine, but I need to pass these parameters to ensure proper functionality. Any insight into this problem would be much appreciated.
Are your parameters numeric or alphanumeric? If they contain letters, you'll need to quote them in the javascript. Note the addition of the single quotes below. ``` riskFrequencyDropDown.Attributes.Add("onchange", "updateRiskFrequencyOfPointOnMap('" + riskFrequencyDropDown.ID.Substring(8) + "','" + riskFrequencyDropDown.SelectedValue +"');"); ```
tvanfosson's response answers your question. Another option to consider (that might keep you from suffering from this again): author simpler javascript in your C#:

```
riskFrequencyDropDown.Attributes.Add("onchange", "updateRiskFrequencyOfPointOnMap(this);");
```

And then, inside the javascript function itself (that gets called on the onchange), get the values you need *from the dropDown object* (client-side, the element exposes the standard DOM `id` and `value` properties):

```
function updateRiskFrequencyOfPointOnMap(dropDown) {
    var riskFrequencyId = dropDown.id.substring(8);
    var riskFrequencyValue = dropDown.value;
    // etc
}
```

A less-obvious benefit to this design: if you ever change how you use the dropdown object (substring(6) vs substring(8), maybe), you don't have to change your C# -- you only have to change your javascript (and only in one place).
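This pattern is easy to exercise without a browser by passing any object that has the standard DOM `id` and `value` properties. A quick sketch; the `ddlRisk_` naming convention (an eight-character prefix before the numeric id) is a hypothetical example, not something the question specifies:

```javascript
// Receives the <select> element (or any object shaped like one) and
// extracts the numeric suffix of its id plus its current value.
function updateRiskFrequencyOfPointOnMap(dropDown) {
  var riskFrequencyId = dropDown.id.substring(8); // skip the 8-char "ddlRisk_" prefix
  var riskFrequencyValue = dropDown.value;
  return { id: riskFrequencyId, value: riskFrequencyValue };
}

// In the page this would be the <select> element itself; here a stub:
var fakeDropDown = { id: "ddlRisk_42", value: "High" };
console.log(updateRiskFrequencyOfPointOnMap(fakeDropDown));
// -> { id: '42', value: 'High' }
```

Because the function takes the element itself, the same code works no matter how ASP.NET mangles the rendered ClientID, as long as the suffix convention holds.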
How to call javascript function on a select elements selectedIndexChange
[ "", "c#", "asp.net", "html-select", "" ]
One thing I always shy away from is 3d graphics programming, so I've decided to take on a project working with 3d graphics for a learning experience. I would like to do this project in Linux. I want to write a simple 3d CAD type program. Something that will allow the user to manipulate objects in 3d space. What is the best environment for doing this type of development? I'm assuming C++ is the way to go, but what tools? Will I want to use Eclipse? What tools will I want?
**OpenGL/SDL**, and the IDE is kind-of irrelevant. My personal IDE preference is gedit/VIM + Command windows. There are tons of IDE's, all of which will allow you to program with OpenGL/SDL and other utility libraries. I am presuming you are programming in C, but the bindings exist for Python, Perl, PHP or whatever else, so no worries there. Have a look online for open-source CAD packages, they may offer inspiration! Another approach might be a C#/Mono implementations ... these apps are gaining ground ... and you might be able to make it a bit portable.
It depends on what exactly you want to learn. At the heart of the 3d stuff is OpenGL; there is really no competitor for 3d apps, especially on non-windows platforms. On top of this you might want a scenegraph (OSG/OpenSceneGraph, Coin) - this handles the list of objects you want to draw, their relationship to each other in space, the ability to move one relative to the others, selecting objects etc. It calls OpenGL to do the actual rendering. Then on top of this you need a gui library: Qt, FLTK, wxWidgets, or one built into the scene library, or one written directly in OpenGL. This handles the menus, dialogs, frames etc of your app, and provides an OpenGL canvas to draw into. Personal choices are [openscenegraph](http://openscenegraph.org) and [Qt](http://qt.nokia.com/)
What environment should I use for 3d programming on Linux?
[ "", "c++", "linux", "3d", "" ]
I've got a base class:

```
public abstract class StuffBase
{
    public abstract void DoSomething();
}
```

And two derived classes

```
public class Stuff1 : StuffBase
{
    public override void DoSomething()
    {
        Console.WriteLine("Stuff 1 did something cool!");
    }

    public Stuff1()
    {
        Console.WriteLine("New stuff 1 reporting for duty!");
    }
}

public class Stuff2 : StuffBase
{
    public override void DoSomething()
    {
        Console.WriteLine("Stuff 2 did something cool!");
    }

    public Stuff2()
    {
        Console.WriteLine("New stuff 2 reporting for duty!");
    }
}
```

Okay, now say I've got a list of items:

```
var items = new List<StuffBase>();
items.Add(new Stuff1());
items.Add(new Stuff2());
```

and I want them all to call their DoSomething() method. I could expect to just iterate the list and call their DoSomething() method, so let's say I've got a method to do that called AllDoSomething() that just iterates over the list and does the job:

```
public static void AllDoSomething(List<StuffBase> items)
{
    items.ForEach(i => i.DoSomething());
}
```

What is the practical difference of the following method?

```
public static void AllDoSomething<T>(List<T> items) where T : StuffBase
{
    items.ForEach(i => i.DoSomething());
}
```

Both methods appear in real terms, although being syntactically different, to be doing the same thing. Are they just different ways of doing the same thing? I understand generics and type constraints but can't see why I would use one way over the other in this instance.
This is because, as of yet, C# does not support [Covariance](http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)#C.23).

> More formally, in C# v2.0 if T is a
> subtype of U, then T[] is a subtype of
> U[], but G&lt;T&gt; is not a subtype of G&lt;U&gt;
> (where G is any generic type). In
> type-theory terminology, we describe
> this behavior by saying that C# array
> types are "covariant" and generic
> types are "invariant".

*Reference: <http://blogs.msdn.com/rmbyers/archive/2005/02/16/375079.aspx>*

If you have the following method:

```
public static void AllDoSomething(List<StuffBase> items)
{
    items.ForEach(i => i.DoSomething());
}

var items = new List<Stuff2>();
AllDoSomething(items); // Does not compile
```

Whereas if you use the generic type constraint, it will. For more information about Covariance and Contravariance, check out [Eric Lippert's series of posts](http://blogs.msdn.com/ericlippert/archive/tags/Covariance+and+Contravariance/default.aspx).

---

Other posts worth reading:

* <http://www.pabich.eu/blog/archive/2008/02/12/c-generics---parameter-variance-its-constraints-and-how-it.aspx>
* <http://blogs.msdn.com/rmbyers/archive/2006/06/01/613690.aspx>
* <http://msdn.microsoft.com/en-us/library/ms228359(VS.80).aspx>
* <http://www.csharp411.com/convert-between-generic-ienumerablet/>
* <http://research.microsoft.com/apps/pubs/default.aspx?id=64042>
* [Why can't List<parent> = List<child>?](https://stackoverflow.com/questions/1169215/why-cant-listparent-listchild)
Suppose you had a list: ``` List<Stuff1> l = // get from somewhere ``` Now try: ``` AllDoSomething(l); ``` With the generic version, it will be allowed. With the non-generic, it won't. That's the essential difference. A list of `Stuff1` is not a list of `StuffBase`. But in the generic case, you don't require it to be exactly a list of `StuffBase`, so it's more flexible. You could work around that by first copying your list of `Stuff1` into a list of `StuffBase`, to make it compatible with the non-generic version. But then suppose you had a method: ``` List<T> TransformList<T>(List<T> input) where T : StuffBase { List<T> output = new List<T>(); foreach (T item in input) { // examine item and decide whether to discard it, // make new items, whatever } return output; } ``` Without generics, you could accept a list of `StuffBase`, but you would then have to return a list of `StuffBase`. The caller would have to use casts if they knew that the items were really of a derived type. So generics allow you to preserve the actual type of an argument and channel it through the method to the return type.
When should or shouldn't I be using generic type constraints?
[ "", "c#", "generics", "c#-2.0", "type-constraints", "" ]
Is there any difference between DateTime in c# and DateTime in SQL server?
Precision and range (so, everything important ;-p) From MSDN: [.NET System.DateTime](http://msdn.microsoft.com/en-us/library/system.datetime.aspx) > The DateTime value type represents dates and times with values ranging from 12:00:00 midnight, January 1, 0001 Anno Domini (Common Era) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) > > Time values are measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar [Transact SQL datetime](http://msdn.microsoft.com/en-us/library/ms187819.aspx) > Date Range: January 1, 1753, through December 31, 9999 > > Accuracy: Rounded to increments of .000, .003, or .007 seconds
You can also use datetime2 of SQL Server 2008. The precision there is 100ns as well. In fact, it was introduced to match the .NET DateTime precision. [datetime2 (Transact-SQL)](http://technet.microsoft.com/en-us/library/bb677335.aspx)
Is there any difference between DateTime in c# and DateTime in SQL server?
[ "", "c#", "sql-server", "datetime", "" ]
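The accuracy note quoted above explains the odd .000/.003/.007 pattern: SQL Server's legacy `datetime` stores time-of-day in 1/300-second ticks. A small Python sketch of that rounding (the round-half-up rule is an assumption inferred from the documented pattern, not something stated in the answers above):

```python
def sql_datetime_round_ms(ms: int) -> int:
    """Round a millisecond value the way SQL Server's legacy datetime
    type does: to the nearest 1/300-second tick and back to ms, which
    always lands on a .000/.003/.007 boundary."""
    ticks = int(ms * 300 / 1000 + 0.5)    # nearest tick, rounding half up
    return int(ticks * 1000 / 300 + 0.5)  # back to milliseconds
```

Note that 999 ms rounds up to a full second, which is why end-of-day timestamps like 23:59:59.999 can roll over to midnight of the next day.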
Is there a preferred way to identify core .NET Framework assemblies? i.e. assemblies which are part of the framework? This is for an application auto-updater which 1) takes in an assembly using ASP.NET upload 2) checks its assembly references 3) ensures they're available for deployment too 4) pulls them as needed based on auth/authorization etc. Part #3 is where it'd be good to check if they're part of the core framework
Assemblies have attributes that you can examine with reflection: ``` object[] attribs = assembly.GetCustomAttributes(); ``` You could take a look at the objects returned by that call in the debugger, and see if any are common across the assemblies you want to categorise. **Edit:** And - what a surprise! - Jon Skeet has already posted [an answer to a similar-but-not-identical question using this technique](https://stackoverflow.com/questions/962639/detect-if-the-type-of-an-object-is-a-type-defined-by-net-framework/962676#962676). Should work for you as well.
You can check the property `Assembly.GlobalAssemblyCache` to see if it's in the GAC, but I think that's the closest you can get without parsing Microsoft's name from the assembly company.
How to identify core .NET Framework assemblies?
[ "", "c#", ".net", "" ]
I'm trying to find all the occurrences of "Arrows" in text, so in ``` "<----=====><==->>" ``` the arrows are: ``` "<----", "=====>", "<==", "->", ">" ``` This works: ``` String[] patterns = {"<=*", "<-*", "=*>", "-*>"}; for (String p : patterns) { Matcher A = Pattern.compile(p).matcher(s); while (A.find()) { System.out.println(A.group()); } } ``` but this doesn't: ``` String p = "<=*|<-*|=*>|-*>"; Matcher A = Pattern.compile(p).matcher(s); while (A.find()) { System.out.println(A.group()); } ``` No idea why. It often reports `"<"` instead of `"<===="` or similar. What is wrong?
**Solution** The following program demonstrates one possible solution to the question: ``` import java.util.regex.Pattern; import java.util.regex.Matcher; public class A { public static void main( String args[] ) { String p = "<=+|<-+|=+>|-+>|<|>"; Matcher m = Pattern.compile(p).matcher(args[0]); while (m.find()) { System.out.println(m.group()); } } } ``` Run #1: ``` $ java A "<----=====><<---<==->>==>" <---- =====> < <--- <== -> > ==> ``` Run #2: ``` $ java A "<----=====><=><---<==->>==>" <---- =====> <= > <--- <== -> > ==> ``` **Explanation** An asterisk will match zero or more of the preceding characters. A plus (`+`) will match one or more of the preceding characters. Thus `<-*` matches `<` whereas `<-+` matches `<-` and any extended version (such as `<--------`).
When you match `"<=*|<-*|=*>|-*>"` against the string `"<---"`, it matches the first part of the pattern, `"<=*"`, because `*` includes zero or more. Java matching is greedy, but it isn't smart enough to know that there is another possible longer match, it just found the first item that matches.
Find ASCII "arrows" in text
[ "", "java", "regex", "" ]
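The accepted fix is Java, but the behaviour it corrects — leftmost-first alternation plus `*` happily matching zero characters — works the same way in most regex engines. A quick Python check of both patterns (assuming Python's `re` resolves alternation the same way Java's engine does):

```python
import re

s = "<----=====><==->>"

# The broken pattern: "<=*" can match a bare "<" with zero "="s,
# so the alternation stops there and never tries "<-*".
broken = re.findall(r"<=*|<-*|=*>|-*>", s)

# The fix: "+" requires at least one "=" or "-", with explicit
# single-character alternatives for lone arrowheads.
fixed = re.findall(r"<=+|<-+|=+>|-+>|<|>", s)
```

Printing `broken` shows the truncated `"<"` match from the question, while `fixed` yields the five arrows the asker wanted.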
Hey I'm calling history.back() on the on-click of a 'back' button in a rails app. But nothing happens. There is history in the browser -- pressing the browser's back button takes me back to the correct page. If I use history.go(-2) however, the page goes back correctly. So why do I have to tell javascript to go back two pages instead of one? Any ideas how to debug this? I tried this in FF and Safari. Thanks! --Additional Info: Ok I played around some more and this works: ``` <a href='javascript:' onclick='history.back();'> ``` Originally, the code was: ``` <a href='#' onclick='history.back();'> ``` What's the difference? (Note that this used to work before, something has changed which makes the latter link not work)
Have you tried adding return false? So: ``` <a href='#' onclick='history.go(-1);return false;'> ```
If you use `link_to_function` it will automatically add `return false` to the end. ``` <%= link_to_function "Go back", "history.back()" %> ```
Javascript history.back() not working in rails app
[ "", "javascript", "ruby-on-rails", "" ]
I have written this small code snippet in C++, the output is also attached. I fail to understand why the constructor is being called only once, while I can see two calls being made to the destructor. From what I understand, the default constructor and overloaded assignment operator should be called at line 28. Can someone please throw some light on this: ``` 1 #include <iostream> 2 using namespace std; 3 4 class ABC { 5 char c; 6 public: 7 ABC() { 8 cout << "default" << endl; 9 } 10 ABC(char c) { 11 this->c = c; 12 cout << c << endl; 13 } 14 ~ABC() { 15 cout << hex << this << " destructor " << c << endl; 16 } 17 void method() { 18 cout << "method" << endl; 19 } 20 void operator= (const ABC& a) { 21 cout << "operator" << endl; 22 } 23 24 }; 25 26 int main() { 27 ABC b('b'); 28 ABC a = b; 29 } ``` --- ``` Output in g++ version 4.0.1: ~/src$ g++ test.cpp ~/src$ ./a.out b 0xbffff0ee destructor b 0xbffff0ef destructor b ```
The code you have just calls the copy constructor; this is its definition: ``` ABC(const ABC& a):c(a.c){ cout << "copying " << hex << &a << endl; } ``` And you should see output like this: ``` b copying 0x7fffebc0e02f 0x7fffebc0e02e destructor b 0x7fffebc0e02f destructor b ``` If you want to call the default constructor and then the assignment operator you must use two separate statements: ``` ABC b('b'); ABC a; a = b; ```
``` ABC a = b; ``` This is **a** *copy constructor*, not the assignment operator! You could redefine it like this; what you have is the compiler-generated one: ``` ABC(const ABC& other) { c = other.c; cout << c << " copy constructor" << endl; } ``` --- If you really insist on not using a copy constructor you can add a conversion operator like this to your class and forget the copy constructor! ``` operator char() { return c; } ```
C++ Constructor call
[ "", "c++", "" ]
I'm used to add methods to external classes like IEnumerable. But can we extend Arrays in C#? I am planning to add a method to arrays that converts it to a IEnumerable even if it is multidimensional. Not related to [How to extend arrays in C#](https://stackoverflow.com/questions/628427/how-to-extend-arrays-in-c)
``` static class Extension { public static string Extend(this Array array) { return "Yes, you can"; } } class Program { static void Main(string[] args) { int[,,,] multiDimArray = new int[10,10,10,10]; Console.WriteLine(multiDimArray.Extend()); } } ```
Yes. Either through extending the `Array` class as already shown, or by extending a specific kind of array or even a generic array: ``` public static void Extension(this string[] array) { // Do stuff } // or: public static void Extension<T>(this T[] array) { // Do stuff } ``` The last one is not exactly equivalent to extending `Array`, as it wouldn't work for a multi-dimensional array, so it's a little more constrained, which could be useful, I suppose.
Is it possible to extend arrays in C#?
[ "", "c#", ".net", "arrays", "extension-methods", "ienumerable", "" ]
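The asker's end goal — walking a possibly multidimensional array as one flat sequence — is language-independent; a minimal recursive sketch in Python, where list nesting stands in for C#'s rectangular or jagged arrays:

```python
def flatten(array):
    """Yield every scalar element of an arbitrarily nested
    list/tuple structure, depth-first, in order."""
    for item in array:
        if isinstance(item, (list, tuple)):
            yield from flatten(item)  # recurse into the inner dimension
        else:
            yield item
```

For example, `list(flatten([[1, 2], [3, [4, 5]]]))` yields the five scalars in order, regardless of nesting depth.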
What's the fastest way to find strings in text files? Case scenario: looking for a particular path in a text file with around 50000 file paths listed (each path has its own line).
A file of that size should easily fit in memory and you can make it into a std::set (or even better a hashset, if you have a library of that at hand) with the paths as its items. Checking if an exact path is there will then be very fast. If you need to look for sub-paths as well, a sorted std::vector (if you're looking for prefixes only) may be the only useful approach -- or if you're looking for completely general substrings of paths then you'll need to scan through all the vector anyway, but unless you have to do it a zillion times even that wouldn't be too bad.
Do you have to find one string once in the file, the same string repeatedly in several files, several strings in the same file? Depending on the scenario, you have several possible answers. * building a data structure (like the set proposed by Alex) is useful if you have to find several strings in the same file * using an algorithm like [Boyer-Moore](http://en.wikipedia.org/wiki/Boyer-Moore_string_search_algorithm) is efficient if you have to search for one string * using a regular expression engine will probably be preferable if you have to search for several strings.
Quickest way to find substrings in text files
[ "", "c++", "algorithm", "text", "find", "path", "" ]
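The trade-off in the accepted answer — load the ~50000 paths into a set once for fast exact lookups, but fall back to a linear scan for general substrings — can be sketched as follows (Python standing in for C++; names are illustrative):

```python
def build_index(lines):
    """Load the path list once into a set for O(1) exact lookups."""
    return set(line.strip() for line in lines)

def contains_exact(index, path):
    """Exact-path membership test against the prebuilt set."""
    return path in index

def contains_substring(lines, fragment):
    """A general substring query still has to scan every line."""
    return any(fragment in line for line in lines)
```

The set pays off when the same file is queried many times; for a one-off substring search the scan is all there is.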
The problem: we have a very complex search query. If its result yields too few rows we expand the result by UNIONing the query with a less strict version of the same query. We are discussing whether a different approach would be faster and/or better in quality. Instead of UNIONing we would create a custom SQL function which would return a matching score. Then we could simply order by that matching score. Regarding performance: will it be slower than a UNION? We use PostgreSQL. Any suggestions would be greatly appreciated. Thank you very much Max
You want to order by the "return value" of your custom function? Then the database server can't use an index for that. The score has to be calculated for each record in the table (that hasn't been excluded with a WHERE clause) and stored in some temporary storage/table. Then the order by is performed on that temporary table. So this easily can get slower than your union queries (depending on your union statements of course).
A **definitive** answer can only be given if you measure the performance of both approaches in realistic environments. Everything else is guesswork at best. There are so many variables at play here - the structure of the tables and the types of data in them, the distribution of the data, what kind of indices you have at your disposal, how heavy the load on the server is - it's almost impossible to predict any outcome, really. So really - my best advice is: try both approaches, on the live system, with live data, not just with a few dozen test rows - and measure, measure, measure. Marc
SQL Performance: UNION or ORDER BY
[ "", "sql", "performance", "postgresql", "" ]
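Neither answer can settle the performance question without measuring, but the two query shapes being compared can be put side by side. The sketch below uses Python's sqlite3 purely as a stand-in for PostgreSQL, with a `CASE` expression playing the role of the custom scoring function; it illustrates the shapes, not their relative speed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(1, "exact match"), (2, "match"), (3, "unrelated")])

# Shape 1: strict query UNIONed with a relaxed fallback.
union_rows = conn.execute("""
    SELECT id FROM docs WHERE title = 'exact match'
    UNION
    SELECT id FROM docs WHERE title LIKE '%match%'
""").fetchall()

# Shape 2: one query ordered by a computed match score
# (a custom function in PostgreSQL; a CASE expression here).
scored_rows = conn.execute("""
    SELECT id,
           CASE WHEN title = 'exact match' THEN 2
                WHEN title LIKE '%match%' THEN 1
                ELSE 0 END AS score
    FROM docs
    WHERE title LIKE '%match%'
    ORDER BY score DESC
""").fetchall()
```

Both return the same candidate rows; the scored version additionally ranks them, which is the quality argument in the question — the speed question still needs `EXPLAIN ANALYZE` on the real data.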
What I kind of want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2 ...)), but this has various disadvantages. In particular, doing concat will probably slow things down. What's the best way to get what I want? update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values. update 2: Suppose I have the following table: ``` id class_name order_by1 order_by2 1 a 0 0 2 a 0 1 3 b 1 0 4 b 0 9 ``` I want a query that will group the records by class\_name. Then, within each "class", select the record that would come first if you ordered by `order_by1` ascending then `order_by2` ascending. The result set would consist of records 2 and 3. In my magical query language, it would look something like this: ``` select max(* order by order_by1 ASC, order_by2 ASC) from table group by class_name ```
Based on an answer I gave to another question: [SQL - SELECT MAX() and accompanying field](https://stackoverflow.com/questions/1015689/sql-select-max-and-accompanying-field/2345892#2345892) To make it work for multiple columns, add more columns to the inner select's ORDER BY.
``` Select max(val) From ( Select max(fld1) as val FROM YourTable union Select max(fld2) as val FROM YourTable union Select max(fld3) as val FROM YourTable ) x ``` **Edit:** Another alternative is: ``` SELECT CASE WHEN MAX(fld1) >= MAX(fld2) AND MAX(fld1) >= MAX(fld3) THEN MAX(fld1) WHEN MAX(fld2) >= MAX(fld1) AND MAX(fld2) >= MAX(fld3) THEN MAX(fld2) WHEN MAX(fld3) >= MAX(fld1) AND MAX(fld3) >= MAX(fld2) THEN MAX(fld3) END AS MaxValue FROM YourTable ```
What's the best way to select max over multiple fields in SQL?
[ "", "sql", "" ]
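The "update 2" requirement — per `class_name`, the single row the asker's example picks (records 2 and 3) under a multi-column ordering — maps directly onto a tuple comparison, which is exactly how an ORDER BY over several columns compares rows. A Python sketch with illustrative rows:

```python
from itertools import groupby

rows = [
    {"id": 1, "class_name": "a", "order_by1": 0, "order_by2": 0},
    {"id": 2, "class_name": "a", "order_by1": 0, "order_by2": 1},
    {"id": 3, "class_name": "b", "order_by1": 1, "order_by2": 0},
    {"id": 4, "class_name": "b", "order_by1": 0, "order_by2": 9},
]

def pick_per_group(rows):
    """For each class_name, keep the row that sorts last under
    (order_by1, order_by2) ascending -- a multi-column max, since
    tuples compare element by element like ORDER BY columns."""
    keyed = sorted(rows, key=lambda r: r["class_name"])
    result = []
    for _, group in groupby(keyed, key=lambda r: r["class_name"]):
        result.append(max(group,
                          key=lambda r: (r["order_by1"], r["order_by2"])))
    return result
```

Note that `(1, 0) > (0, 9)` under tuple comparison, which is why record 3 beats record 4 — the first column wins before the second is even looked at.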
I'm looking at putting together an opensource project in Java and am heavily debating not supporting JDKs 1.4 and older. The framework could definitely be written using older Java patterns and idioms, but would really benefit from features from the more mature 1.5+ releases, like generics and annotations. So really what I want to know is if support for older JDKs is a major determining factor when selecting a framework? Understandably there are legacy systems that are stuck with older JDKs, but logistics aside, does anyone out there have a compelling technical reason for supporting 1.4 JDKs? thanks, steve
I can't think of any technical reason to stick with 1.4 compatibility for mainstream Java with the possible exception of particular mobile or embedded devices (see discussion on Jon's answer). Legacy support has to have limits and 1.5 is nearly 5 years old. The more compelling reasons to get people to move on the better, in my opinion. Update (I can't sleep): A good precedent to consider is that [Spring 3 will require Java 5 (pdf)](http://www.springsource.com/files/Hoeller-Spring-3.pdf). Also consider that a lot of the servers that the larger corporations are using are, or will soon be, EOL (WAS 5.1 is out of support [since Sep '08](http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS907-043), JBoss 4.0 support [ends Sep '09](http://www.redhat.com/security/updates/jboss_notes/)) and that Java 1.4 itself is [out of support since Oct '08](http://java.sun.com/j2se/1.4.2/).
Is it possible that someone might want to use it on a Blackberry or similar mobile device? I don't believe they generally support 1.5.
Is there still a good reason to support JDK 1.4?
[ "", "open-source", "frameworks", "legacy", "java", "" ]
I am new to C#. I am trying to compile the following program but it throws an error given at the end. I know I am making a silly mistake; any help would be much appreciated: ``` static void Main(string[] args) { IntPtr hCannedMessages = CannedMessagesInit(); using (StreamReader sr = new StreamReader(CANNED_MESSAGE_FILE)) { String line, sub; all_integer_IDs[] myobjarray;// = new all_integer_IDs[10]; for (int c = 0; c < 10; c++) { myobjarray[c] = new all_integer_IDs(); } line = sr.ReadLine(); Console.WriteLine(line); if (line.Length > 15) { sub = line.Remove(line.IndexOf(' ', 2)); Console.WriteLine("{0} \n", sub); myobjarray[0].setvalues((int)sub[2], (int)sub[3], (int)sub[4], (int)sub[5]); Console.WriteLine("{0}, {1}, {2}, {3}", myobjarray[0].m_messageID, myobjarray[0].m_messagetype, myobjarray[0].m_classID, myobjarray[0].m_categoryID); } Console.Read(); sr.Close(); } } ``` And the class is in the Class1.cs file in the same project, as follows: ``` public class all_integer_IDs { public all_integer_IDs() { setvalues(0, 0, 0, 0); } ~all_integer_IDs() { } public void setvalues (int messageID, int messagetype, int classID, int categoryID) { this.m_messageID = messageID; this.m_messagetype = messagetype; this.m_classID = classID; this.m_categoryID = categoryID; } public int m_messageID; public int m_messagetype; public int m_classID; public int m_categoryID; } ``` The error is as follows: Use of unassigned local variable 'myobjarray' at line 55, which is copied and pasted below: ``` myobjarray[c] = new all_integer_IDs(); ``` Thanks, Viren
You have not allocated space for *myobjarray*; you need to allocate it. Use: ``` all_integer_IDs[] myobjarray = new all_integer_IDs[10]; for (int c = 0; c < 10; c++) { myobjarray[c] = new all_integer_IDs(); } ``` at line 55. And please use [PascalCase](http://msdn.microsoft.com/en-us/library/x2dbyw72%28VS.71%29.aspx) for class names (in your case, AllIntegerIDs). Other developers will thank you for that. --EDIT: my bad, corrected the way to call it; please try the snippet above.
It looks like you need to declare the size and type of the array myobjarray when you instantiate it. In fact, it looks like you've already got that code, you just need to remove the comment symbol. ``` all_integer_IDs[] myobjarray = new all_integer_IDs[10](); ```
C# unassigned local variable error
[ "", "c#", "" ]
I have a model, below, and I would like to get all the distinct `area` values. The SQL equivalent is `select distinct area from tutorials` ``` class Tutorials(db.Model): path = db.StringProperty() area = db.StringProperty() sub_area = db.StringProperty() title = db.StringProperty() content = db.BlobProperty() rating = db.RatingProperty() publishedDate = db.DateTimeProperty() published = db.BooleanProperty() ``` I know that in Python I can do ``` a = ['google.com', 'livejournal.com', 'livejournal.com', 'google.com', 'stackoverflow.com'] b = set(a) b >>> set(['livejournal.com', 'google.com', 'stackoverflow.com']) ``` But that would require me moving the area items out of the query into another list and then running set against the list (sounds very inefficient) and if I have a distinct item that is in position 1001 in the datastore I wouldn't see it because of the fetch limit of 1000. I would like to get all the distinct values of area in my datastore to dump them to the screen as links.
Datastore cannot do this for you in a single query. A datastore request always returns a consecutive block of results from an index, and an index always consists of all the entities of a given type, sorted according to whatever orders are specified. There's no way for the query to skip items just because one field has duplicate values. One option is to restructure your data. For example, introduce a new entity type representing an "area". On adding a Tutorial you create the corresponding "area" if it doesn't already exist, and on deleting a Tutorial delete the corresponding "area" if no Tutorials remain with the same "area". If each area stored a count of Tutorials in that area, this might not be too onerous (although keeping things consistent with transactions etc. would actually be quite fiddly). I expect that the entity's key could be based on the area string itself, meaning that you can always do key lookups rather than queries to get area entities. Another option is to use a queued task or cron job to periodically create a list of all areas, accumulating it over multiple requests if need be, and put the results either in the datastore or in memcache. That would of course mean the list of areas might be temporarily out of date at times (or if there are constant changes, it might never be entirely up to date), which may or may not be acceptable to you. Finally, if there are likely to be very few areas compared with tutorials, you could do it on the fly by requesting the first Tutorial (sorted by area), then requesting the first Tutorial whose area is greater than the area of the first, and so on. But this requires one request per distinct area, so is unlikely to be fast.
The DISTINCT keyword has been introduced in release 1.7.4.
How to get the distinct value of one of my models in Google App Engine
[ "", "python", "google-app-engine", "google-cloud-datastore", "" ]
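The accepted answer's last suggestion — repeatedly ask for the first Tutorial whose area is greater than the last one seen — can be sketched against a plain sorted list standing in for the datastore index; each loop iteration corresponds to one datastore query:

```python
import bisect

def distinct_areas(sorted_areas):
    """Emulate repeated 'first entity with area > last seen' queries
    against a sorted index: one probe per distinct value, regardless
    of how many duplicates each value has."""
    found = []
    probe = 0
    while probe < len(sorted_areas):
        value = sorted_areas[probe]
        found.append(value)
        # the next "query": first index whose value is strictly greater
        probe = bisect.bisect_right(sorted_areas, value)
    return found
```

This is why the answer notes the cost scales with the number of distinct areas, not the number of tutorials.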
For work I need to write a TCP daemon to respond to our client software and was wondering if anyone had any tips on the best way to go about this. Should I fork for every new connection, as normally I would use threads?
It depends on your application. Threads and forking can both be perfectly valid approaches, as well as the third option of a single-threaded event-driven model. If you can explain a bit more about exactly what you're writing, it would help when giving advice. For what it's worth, here are a few general guidelines: * If you have no shared state, use forking. * If you have shared state, use threads or an event-driven system. * If you need high performance under very large numbers of connections, avoid forking as it has higher overhead (particularly memory use). Instead, use threads, an event loop, or several event loop threads (typically one per CPU). Generally forking will be the easiest to implement, as you can essentially ignore all other connections once you fork; threads the next hardest due to the additional synchronization requirements; the event loop more difficult due to the need to turn your processing into a state machine; and multiple threads running event loops the most difficult of them all (due to combining other factors).
I'd suggest forking for connections over threads any day. The problem with threads is the shared memory space, and how easy it is to manipulate the memory of another thread. With forked processes, any communication between the processes has to be intentionally done by you. Just searched and found this SO answer: [What is the purpose of fork?](https://stackoverflow.com/questions/985051/what-is-the-purpose-of-fork/985068#985068). You obviously know the answer to that, but the #1 answer in that thread has good points on the advantages of fork().
best way to write a linux daemon
[ "", "c++", "linux", "daemon", "" ]
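A minimal thread-per-connection sketch of such a daemon, using Python's `socketserver` as a stand-in for the C++ implementation (swapping `ThreadingTCPServer` for `ForkingTCPServer` gives the fork-per-connection variant discussed above):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """Each accepted connection is served by its own thread; the
    fork-per-connection design would use ForkingTCPServer instead."""
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)  # echo the payload back

def start_server():
    # port 0: let the OS pick a free port (illustrative, not production)
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def echo_once(port, payload):
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(payload)
        return sock.recv(1024)
```

The per-connection state lives entirely in the handler, which is the shared-state criterion from the accepted answer: with nothing shared, forking would work just as well.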
I'm going to be driving a touch-screen application (not a web app) that needs to present groups of images to users. The desire is to present a 3x3 grid of images with a page forward/backward capability. They can select a few and I'll present just those images. I don't see that `ListView` does quite what I want (although WPF is big enough that I might well have missed something obvious!). I could set up a `Grid` and stuff images in the grid positions. But I was hoping for something nicer, more automated, less brute-force. Any thoughts or pointers?
You might want to use an `ItemsControl`/`ListBox` and then set a `UniformGrid` panel for a 3x3 display as its `ItemsPanel` to achieve a proper WPF bindable solution. ``` <ListBox ScrollViewer.HorizontalScrollBarVisibility="Disabled"> <ListBox.ItemsPanel> <ItemsPanelTemplate> <UniformGrid Rows="3" Columns="3"/> </ItemsPanelTemplate> </ListBox.ItemsPanel> <Image Source="Images\img1.jpg" Width="100"/> <Image Source="Images\img2.jpg" Width="50"/> <Image Source="Images\img3.jpg" Width="200"/> <Image Source="Images\img4.jpg" Width="75"/> <Image Source="Images\img5.jpg" Width="125"/> <Image Source="Images\img6.jpg" Width="100"/> <Image Source="Images\img7.jpg" Width="50"/> <Image Source="Images\img8.jpg" Width="50"/> <Image Source="Images\img9.jpg" Width="50"/> </ListBox> ``` You need to set your collection of Images as ItemsSource binding if you are looking for a dynamic solution here. But the question is too broad to give an exact answer.
I know that this is a pretty old question, but I'm answering because this page is in the first page on Google and this link could be useful for someone. [WPF Photo Viewer Demo](https://github.com/microsoft/WPF-Samples/tree/master/Sample%20Applications/PhotoViewerDemo) Screenshot: ![Screenshot](https://i.stack.imgur.com/wK4hK.png)
WPF Image Gallery
[ "", "c#", "wpf", "image", "xaml", "gallery", "" ]
I have a multi-byte string containing a mixture of Japanese and Latin characters. I'm trying to copy parts of this string to a separate memory location. Since it's a multi-byte string, some of the characters use one byte and other characters use two. When copying parts of the string, I must not copy "half" Japanese characters. To be able to do this properly, I need to be able to determine where in the multi-byte string characters start and end. As an example, if the string contains 3 characters which require [2 byte][2 byte][1 byte], I must copy either 2, 4 or 5 bytes to the other location and not 3, since if I were copying 3 I would copy only half the second character. To figure out where in the multi-byte string characters start and end, I'm trying to use the Windows API functions CharNext and CharNextExA, but without luck. When I use these functions, they navigate through my string one byte at a time, rather than one character at a time. According to MSDN, CharNext is supposed to *retrieve a pointer to the next character in a string*. Here's some code to illustrate this problem: ``` #include <windows.h> #include <stdio.h> #include <wchar.h> #include <string.h> /* string consisting of six "asian" characters */ wchar_t wcsString[] = L"\u9580\u961c\u9640\u963f\u963b\u9644"; int main() { // Convert the asian string from wide char to multi-byte. LPSTR mbString = new char[1000]; WideCharToMultiByte( CP_UTF8, 0, wcsString, -1, mbString, 100, NULL, NULL); // Count the number of characters in the string. int characterCount = 0; LPSTR currentCharacter = mbString; while (*currentCharacter) { characterCount++; currentCharacter = CharNextExA(CP_UTF8, currentCharacter, 0); } } ``` (please ignore the memory leak and failure to do error checking.) Now, in the example above I would expect that characterCount becomes 6, since that's the number of characters in the asian string. But instead, characterCount becomes 18 because mbString contains 18 characters: ``` é–€é˜œé™€é˜¿é˜»é™„ ``` I don't understand how it's supposed to work. How is CharNext supposed to know whether "é–€é" in the string is an encoded version of a Japanese character, or in fact the characters é – € and é? Some notes: * I've read Joel's blog post about what every developer needs to know about Unicode. I may have misunderstood something in it though. * If all I wanted to do was to count the characters, I could count the characters in the asian string directly. Keep in mind that my real goal is copying parts of the multi-byte string to a separate location. The separate location only supports multi-byte, not widechar. * If I convert the content of mbString back to wide char using MultiByteToWideChar, I get the correct string (門阜陀阿阻附), which indicates that there's nothing wrong with mbString. EDIT: Apparently the CharNext functions don't support UTF-8, but Microsoft forgot to document that. I threw/copy-pasted together my own routine, which I won't use and which needs improving. I'm guessing it's easily crashable. ``` LPSTR CharMoveNext(LPSTR szString) { if (szString == 0 || *szString == 0) return 0; if ( (szString[0] & 0x80) == 0x00) return szString + 1; else if ( (szString[0] & 0xE0) == 0xC0) return szString + 2; else if ( (szString[0] & 0xF0) == 0xE0) return szString + 3; else if ( (szString[0] & 0xF8) == 0xF0) return szString + 4; else return szString +1; } ```
Here is a really good explanation of what is going on here at the [Sorting it All Out blog](http://blogs.msdn.com/michkap/default.aspx): [Is CharNextExA broken?](http://www.siao2.com/2007/04/19/2190207.aspx). In short, CharNext is not designed to work with UTF8 strings.
As far as I can determine (google and experimentation), `CharNextExA` doesn't actually work with UTF-8, only supported multibyte encodings that use shorter lead/trail byte pairs or single byte characters. UTF-8 is a fairly regular encoding, there are a lot of libraries that will do what you want but it's also fairly easy to roll your own. Have a look in here [unicode.org](http://www.unicode.org/versions/Unicode5.0.0/ch03.pdf), particularly table 3-7 for valid sequence forms. ``` const char* NextUtf8( const char* in ) { if( in == NULL || *in == '\0' ) return in; unsigned char uc = static_cast<unsigned char>(*in); if( uc < 0x80 ) { return in + 1; } else if( uc < 0xc2 ) { // throw error? invalid lead byte } else if( uc < 0xe0 ) { // check in[1] for validity( 0x80 .. 0xBF ) return in + 2; } else if( uc < 0xe1 ) { // check in[1] for validity( 0xA0 .. 0xBF ) // check in[2] for validity( 0x80 .. 0xBF ) return in + 3; } else // ... etc. // ... } ```
How do I use CharNext in the Windows API properly?
[ "", "c++", "unicode", "multibyte", "" ]
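The lead-byte tests in the asker's `CharMoveNext` (and in the rejected answer's sketch) are easy to sanity-check; this Python port of the same bit tests reproduces exactly the 6-vs-18 discrepancy from the question:

```python
def utf8_char_len(lead: int) -> int:
    """Sequence length implied by a UTF-8 lead byte, mirroring the
    bit tests in the question's CharMoveNext routine."""
    if lead & 0x80 == 0x00:
        return 1          # 0xxxxxxx: single-byte ASCII
    if lead & 0xE0 == 0xC0:
        return 2          # 110xxxxx: two-byte sequence
    if lead & 0xF0 == 0xE0:
        return 3          # 1110xxxx: three-byte sequence
    if lead & 0xF8 == 0xF0:
        return 4          # 11110xxx: four-byte sequence
    return 1              # continuation/invalid byte: resync by one

def count_utf8_chars(data: bytes) -> int:
    i = count = 0
    while i < len(data):
        i += utf8_char_len(data[i])
        count += 1
    return count
```

The six CJK characters from the question each encode to three UTF-8 bytes — 18 bytes total, which is the "18 characters" a byte-at-a-time walk reports, while the lead-byte walk reports 6.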
In my web application I am initializing logging using the `PropertyConfigurator.configure(filepath)` function in a servlet which is loaded on startup. ``` String log4jfile = getInitParameter("log4j-init-file"); if (log4jfile != null) { String propfile = getServletContext().getRealPath(log4jfile); PropertyConfigurator.configure(propfile); } ``` I have placed my **log4j.properties** file in the **WEB-INF/classes** directory and am specifying the file as ``` log4j.appender.rollingFile.File=${catalina.home}/logs/myapp.log ``` My root logger was initially configured as: ``` log4j.rootLogger=DEBUG, rollingFile,console ``` (I believe the console attribute also logs the statements into catalina.out) In Windows the logging seems to occur normally, with the statements appearing in the log file. In Unix my logging statements are being redirected to catalina.out instead of my actual log file (which contains only logs from the initialization servlet). All subsequent logs show up in catalina.out. This leads me to believe that my log4j is not getting set up properly, but I am unable to figure out the cause. Could anyone help me figure out where the problem might be? Thanks in advance, Fell
It may be a permissions problem - check the owners and permissions of the ${catalina.home}/logs directory.
It will depend on the server you are using. Is it Tomcat? Is it the same version on Windows and on Linux?
log4j logging to catalina.out in unix instead of log file
[ "", "java", "log4j", "" ]
Basically this program searches a .txt file for a word and if it finds it, it prints the line and the line number. Here is what I have so far. Code: ``` #include "std_lib_facilities.h" int main() { string findword; cout << "Enter word to search for.\n"; cin >> findword; char filename[20]; cout << "Enter file to search in.\n"; cin >> filename; ifstream ist(filename); string line; string word; int linecounter = 1; while(getline(ist, line)) { if(line.find(findword) != string::npos){ cout << line << " " << linecounter << endl;} ++linecounter; } keep_window_open(); } ``` Solved.
You're looking for [`find`](http://www.cppreference.com/wiki/string/find): ``` if (line.find(findword) != string::npos) { ... } ```
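For comparison, here is the same loop sketched in Python — a substring match like `string::find`, so searching for "the" would also hit "there" (switch to a regex with `\b` if whole words are wanted):

```python
# Scan an iterable of lines and report each line containing the search
# word, together with its 1-based line number (mirrors the C++ loop).
def find_word(lines, word):
    hits = []
    for number, line in enumerate(lines, start=1):
        if word in line:  # substring test, like string::find != npos
            hits.append((number, line))
    return hits
```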
I would do as you suggested and break the lines into words or tokens delimited by whitespace and then search for desired keywords amongst the list of tokens.
Reading parts of a line (getline())
[ "", "c++", "" ]
I'm implementing a web application that is written in C++ using CGI. Is it possible to use a 3D drawn GUI that also has animations? Should I just include some kind of mechanism that generates animated gifs and uses an [image map](http://www.w3schools.com/TAGS/tag_map.asp)? Is there another, more elegant way of doing this? EDIT: So it sums up to Java or Silverlight or Flash 10. Is Flash 10 common already? If not, is Java a better choice since it's more widespread?
First: from some of your comments, it appears you're not planning to actually use your web application in a browser. If I'm wrong, see below. If I'm right, then you're perfectly fine to write whatever UI you want using whatever technology you want and connect to your web application via that UI program. There are issues you'll have to deal with: what platform, what technology, etc. But you'll have no problems connecting to your web application using such a UI; just follow the HTTP protocols in your socket programming or use a framework that does it for you. That said, if you're planning to do 3d in-browser, you should look into the [Google O3D API](http://code.google.com/apis/o3d/). It's a browser plugin, but it should give you everything you need to do 3d in-browser using the GPU rather than software rendering like Flash and Silverlight do. If you're not willing to use a browser plugin and you're not trying to do your own UI program, then your only other option is to use Canvas and Chrome Experiments as @Kitsune has suggested.
I'd recommend taking a look at the [Canvas](http://en.wikipedia.org/wiki/Canvas_(HTML_element)) HTML tag, as well as [Chrome Experiments](http://www.chromeexperiments.com/). Those require a pretty modern browser with a very good JavaScript implementation, but some are rather impressive. Other options include Adobe Flash and Microsoft's Silverlight.
Generating a 3D GUI through CGI
[ "", "c++", "web-applications", "cgi", "3d", "" ]
Should be a simple question, I'm just not familiar with PHP syntax and I am wondering if the following code is safe from SQL injection attacks?: ``` private function _getAllIngredients($animal = null, $type = null) { $ingredients = null; if($animal != null && $type != null) { $query = 'SELECT id, name, brief_description, description, food_type, ingredient_type, image, price, created_on, updated_on FROM ingredient WHERE food_type = \'' . $animal . '\' AND ingredient_type =\'' . $type . '\';'; $rows = $this->query($query); if(count($rows) > 0) { ``` etc, etc, etc I've googled around a bit and it seems that injection safe code seems to look different than the WHERE food\_type = \'' . $animal . '\' syntax used here. Sorry, I don't know what version of PHP or MySQL that is being used here, or if any 3rd party libraries are being used, can anyone with expertise offer any input? ## UPDATE What purpose does the \ serve in the statement?: ``` WHERE food_type = \'' . $animal . '\' ``` In my googling, I came across many references to `mysql_real_escape_string`...is this a function to protect from SQL Injection and other nastiness? The class declaration is: ``` class DCIngredient extends SSDataController ``` So is it conceivable that `mysql_real_escape_string` is included in there? Should I be asking to see the implementation of SSDataController?
**Yes**, this code is vulnerable to SQL injection. The "\" escapes only the quote character so that PHP does not treat it as the end of your (SQL) string literal; it does nothing to sanitize `$animal` or `$type`. Also, since you deliver the fully assembled SQL string to the *SSDataController* class, it is no longer possible to prevent the attack at that point once malicious input has been concatenated in. So **the class SSDataController is vulnerable by design.** Try something safer like this: ``` $db_connection = new mysqli("host", "user", "pass", "db"); $statement = $db_connection->prepare("SELECT id, name, brief_description, description, food_type, ingredient_type, image, price, created_on, updated_on FROM ingredient WHERE food_type = ? AND ingredient_type = ?"); $statement->bind_param("ss", $animal, $type); $statement->execute(); ``` By using the *bind_param* method you specify the type of each parameter (s for string, i for integer, etc.), and you will never have to think about SQL injection again.
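The prepared-statement idea carries over to any parameterized database API; a quick stdlib-Python illustration with sqlite3 (`?` placeholders; the table mirrors the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ingredient (name TEXT, food_type TEXT, ingredient_type TEXT)")
conn.execute("INSERT INTO ingredient VALUES ('kibble', 'dog', 'dry')")
conn.commit()

def get_ingredients(animal, kind):
    # Placeholders keep the values out of the SQL text entirely, so a
    # hostile input is treated as data, never as SQL.
    cur = conn.execute(
        "SELECT name FROM ingredient WHERE food_type = ? AND ingredient_type = ?",
        (animal, kind),
    )
    return [row[0] for row in cur]
```

Passing `"dog'; DROP TABLE ingredient; --"` as `animal` simply matches no rows; the table survives.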
`$animal` can be a string which contains `'; drop table blah; --`, so yes, this is vulnerable to SQL injection. You should look into using prepared statements, where you bind parameters, so that injection cannot occur: <https://www.php.net/pdo.prepared-statements>
Is this PHP/MySQL statement vulnerable to SQL injection?
[ "", "php", "mysql", "sql-injection", "" ]
From cplusplus.com: ``` template < class Key, class Compare = less<Key>, class Allocator = allocator<Key> > class set; ``` "Compare: Comparison class: A class that takes two arguments of the same type as the container elements and returns a bool. The expression comp(a,b), where comp is an object of this comparison class and a and b are elements of the container, shall return true if a is to be placed at an earlier position than b in a strict weak ordering operation. This can either be a class implementing a function call operator or a pointer to a function (see constructor for an example). This defaults to less, which returns the same as applying the less-than operator (a`<`b). The set object uses this expression to determine the position of the elements in the container. All elements in a set container are ordered following this rule at all times." Given that the comparison class is used to decide which of the two objects is "smaller" or "less", how does the class check whether two elements are equal (e.g. to prevent insertion of the same element twice)? I can imagine two approaches here: one would be calling (a == b) in the background, but not providing the option to override this comparison (as with the default less`<Key>`)doesn't seem too STL-ish to me. The other would be the assumption that (a == b) == !(a < b) && !(b < a) ; that is, two elements are considered equal if neither is "less" than the other, but somehow this doesn't feel right to me either, considering that the comparison can be an arbitrarily complex bool functor between objects of an arbitrarily complex class. So how is it really done?
Not an exact duplicate, but the first answer [here](https://stackoverflow.com/questions/1114856/stdset-with-user-defined-type-how-to-ensure-no-duplicates) answers your question. Your second guess as to the behaviour is correct.
Associative containers in the standard library are defined in terms of *equivalence of keys*, not equality *per se*. As not all `set` and `map` instances use `less`, but may use a generic comparison operator it's necessary to define equivalence in terms of this one comparison function rather then attempting to introduce a separate equality concept. In general, two keys (`k1` and `k2`) in an associative container using a comparison function `comp` are equivalent if and only if: ``` comp( k1, k2 ) == false && comp( k2, k1 ) == false ``` In a container using `std::less` for types that don't have a specific std::less specialization, this means the same as: ``` !(k1 < k2) && !(k2 < k1) ```
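The second guess can be checked directly; here is the `!(comp(a,b)) && !(comp(b,a))` equivalence test written out in Python, using a case-insensitive comparator to stand in for the "arbitrarily complex functor":

```python
# A strict weak ordering that ignores case: 'abc' and 'ABC' are
# *equivalent* (neither orders before the other) even though they are
# not equal — exactly the rule a set uses to reject duplicates.
def comp(a, b):
    return a.lower() < b.lower()

def equivalent(a, b):
    return not comp(a, b) and not comp(b, a)
```

A set built on `comp` would therefore refuse to hold both `'abc'` and `'ABC'`, even though `==` distinguishes them.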
When are two elements of an STL set considered identical?
[ "", "c++", "stl", "set", "" ]
I have the following table schema in Access (2007). > Date eventDate, bit(boolean) lunch, > bit(boolean) snacks, bit(boolean) Tea I would like to have a single query that gives the count of lunch, snacks and tea (each as a column) for a given month (using the eventDate). Thanks for the help.
In Access, True is -1 and False is 0. So you can use the absolute value of the sum of those values to indicate how many were True. ``` SELECT Abs(Sum(lunch)) AS SumofLunch, Abs(Sum(snacks)) AS SumofSnacks, Abs(Sum(Tea)) AS SumofTea FROM YourTable WHERE eventDate >= #2009/08/01# And eventDate < #2009/09/01#; ```
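The same counting trick, illustrated with stdlib sqlite3. SQLite stores booleans as 1/0, so a plain SUM counts the Trues and no Abs() is needed; Access stores True as -1, which is why the Abs() appears above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meals (eventDate TEXT, lunch INTEGER, snacks INTEGER, tea INTEGER)")
conn.executemany(
    "INSERT INTO meals VALUES (?, ?, ?, ?)",
    [
        ("2009-08-03", 1, 0, 1),
        ("2009-08-15", 1, 1, 0),
        ("2009-09-01", 1, 1, 1),  # outside the August window
    ],
)

# One query, one column per meal, restricted to a single month.
row = conn.execute(
    "SELECT SUM(lunch), SUM(snacks), SUM(tea) FROM meals "
    "WHERE eventDate >= ? AND eventDate < ?",
    ("2009-08-01", "2009-09-01"),
).fetchone()
```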
Try: ``` SELECT SUM(ABS(lunch)) AS lunchCount, SUM(ABS(snacks)) AS snackCount, SUM(ABS(tea)) AS teaCount FROM <TableName> WHERE eventDate >= #1/1/2009# AND eventDate < #2/1/2009# ```
Access database query for using month from a date
[ "", "sql", "ms-access", "" ]
The recent [ATL security update](http://msdn.microsoft.com/en-us/visualc/ee309358.aspx) updated the C++ runtimes to version 8.0.50727.4053. Unfortunately, this update broke one of our DLLs that dynamically links to the runtime, as we only have 8.0.50727.762 available to us on the target machine (we don't even use ATL). Is there a way we can get Visual Studio to dynamically link to the older DLL? I'd like to avoid statically linking if possible.
Another solution is forcing VS to link against the old versions of the WinSxS DLLs as explained in [this article](http://tedwvc.wordpress.com/2009/08/10/avoiding-problems-with-vc2005-sp1-security-update-kb971090/).
Copy the requisite DLL versions directly into your executable's directory; it is searched first during dynamic linking.
ATL Security update broke compatibility for DLLs depending on the older version
[ "", "c++", "visual-studio", "runtime", "atl", "" ]
I am looking to create a simple php script that based on the URI, it will call a certain function. Instead of having a bunch of if statements, I would like to be able to visit: /dev/view/posts/ and it would call a 'posts' function I have created in the PHP script. Any help would be greatly appreciated. Thanks!
Take a look at the [`call_user_func`](https://www.php.net/call_user_func) function documentation. ``` $functions['/dev/view/posts'] = 'function_a'; $functions['/dev/view/comments'] = 'function_b'; $functions['/dev/view/notes'] = 'function_c'; $uri = '/dev/view/comments'; call_user_func($functions[$uri]); ```
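The dispatch table behind `call_user_func` maps directly onto other languages too; a Python sketch (the URIs and handler names are illustrative):

```python
# Functions are first-class values, so the URI-to-handler table is just
# a dict of callables — the same pattern as the PHP $functions array.
def posts():
    return "posts page"

def comments():
    return "comments page"

handlers = {
    "/dev/view/posts": posts,
    "/dev/view/comments": comments,
}

def dispatch(uri):
    handler = handlers.get(uri)
    if handler is None:
        return "404"
    return handler()
```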
Are you using a framework? They do this sort of thing for you. You need to use [mod\_rewrite](http://www.addedbytes.com/apache/url-rewriting-for-beginners/) in Apache to do this. Basically you take /dev/view/posts and rewrite it to /dev/view.php?page=posts ``` RewriteEngine On RewriteRule ^/dev/view/posts/(.*)$ /dev/view?page=$1 ``` In view.php: ``` switch($_REQUEST['page']) { case 'posts': // call posts echo posts(); break; } ``` **EDIT** made this call whatever function is named in "page". You probably want to use a framework to do this because there are security implications, but very simply you can do this: ``` if (array_key_exists('page',$_REQUEST)) { $f = $_REQUEST['page']; if (is_callable($f)) { call_user_func($f); } } ``` Note there are MUCH better ways of doing this! You should be using a framework!!!
Call PHP Function based on URI
[ "", "php", "" ]
I need to add some extra text to an existing PDF using Python. What is the best way to go about this, and what extra modules will I need to install? Note: Ideally I would like to be able to run this on both Windows and Linux, but at a push Linux only will do. Edit: [pypdf](https://pypi.org/project/pypdf/) and [ReportLab](https://www.reportlab.com/) look good, but neither one will allow me to edit an existing PDF. Are there any other options?
I know this is an older post, but I spent a long time trying to find a solution. I came across a decent one using only ReportLab and PyPDF so I thought I'd share: 1. read your PDF using `PdfFileReader()`, we'll call this *input* 2. create a new pdf containing your text to add using ReportLab, save this as a string object 3. read the string object using `PdfFileReader()`, we'll call this *text* 4. create a new PDF object using `PdfFileWriter()`, we'll call this *output* 5. iterate through *input* and apply `.mergePage(*text*.getPage(0))` for each page you want the text added to, then use `output.addPage()` to add the modified pages to a new document This works well for simple text additions. See PyPDF's sample for watermarking a document. Here is some code to answer the question below: ``` packet = StringIO.StringIO() can = canvas.Canvas(packet, pagesize=letter) <do something with canvas> can.save() packet.seek(0) input = PdfFileReader(packet) ``` From here you can merge the pages of the input file with another document.
# Example for [Python 2.7]: --- ``` from pyPdf import PdfFileWriter, PdfFileReader import StringIO from reportlab.pdfgen import canvas from reportlab.lib.pagesizes import letter packet = StringIO.StringIO() can = canvas.Canvas(packet, pagesize=letter) can.drawString(10, 100, "Hello world") can.save() #move to the beginning of the StringIO buffer packet.seek(0) # create a new PDF with Reportlab new_pdf = PdfFileReader(packet) # read your existing PDF existing_pdf = PdfFileReader(file("original.pdf", "rb")) output = PdfFileWriter() # add the "watermark" (which is the new pdf) on the existing page page = existing_pdf.getPage(0) page.mergePage(new_pdf.getPage(0)) output.addPage(page) # finally, write "output" to a real file outputStream = file("destination.pdf", "wb") output.write(outputStream) outputStream.close() ``` # Example for Python 3.x: --- ``` from PyPDF2 import PdfFileWriter, PdfFileReader import io from reportlab.pdfgen import canvas from reportlab.lib.pagesizes import letter packet = io.BytesIO() can = canvas.Canvas(packet, pagesize=letter) can.drawString(10, 100, "Hello world") can.save() #move to the beginning of the StringIO buffer packet.seek(0) # create a new PDF with Reportlab new_pdf = PdfFileReader(packet) # read your existing PDF existing_pdf = PdfFileReader(open("original.pdf", "rb")) output = PdfFileWriter() # add the "watermark" (which is the new pdf) on the existing page page = existing_pdf.pages[0] page.merge_page(new_pdf.pages[0]) output.add_page(page) # finally, write "output" to a real file output_stream = open("destination.pdf", "wb") output.write(output_stream) output_stream.close() ```
Add text to Existing PDF using Python
[ "", "python", "pdf", "" ]
I need to write some code that will search and replace whole words in a string that are outside HTML tags. So if I have this string: ``` string content = "the brown fox jumped over <b>the</b> lazy dog over there"; string keyword = "the"; ``` I need to do something like: ``` if (content.ToLower().Contains(keyword.ToLower())) content = content.Replace(keyword, String.Format("<span style=\"background-color:yellow;\">{0}</span>", keyword)); ``` but I don't want to replace the "the" in the bold tags or the "the" in "there", just the first "the".
Try this: ``` content = Regex.Replace(content, "(?<!>)" + keyword + @"(?!(<|\w))", "<span blah...>" + keyword + "</span>"); ``` **Edit:** I fixed the "these" case, but not the case where *more than* the keyword is wrapped in HTML, e.g., "fox jumped **over the lazy** dog." What you're asking for is going to be nearly impossible with RegEx and normal, everyday HTML, because to know if you're "inside" a tag, you would have to "pair" each start and end tag, and ignore tags that are intended to be self-closing (BR and IMG, for instance). If this is merely eye candy for a web site, I suggest going the other route: fix your CSS so the SPAN you are adding only *impacts* the HTML outside of a tag. For example: ``` content = content.Replace("the", "<span class=\"highlight\">the</span>"); ``` Then, in your CSS: ``` span.highlight { background-color: yellow; } b span.highlight, i span.highlight, em span.highlight, strong span.highlight, p span.highlight, blockquote span.highlight { background: none; } ``` Just add an exclusion for each HTML tag whose contents should not be highlighted.
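For what it's worth, the "pair each start and end tag" bookkeeping is less painful with a streaming parser. Here is a sketch in Python's stdlib `html.parser` (the `<mark>` wrapper is illustrative) that rewrites whole words only in text not enclosed by any element, so `<b>the</b>` and "there" are left alone:

```python
import re
from html.parser import HTMLParser

# Walk the document, track how many elements are currently open, and
# only rewrite character data at depth 0. \b gives whole-word matching.
class Highlighter(HTMLParser):
    def __init__(self, keyword):
        super().__init__()
        self.pattern = re.compile(r"\b%s\b" % re.escape(keyword), re.IGNORECASE)
        self.depth = 0
        self.out = []

    def handle_starttag(self, tag, attrs):
        self.depth += 1
        self.out.append(self.get_starttag_text())  # re-emit tag verbatim

    def handle_endtag(self, tag):
        self.depth -= 1
        self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if self.depth == 0:  # only touch text outside every element
            data = self.pattern.sub(r"<mark>\g<0></mark>", data)
        self.out.append(data)

def highlight(html, keyword):
    parser = Highlighter(keyword)
    parser.feed(html)
    parser.close()
    return "".join(parser.out)
```

Real-world HTML (unclosed `<p>`, self-closing `<br>`, etc.) would need extra handling, as the answer above warns; this is a sketch for well-nested inline markup like the question's example.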
You can use [this](http://www.codeproject.com/KB/cs/TagBasedHtmlParser.aspx) library to parse your HTML and replace only the words that are not inside any HTML tag. To replace only the word "the" and not "there", use `Regex.Replace` with a pattern like `"the\s+"` instead of a plain string replace.
Search and replace non-HTML content
[ "", "c#", "html", "string", "" ]
Is there a method that tests if 2 URLs are equal, i.e. point to the same place? I am not talking about 2 URLs with different domain names pointing to the same IP address, but, for example, 2 URLs that point to the same .aspx page: * <http://example.com/Products/Default.aspx?A=B&C=D&E=F> is equal to these: * <http://example.com/Products/Default.aspx> * <http://example.com/Products/> * `~/Products/Default.aspx` * `~/Products/` Note/assumptions 1. QueryString Values are Ignored 2. ASP.NET (Pref C#) 3. Default.aspx is the default page **----UPDATE----** This is a very crude method that tests a URL to see if it matches the current URL: I tried creating a `new Uri()` with both the local and check URLs but don't know if that works, and went down the string-checking avenue. The implementation of the SiteMapProvider skips this step if the URL starts with "HTTP", as this assumes an external URL. Since I have a SaaS framework that will always ensure relative paths (as these can be on different subdomains) it is easier to strip things down. Any comments on optimization? I guess for a start we can pass in a variable containing the current URL? Not sure of the overhead of calling `HttpContext.Current.Request.Url.LocalPath` many times?
``` /// <summary> /// Assumes URL is relative aspx page or folder path /// </summary> /// <param name="url"></param> /// <returns></returns> public static bool CurrentURLMatch(string url) { string localURL = HttpContext.Current.Request.Url.LocalPath.ToLower(); if (HttpContext.Current.Request.Url.Host == "localhost") { localURL = localURL.Substring(localURL.IndexOf('/') + 1); localURL = localURL.Substring(localURL.IndexOf('/')); } string compareURL = url.ToLower(); // Remove QueryString Values if (localURL.Contains("?")) { localURL = localURL.Split('?')[0]; } if (compareURL.Contains("?")) { compareURL = compareURL.Split('?')[0]; } // Remove Fragments if (localURL.Contains("#")) { localURL = localURL.Split('#')[0]; } if (compareURL.Contains("#")) { compareURL = compareURL.Split('#')[0]; } // Prepare End of Local URL if (!localURL.Contains("aspx")) { if (!localURL.EndsWith("/")) { localURL = String.Concat(localURL, "/"); } } // Prepare End of Compare URL if (!compareURL.Contains("aspx")) { if (!compareURL.EndsWith("/")) { compareURL = String.Concat(compareURL, "/"); } } if (localURL.EndsWith(@"/")) { localURL = String.Concat(localURL, "Default.aspx"); } if (compareURL.EndsWith(@"/")) { compareURL = String.Concat(compareURL, "Default.aspx"); } if (compareURL.Contains(@"//")) { compareURL = compareURL.Replace(@"//", String.Empty); compareURL = compareURL.Substring(compareURL.IndexOf("/") + 1); } compareURL = compareURL.Replace("~", String.Empty); if (localURL == compareURL) { return true; } return false; } ```
You might be looking for [URL normalization](http://en.wikipedia.org/wiki/URL_normalization) techniques. They might be a good starting point :) Once you have normalized the URLs, you simply need to check if they are equal (keep in mind your assumptions, for instance, you discard the querystring).
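A minimal stdlib-Python sketch of that normalization idea, under this question's assumptions (query string ignored, Default.aspx as the default document, `~` app-relative prefix stripped):

```python
from urllib.parse import urlparse

# Keep only the lowercased path; drop query string and fragment; treat
# a trailing slash as the default document; strip a leading "~" from
# app-relative paths. ("default.aspx" is this question's assumption.)
def normalize(url):
    path = urlparse(url.lstrip("~")).path.lower()
    if path.endswith("/"):
        path += "default.aspx"
    return path

def same_page(url1, url2):
    return normalize(url1) == normalize(url2)
```

Two URLs then compare equal exactly when their normalized paths match, which covers all four forms listed in the question.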
for the record, here is the translation of <http://en.wikipedia.org/wiki/URL%5Fnormalization> to C#: ``` using System; using System.Web; namespace UrlNormalizationTest { public static class UrlNormalization { public static bool AreTheSameUrls(this string url1, string url2) { url1 = url1.NormalizeUrl(); url2 = url2.NormalizeUrl(); return url1.Equals(url2); } public static bool AreTheSameUrls(this Uri uri1, Uri uri2) { var url1 = uri1.NormalizeUrl(); var url2 = uri2.NormalizeUrl(); return url1.Equals(url2); } public static string[] DefaultDirectoryIndexes = new[] { "default.asp", "default.aspx", "index.htm", "index.html", "index.php" }; public static string NormalizeUrl(this Uri uri) { var url = urlToLower(uri); url = limitProtocols(url); url = removeDefaultDirectoryIndexes(url); url = removeTheFragment(url); url = removeDuplicateSlashes(url); url = addWww(url); url = removeFeedburnerPart(url); return removeTrailingSlashAndEmptyQuery(url); } public static string NormalizeUrl(this string url) { return NormalizeUrl(new Uri(url)); } private static string removeFeedburnerPart(string url) { var idx = url.IndexOf("utm_source=", StringComparison.Ordinal); return idx == -1 ? url : url.Substring(0, idx - 1); } private static string addWww(string url) { if (new Uri(url).Host.Split('.').Length == 2 && !url.Contains("://www.")) { return url.Replace("://", "://www."); } return url; } private static string removeDuplicateSlashes(string url) { var path = new Uri(url).AbsolutePath; return path.Contains("//") ? url.Replace(path, path.Replace("//", "/")) : url; } private static string limitProtocols(string url) { return new Uri(url).Scheme == "https" ? url.Replace("https://", "http://") : url; } private static string removeTheFragment(string url) { var fragment = new Uri(url).Fragment; return string.IsNullOrWhiteSpace(fragment) ? 
url : url.Replace(fragment, string.Empty); } private static string urlToLower(Uri uri) { return HttpUtility.UrlDecode(uri.AbsoluteUri.ToLowerInvariant()); } private static string removeTrailingSlashAndEmptyQuery(string url) { return url .TrimEnd(new[] { '?' }) .TrimEnd(new[] { '/' }); } private static string removeDefaultDirectoryIndexes(string url) { foreach (var index in DefaultDirectoryIndexes) { if (url.EndsWith(index)) { url = url.TrimEnd(index.ToCharArray()); break; } } return url; } } } ``` With the following tests: ``` using NUnit.Framework; using UrlNormalizationTest; namespace UrlNormalization.Tests { [TestFixture] public class UnitTests { [Test] public void Test1ConvertingTheSchemeAndHostToLowercase() { var url1 = "HTTP://www.Example.com/".NormalizeUrl(); var url2 = "http://www.example.com/".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test2CapitalizingLettersInEscapeSequences() { var url1 = "http://www.example.com/a%c2%b1b".NormalizeUrl(); var url2 = "http://www.example.com/a%C2%B1b".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test3DecodingPercentEncodedOctetsOfUnreservedCharacters() { var url1 = "http://www.example.com/%7Eusername/".NormalizeUrl(); var url2 = "http://www.example.com/~username/".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test4RemovingTheDefaultPort() { var url1 = "http://www.example.com:80/bar.html".NormalizeUrl(); var url2 = "http://www.example.com/bar.html".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test5AddingTrailing() { var url1 = "http://www.example.com/alice".NormalizeUrl(); var url2 = "http://www.example.com/alice/?".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test6RemovingDotSegments() { var url1 = "http://www.example.com/../a/b/../c/./d.html".NormalizeUrl(); var url2 = "http://www.example.com/a/c/d.html".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test7RemovingDirectoryIndex1() { var url1 = 
"http://www.example.com/default.asp".NormalizeUrl(); var url2 = "http://www.example.com/".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test7RemovingDirectoryIndex2() { var url1 = "http://www.example.com/default.asp?id=1".NormalizeUrl(); var url2 = "http://www.example.com/default.asp?id=1".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test7RemovingDirectoryIndex3() { var url1 = "http://www.example.com/a/index.html".NormalizeUrl(); var url2 = "http://www.example.com/a/".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test8RemovingTheFragment() { var url1 = "http://www.example.com/bar.html#section1".NormalizeUrl(); var url2 = "http://www.example.com/bar.html".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test9LimitingProtocols() { var url1 = "https://www.example.com/".NormalizeUrl(); var url2 = "http://www.example.com/".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test10RemovingDuplicateSlashes() { var url1 = "http://www.example.com/foo//bar.html".NormalizeUrl(); var url2 = "http://www.example.com/foo/bar.html".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test11AddWww() { var url1 = "http://example.com/".NormalizeUrl(); var url2 = "http://www.example.com".NormalizeUrl(); Assert.AreEqual(url1, url2); } [Test] public void Test12RemoveFeedburnerPart() { var url1 = "http://site.net/2013/02/firefox-19-released/?utm_source=rss&utm_medium=rss&utm_campaign=firefox-19-released".NormalizeUrl(); var url2 = "http://site.net/2013/02/firefox-19-released".NormalizeUrl(); Assert.AreEqual(url1, url2); } } } ```
Check if 2 URLs are equal
[ "", "c#", "asp.net", "" ]
Are there any IDEs on PC which will allow you to view your changes live as you are typing? / Can any IDEs you know of be set up to do that? For both HTML and PHP ideally?
You could stick a meta refresh in your `<head>`, but that will get ugly if you have a lot of PHP parse errors. You could put the page you want to test inside a frame and have some JavaScript refresh that frame every few seconds too. Then you could use any IDE and browser you want!
Zend will do this. But be forewarned that seeing a bunch of unfinished PHP code is very ugly. I'd suggest sticking with refreshing your browser. Oh, and if you actually do anything with that PHP code, like request an RSS feed, you would be making hundreds of calls if it ran the PHP code often.
Live HTML & PHP editor/IDE for PC?
[ "", "php", "html", "ide", "" ]
I know you can use window.print() to print the current page... but what I want to know is can I build a document using javascript in order to populate it with data and print it off? Just like you can have html/xml as a javascript object, can you do something similar to this: ``` var name = "Matt"; var htmlDocumentToPrint = "<html><body><div style='width:300px; height:20px; background-color:#000; text-align:center;'>My name is " + name + "</div></body></html>"; htmlDocumentToPrint.print(); ``` I don't really care about adding colors all that much-- just want to format a document, populate it with data, and print it off. Is this possible?
[Print()](https://developer.mozilla.org/en-US/docs/Web/API/window.print) is a method on the window object. If you create a document in a window using javascript, then call print on that window object, it should work. ``` <script type="text/javascript"> var myWindow = window.open('','','width=200,height=100') myWindow.document.write("This is 'myWindow'") myWindow.print(); </script> ``` Example modified from [w3schools.com](http://www.w3schools.com) window open example.
My first thought: You could create an iframe programmatically, assign the HTML to be printed, call the `print()` function on the context of the iframe.contentWindow, and then remove the iframe from the DOM: ``` function printHTML(input){ var iframe = document.createElement("iframe"); // create the element document.body.appendChild(iframe); // insert the element to the DOM iframe.contentWindow.document.write(input); // write the HTML to be printed iframe.contentWindow.print(); // print it document.body.removeChild(iframe); // remove the iframe from the DOM } printHTML('<h1>Test!</h1>'); ``` You can test the above snippet [here](http://jsbin.com/ezipu/edit).
How does the Javascript print function work? Can I create a document using javascript and print it off?
[ "", "javascript", "html", "dom", "printing", "" ]
If I have a map like this: ``` std::map<char, std::vector<char> > m; m['A'].push_back('a'); m['A'].push_back('b'); m['A'].push_back('c'); m['B'].push_back('h'); m['B'].push_back('f'); ``` How would I find and delete 'b'? Is this possible?
Sure, use an iterator: ``` for (std::map<char, std::vector<char> >::iterator i = m.begin(); i != m.end(); ++i) { std::vector<char>::iterator j = std::find(i->second.begin(), i->second.end(), 'b'); if (j != i->second.end()) { i->second.erase(j); } } ```
If you expect there can be multiple 'b's in the vector, I would write this way. ``` for (std::map<char, std::vector<char> >::iterator i = m.begin(); i != m.end(); ++i) { i->second.erase(std::remove(i->second.begin(), i->second.end(), 'b'), i->second.end()); } ```
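For comparison, the same clean-up over a Python dict of lists — rebuilding each list without the unwanted value, in the spirit of the erase/remove idiom, which also handles duplicates:

```python
# Remove every occurrence of `value` from each list in the mapping.
def remove_value(mapping, value):
    for key, values in mapping.items():
        mapping[key] = [v for v in values if v != value]

m = {"A": ["a", "b", "c"], "B": ["h", "f", "b"]}
remove_value(m, "b")
```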
c++ map question
[ "", "c++", "" ]
Does anyone have experience profiling a Python/SQLAlchemy app? And what are the best ways to find bottlenecks and design flaws? We have a Python application where the database layer is handled by SQLAlchemy. The application uses a batch design, so a lot of database requests are done sequentially and in a limited timespan. It currently takes a bit too long to run, so some optimization is needed. We don't use the ORM functionality, and the database is PostgreSQL.
Sometimes just plain SQL logging (enabled via python's logging module or via the `echo=True` argument on `create_engine()`) can give you an idea how long things are taking. For example if you log something right after a SQL operation, you'd see something like this in your log: ``` 17:37:48,325 INFO [sqlalchemy.engine.base.Engine.0x...048c] SELECT ... 17:37:48,326 INFO [sqlalchemy.engine.base.Engine.0x...048c] {<params>} 17:37:48,660 DEBUG [myapp.somemessage] ``` if you logged `myapp.somemessage` right after the operation, you know it took 334ms to complete the SQL part of things. Logging SQL will also illustrate if dozens/hundreds of queries are being issued which could be better organized into much fewer queries via joins. When using the SQLAlchemy ORM, the "eager loading" feature is provided to partially (`contains_eager()`) or fully (`eagerload()`, `eagerload_all()`) automate this activity, but without the ORM it just means to use joins so that results across multiple tables can be loaded in one result set instead of multiplying numbers of queries as more depth is added (i.e. `r + r*r2 + r*r2*r3` ...) If logging reveals that individual queries are taking too long, you'd need a breakdown of how much time was spent within the database processing the query, sending results over the network, being handled by the DBAPI, and finally being received by SQLAlchemy's result set and/or ORM layer. Each of these stages can present their own individual bottlenecks, depending on specifics. For that you need to use profiling, such as cProfile or hotshot. 
Here is a decorator I use: ``` import cProfile as profiler import gc, pstats, time def profile(fn): def wrapper(*args, **kw): elapsed, stat_loader, result = _profile("foo.txt", fn, *args, **kw) stats = stat_loader() stats.sort_stats('cumulative') stats.print_stats() # uncomment this to see who's calling what # stats.print_callers() return result return wrapper def _profile(filename, fn, *args, **kw): load_stats = lambda: pstats.Stats(filename) gc.collect() began = time.time() profiler.runctx('result = fn(*args, **kw)', globals(), locals(), filename=filename) ended = time.time() return ended - began, load_stats, locals()['result'] ``` To profile a section of code, place it in a function with the decorator: ``` @profile def go(): return Session.query(FooClass).filter(FooClass.somevalue==8).all() myfoos = go() ``` The output of profiling can be used to give an idea where time is being spent. If for example you see all the time being spent within `cursor.execute()`, that's the low level DBAPI call to the database, and it means your query should be optimized, either by adding indexes or restructuring the query and/or underlying schema. For that task I would recommend using pgadmin along with its graphical EXPLAIN utility to see what kind of work the query is doing. If you see many thousands of calls related to fetching rows, it may mean your query is returning more rows than expected - a cartesian product as a result of an incomplete join can cause this issue. Yet another issue is time spent within type handling - a SQLAlchemy type such as `Unicode` will perform string encoding/decoding on bind parameters and result columns, which may not be needed in all cases. The output of a profile can be a little daunting but after some practice they are very easy to read. 
There was once someone on the mailing list claiming slowness, and after having him post the results of profiling, I was able to demonstrate that the speed problems were due to network latency - the time spent within cursor.execute() as well as all Python methods was very fast, whereas the majority of time was spent on socket.receive(). If you're feeling ambitious, there's also a more involved example of SQLAlchemy profiling within the SQLAlchemy unit tests, if you poke around <http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/test/aaa_profiling> . There, we have tests using decorators that assert a maximum number of method calls being used for particular operations, so that if something inefficient gets checked in, the tests will reveal it (it is important to note that in Python, function calls have the highest overhead of any operation, and the count of calls is more often than not nearly proportional to time spent). Of note are the "zoomark" tests, which use a fancy "SQL capturing" scheme that cuts out the overhead of the DBAPI from the equation - although that technique isn't really necessary for garden-variety profiling.
There's an extremely useful profiling recipe on the [SQLAlchemy wiki](http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Profiling) With a couple of minor modifications, ``` from sqlalchemy import event from sqlalchemy.engine import Engine import time import logging logging.basicConfig() logger = logging.getLogger("myapp.sqltime") logger.setLevel(logging.DEBUG) @event.listens_for(Engine, "before_cursor_execute") def before_cursor_execute(conn, cursor, statement, parameters, context, executemany): context._query_start_time = time.time() logger.debug("Start Query:\n%s" % statement) # Modification for StackOverflow answer: # Show parameters, which might be too verbose, depending on usage.. logger.debug("Parameters:\n%r" % (parameters,)) @event.listens_for(Engine, "after_cursor_execute") def after_cursor_execute(conn, cursor, statement, parameters, context, executemany): total = time.time() - context._query_start_time logger.debug("Query Complete!") # Modification for StackOverflow: times in milliseconds logger.debug("Total Time: %.02fms" % (total*1000)) if __name__ == '__main__': from sqlalchemy import * engine = create_engine('sqlite://') m1 = MetaData(engine) t1 = Table("sometable", m1, Column("id", Integer, primary_key=True), Column("data", String(255), nullable=False), ) conn = engine.connect() m1.create_all(conn) conn.execute( t1.insert(), [{"data":"entry %d" % x} for x in xrange(100000)] ) conn.execute( t1.select().where(t1.c.data.between("entry 25", "entry 7800")).order_by(desc(t1.c.data)) ) ``` Output is something like: ``` DEBUG:myapp.sqltime:Start Query: SELECT sometable.id, sometable.data FROM sometable WHERE sometable.data BETWEEN ? AND ? ORDER BY sometable.data DESC DEBUG:myapp.sqltime:Parameters: ('entry 25', 'entry 7800') DEBUG:myapp.sqltime:Query Complete! 
DEBUG:myapp.sqltime:Total Time: 410.46ms ``` Then if you find an oddly slow query, you could take the query string, format in the parameters (can be done with the `%` string-formatting operator, for psycopg2 at least), prefix it with "EXPLAIN ANALYZE" and shove the query plan output into <http://explain.depesz.com/> (found via [this good article on PostgreSQL performance](http://robots.thoughtbot.com/post/2638538135/postgresql-performance-considerations))
How can I profile a SQLAlchemy powered application?
[ "", "python", "sqlalchemy", "profiler", "" ]
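The decorator in the accepted answer above writes its stats to a file; the same idea can be sketched with nothing but the Python standard library, collecting stats in memory instead. Function and variable names here are illustrative, not from the original answer:

```python
import cProfile
import functools
import io
import pstats

def profile(fn):
    """Run fn under cProfile and print cumulative-time stats (illustrative sketch)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        prof.enable()
        try:
            result = fn(*args, **kwargs)
        finally:
            prof.disable()
        # Collect the stats into an in-memory stream instead of a temp file.
        stream = io.StringIO()
        stats = pstats.Stats(prof, stream=stream)
        stats.sort_stats("cumulative")
        stats.print_stats(10)  # show only the top 10 entries
        print(stream.getvalue())
        return result
    return wrapper

@profile
def go():
    # stand-in for a Session.query(...).all() call
    return sum(i * i for i in range(100_000))

total = go()
```

Reading the top `cumulative` entries tells you where the time goes - e.g. if most of it sits inside a DBAPI `execute` call, the query itself needs attention, exactly as the answer describes.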
I know that it's not safe to throw exceptions from destructors, but is it ever unsafe to throw exceptions from constructors? e.g. what happens for objects that are declared globally? A quick test with gcc and I get an abort, is that always guaranteed? What solution would you use to cater for that situation? Are there any situations where constructors can throw exceptions and not leave things how we expect. EDIT: I guess I should add that I'm trying to understand under what circumstances I could get a resource leak. Looks like the sensible thing to do is manually free up resources we've obtained part way through construction before throwing the exception. I've never needed to throw exceptions in constructors before today so trying to understand if there are any pitfalls. i.e. Is this also safe? ``` class P{ public: P() { // do stuff... if (error) throw exception } } dostuff(P *p){ // do something with P } ... try { dostuff(new P()) } catch(exception) { } ``` will the memory allocated to the object P be released? EDIT2: Forgot to mention that in this particular case dostuff is storing the reference to P in an output queue. P is actually a message and dostuff takes the message, routes it to the appropriate output queue and sends it. Essentially, once dostuff has hold of it, it gets released later in the innards of dostuff. I think I want to put an autoptr around P and call release on the autoptr after dostuff to prevent a memory leak, would that be correct?
Throwing exceptions from a constructor is a [good thing](http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.2). When something fails in a constructor, you have two options: * Maintain a "zombie" state, where the class exists but does nothing, or * Throw an exception. And maintaining zombie classes can be quite a hassle, when the real answer should have been, "this failed, now what?". According to the Standard at 3.6.2.4: > If construction or destruction of a non-local static object ends in throwing an uncaught exception, the result is to call terminate (18.6.3.3). Where terminate refers to [`std::terminate`](http://en.cppreference.com/w/cpp/error/terminate). --- Concerning your example, no. This is because you aren't using [RAII concepts](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization). When an exception is thrown, the stack will be unwound, which means all objects get their destructor's called as the code gets to the closest corresponding `catch` clause. A pointer doesn't have a destructor. Let's make a simple test case: ``` #include <string> int main(void) { try { std::string str = "Blah."; int *pi = new int; throw; delete pi; // cannot be reached } catch(...) { } } ``` Here, `str` will allocate memory, and copy "Blah." into it, and `pi` will be initialized to point to an integer in memory. When an exception is thrown, stack-unwinding begins. It will first "call" the pointer's destructor (do nothing), then `str`'s destructor, which will free the memory that was allocated to it. If you use RAII concepts, you'd use a smart pointer: ``` #include <memory> #include <string> int main(void) { try { std::string s = "Blah."; std::auto_ptr<int> pi(new int); throw; // no need to manually delete. } catch(...) { } } ``` Here, `pi`'s destructor will call `delete` and no memory will be leaked. 
This is why you should always wrap your pointers, and is the same reason we use `std::vector` rather than manually allocating, resizing, and freeing pointers. (Cleanliness and Safety) ### Edit I forgot to mention. You asked this: > I think I want to put an autoptr around P and call release on the autoptr after dostuff to prevent a memory leak, would that be correct? I didn't state it explicitly, and only implied it above, but the answer is **no**. All you have to do is place it inside of `auto_ptr` and when the time comes, it will be deleted automatically. Releasing it manually defeats the purpose of placing it in a container in the first place. I would also suggest you look at more advanced smart pointers, such as those in [boost](http://www.boost.org/doc/libs/1_39_0/libs/smart_ptr/smart_ptr.htm). An extraordinarily popular one is [`shared_ptr`](http://www.boost.org/doc/libs/1_39_0/libs/smart_ptr/shared_ptr.htm), which is [reference counted](http://en.wikipedia.org/wiki/Reference_counting), making it suitable for storage in containers and being copied around. (Unlike `auto_ptr`. Do *not* use `auto_ptr` in containers!)
As [Spence mentioned](https://stackoverflow.com/questions/1197566/is-it-ever-not-safe-to-throw-an-exception-in-a-constructor/1197577#1197577), throwing from a constructor (or allowing an exception to escape a constructor) risks leaking resources if the constructor is not written carefully to handle that case. This is one important reason why using RAII objects (like smart pointers) should be favored - they'll automatically handle the cleanup in the face of exceptions. If you have resources that require deleting or otherwise manually releasing, you need to make certain that they're cleaned up before the exception leaves. This is not always as easy as it might sound (and certainly not as easy as letting an RAII object handle it automatically). And don't forget, if you need to manually handle clean up for something that happens in the constructor's initialization list, you'll need to use the funky 'function-try-block' syntax: ``` C::C(int ii, double id) try : i(f(ii)), d(id) { //constructor function body } catch (...) { //handles exceptions thrown from the ctor-initializer //and from the constructor function body } ``` Also, remember that exception safety is the main (only??) reason that the 'swap' idiom gained widespread favor - it's an easy way to ensure that copy constructors don't leak or corrupt objects in the face of exceptions. So, the bottom line is that using exceptions to handle errors in constructors is fine, but it's not necessarily automatic.
Is it ever not safe to throw an exception in a constructor?
[ "", "c++", "" ]
I have a table of ratings that stores a user ID, object ID, and a score (+1 or -1). Now, when I want to display a list of objects with their total scores, the number of +1 votes, and the number of -1 votes. How can I do this efficiently without having to do SELECT COUNT(\*) FROM rating WHERE (score = +/-1 AND object\_id = ..)? That's two queries per object displayed, which is unacceptable. Is the database design reasonable?
While it doesn't address your question of reasonable design, here's a query that gets you both counts at once: ``` select sum(case when score = 1 then 1 else 0 end) 'positive' , sum(case when score = -1 then 1 else 0 end) 'negative' , objectId from ratings where objectId = @objectId ... group by objectId ```
``` select object_id, sum(case when score = 1 then 1 else 0 end) upvotes, sum(case when score = -1 then -1 else 0 end) downvotes, sum(score) from ratings group by object_id ``` Perhaps something like that.
Optimally querying a database of ratings?
[ "", "sql", "database", "database-design", "" ]
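The conditional-aggregation trick in both answers above is portable across databases. Here is a small self-contained sketch using SQLite through Python's `sqlite3` module (the table layout follows the question; the data values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rating (user_id INTEGER, object_id INTEGER, score INTEGER)")
conn.executemany(
    "INSERT INTO rating VALUES (?, ?, ?)",
    [(1, 10, 1), (2, 10, 1), (3, 10, -1), (1, 20, -1)],
)

# One pass over the table: total score plus separate up/down counts per object.
rows = conn.execute(
    """
    SELECT object_id,
           SUM(score) AS total,
           SUM(CASE WHEN score = 1  THEN 1 ELSE 0 END) AS upvotes,
           SUM(CASE WHEN score = -1 THEN 1 ELSE 0 END) AS downvotes
    FROM rating
    GROUP BY object_id
    ORDER BY object_id
    """
).fetchall()

for object_id, total, up, down in rows:
    print(object_id, total, up, down)
```

One query per page instead of two queries per object, which addresses the asker's efficiency concern.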
I have a structure called vertex and I created some pointers to them. What I want to do is add those pointers to a list. My code below, when it tries to insert the pointer into the list, creates a segmentation fault. Can someone please explain what is going on? ``` #include <iostream> #include <list> #define NUM_VERTICES 8 using namespace std; enum { WHITE, GRAY, BLACK }; struct vertex { int color; int distance; char parent; }; int main() { //create the vertices vertex r = {WHITE, NULL, NULL}; //create pointer to the vertex structures vertex *pr = &r; //create a list to hold the vertices list<vertex*> *r_list = new list<vertex*>; list<vertex*>::iterator it; r_list->insert(it, pr); } ```
There are several things wrong here. First off, you aren't initializing the iterator, like others have said: ``` list<vertex*>::iterator it = r_list->begin(); ``` Do this and your code will be fine. But your code is done in a bad manner. Why are you allocating the list from the heap? Look at your code: you have a memory leak. You aren't calling `delete r_list` anywhere. This is why you should use smart pointers ([`std::unique_ptr`](http://en.cppreference.com/w/cpp/memory/unique_ptr), [`std::shared_ptr`](http://en.cppreference.com/w/cpp/memory/shared_ptr) if you have C++11, boost equivalents otherwise: [`boost::scoped_ptr`](http://www.boost.org/doc/libs/release/libs/smart_ptr/scoped_ptr.htm) and [`boost::shared_ptr`](http://www.boost.org/doc/libs/release/libs/smart_ptr/shared_ptr.htm)). But better yet, just do it on the stack: ``` //create a list to hold the vertices list<vertex*> r_list; list<vertex*>::iterator it = r_list.begin(); r_list.insert(it, pr); ``` In addition, using the iterator to insert is going about things the long way. Just use [push_front()](http://en.cppreference.com/w/cpp/container/list/push_front) or [push_back()](http://en.cppreference.com/w/cpp/container/list/push_back): ``` //create a list to hold the vertices list<vertex*> r_list; r_list.push_back(pr); ``` Another thing: if your list outlives the vertex you've constructed, it will be pointing to something invalid. For example: ``` // global list<vertex*> r_list; void some_function(void) { //create the vertices vertex r = {WHITE, NULL, NULL}; //create pointer to the vertex structures vertex *pr = &r; r_list.push_back(pr); } // right here, vertex r stops existing: the list now contains an // invalid pointer. 
``` One solution is to store pointers to heap-allocated vertices: ``` // global list<vertex*> r_list; void some_function(void) { //create the vertices vertex *r = new vertex; r->color = WHITE; r->distance = 0; r->parent = 0; r_list.push_back(r); } ``` Now even after the function the list is pointing to a valid heap-allocated vertex. This now has the problem that when you're done using the list, you need to go through the list and call `delete` on each element. This problem is assisted by using the [Boost Pointer Container Library](http://www.boost.org/doc/libs/1_39_0/libs/ptr_container/doc/ptr_container.html). The best way, though, is to just store vertices themselves (rather than pointers to them): ``` //create a list to hold the vertices list<vertex> r_list; //create the vertices vertex r = {WHITE, NULL, NULL}; r_list.push_back(r); ``` If you give vertex a constructor, you can even just construct them in-place: ``` struct vertex { int color; int distance; char parent; vertex(int _color, int _distance, char _parent) : color(_color), distance(_distance), parent(_parent) { } }; //create a list to hold the vertices list<vertex> r_list; r_list.push_back(vertex(WHITE, NULL, NULL)); ``` (these are now outside your problem) Firstly, NULL is generally only used when dealing with pointers. Since `distance` and `parent` are not pointers, use `0` to initialize them, rather than `NULL`: ``` //create the vertices vertex r = {WHITE, 0, 0}; ``` Secondly, use `constants` rather than `#define`: ``` #define NUM_VERTICES 8 // <- bad const int NumberVertices = 8; // <- good ``` Lastly, give your enum a name, or place it in a namespace: ``` enum Color { WHITE, GRAY, BLACK }; ``` Hope these help!
First of all, you aren't initializing `it` to anything. Do you mean: ``` list<vertex*>::iterator it = r_list->begin(); ``` Also, why are you initializing an int and char to NULL? Usually people use NULL for pointers. Also, how about naming your enum and benefiting from the type safety of enums, instead of using them as ints? Also, no need to create a new variable to make a pointer to the vertex. When you call insert, you can pass in `&r`. Also, as Peter points out, why not just use `push_back()`? Your code should look more like this: ``` using namespace std; enum Color { WHITE, GRAY, BLACK }; struct vertex { Color color; int distance; char parent; }; int main(int argc, char** argv) { //create the vertices vertex r = {WHITE, 0, 0}; //create a list to hold the vertices list<vertex*>* r_list = new list<vertex*>(); list<vertex*>::iterator it = r_list->begin(); r_list->insert(it, &r); // Or even better, use push_back (or front) r_list->push_back(&r); } ```
STL List to hold structure pointers
[ "", "c++", "list", "stl", "pointers", "segmentation-fault", "" ]
After over a decade of C/C++ coding, I've noticed the following pattern - very good programmers tend to have detailed knowledge of the innards of the compiler. I'm a reasonably good programmer, and I have an ad-hoc collection of compiler "superstitions", so I'd like to reboot my knowledge and start from the basics. Can anyone recommend links to online resources or favorite books? I'm particularly interested in C/C++ compiling, optimization, GCC and LLVM.
Start with the Dragon Book... (stress the chapters on code optimization and code generation). Go on to write a toy compiler for an educational programming language like Decaf or Cool; you may use parser generators (lex and yacc) for your front end (to make life easier and focus on the more important stuff)... Then read the GCC internals book along with browsing the GCC source code.
Compiler texts are good, but they are a bit heavy for teaching yourself. Jack Crenshaw has a "Book" that was a series of articles you can download and read, called "Let's Build a Compiler." It follows a "Learn By Doing" methodology that is great if you didn't get anything out of taking formal classes on the subject, or it's been WAY too many years since you took them (that's my case). It holds your hand and leads you through writing a compiler instead of smacking you around with Lambda Calculus and deep theoretical issues that only academia cares about. It was a good way to stir up those brain cells that only had a fuzzy memory of writing something on the Vax (YEAH, that's right, a VAX!) many many moons ago at school. It's written very conversationally and easy to just sit down and read, unlike most textbooks which require several pots of coffee just to get past the first chapter. Once you have a basis for understanding, more traditional texts such as the Dragon book are great references to expand on your understanding. (And personally, I like the dead-tree versions; I printed out Jack's, and it's much easier to read in a comfortable position than on a laptop. And the ebook readers are too expensive for something that doesn't actually feel like you're reading a real book yet.) What some might call a "downside" is that it's written in Pascal, but I thought that just made me think about it more than if someone had given me a working C program to start with. Apart from that, it was written with the 68000 in mind, which is only being used in embedded systems at this point in time. Again, for me this wasn't a problem: I knew 68000 asm, and 68000 asm is easier to read than some other asm.
How do C/C++ compilers work?
[ "", "c++", "c", "compiler-construction", "compiler-optimization", "" ]
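Before diving into the books recommended above, a tiny front end helps make the vocabulary concrete. This toy sketch (my own illustration, not taken from any of the recommended texts) tokenizes and evaluates arithmetic with precedence - the same lex → parse → evaluate pipeline the books cover in depth, minus code generation. It assumes well-formed input and uses integer division:

```python
import re

# Lexer: split "1 + 2 * 3" into NUM and OP tokens.
TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN.findall(src):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

def parse_expr(tokens, pos=0):
    # expr := term (('+'|'-') term)*
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos][1] in "+-":
        op = tokens[pos][1]
        rhs, pos = parse_term(tokens, pos + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, pos

def parse_term(tokens, pos):
    # term := NUM (('*'|'/') NUM)*  -- '*' and '/' bind tighter than '+'/'-'
    value = tokens[pos][1]
    pos += 1
    while pos < len(tokens) and tokens[pos][1] in "*/":
        op = tokens[pos][1]
        rhs = tokens[pos + 1][1]
        value = value * rhs if op == "*" else value // rhs
        pos += 2
    return value, pos

print(parse_expr(tokenize("1 + 2 * 3"))[0])
```

A real compiler adds error handling, an explicit AST, semantic analysis, and code generation on top of this skeleton, which is exactly the ground the Dragon Book and Crenshaw's series cover.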
I would like to know how (and by extension if it's possible) to put SQL Server 2005 in single user mode using SQL statements? I found these instructions on the MSDN, but alas they require SSMS. <http://msdn.microsoft.com/en-us/library/ms345598(SQL.90,loband).aspx> \*To set a database to single-user mode 1. In Object Explorer, connect to an instance of the SQL Server 2005 Database Engine, and then expand that instance. 2. Right-click the database to change, and then click Properties. 3. In the Database Properties dialog box, click the Options page. 4. From the Restrict Access option, select Single. 5. If other users are connected to the database, an Open Connections message will appear. To change the property and close all other connections, click Yes.\*
The following should work: ``` ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH NO_WAIT GO ``` with ``` ALTER DATABASE [MyDatabase] SET MULTI_USER WITH NO_WAIT GO ``` to set it back to multi-user
Try ``` alter database adventureWorks set SINGLE_USER with rollback immediate ``` Should you wish to provide ample time for already executing transactions to complete gracefully you can issue the following: ``` alter database adventureWorks set SINGLE_USER with rollback after 60 seconds ```
What SQL is required to put SQL Server 2005 in single user mode?
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have been looking all over the web for the simplest solution for this, and currently I have come across nothing that seems simple enough for my needs. I am looking for a way to manipulate a matrix of pixels manually in C++, platform independent. Does anyone know of a library that is simple to use that will help me obtain this?
Use [SDL](http://www.libsdl.org/)
Use [OpenCV](http://opencv.willowgarage.com/wiki/)
C++ pixel level control over graphics
[ "", "c++", "graphics", "" ]
I tried to load some scripts into a page using `innerHTML` on a `<div>`. It appears that the script loads into the DOM, but it is never executed (at least in Firefox and Chrome). Is there a way to have scripts execute when inserting them with `innerHTML`? Sample code: ``` <!DOCTYPE html> <html> <body onload="document.getElementById('loader').innerHTML = '<script>alert(\'hi\')<\/script>'"> Shouldn't an alert saying 'hi' appear? <div id="loader"></div> </body> </html> ```
You have to use [eval()](http://www.w3schools.com/jsref/jsref_eval.asp) to execute any script code that you've inserted as DOM text. MooTools will do this for you automatically, and I'm sure jQuery would as well (depending on the version. jQuery version 1.6+ uses `eval`). This saves a lot of hassle of parsing out `<script>` tags and escaping your content, as well as a bunch of other "gotchas". Generally if you're going to `eval()` it yourself, you want to create/send the script code without any HTML markup such as `<script>`, as these will not `eval()` properly.
Here is a method that recursively replaces all scripts with executable ones: ``` function nodeScriptReplace(node) { if ( nodeScriptIs(node) === true ) { node.parentNode.replaceChild( nodeScriptClone(node) , node ); } else { var i = -1, children = node.childNodes; while ( ++i < children.length ) { nodeScriptReplace( children[i] ); } } return node; } function nodeScriptClone(node){ var script = document.createElement("script"); script.text = node.innerHTML; var i = -1, attrs = node.attributes, attr; while ( ++i < attrs.length ) { script.setAttribute( (attr = attrs[i]).name, attr.value ); } return script; } function nodeScriptIs(node) { return node.tagName === 'SCRIPT'; } ``` **Example call:** ``` nodeScriptReplace(document.getElementsByTagName("body")[0]); ```
Can scripts be inserted with innerHTML?
[ "", "javascript", "html", "dom", "innerhtml", "" ]
This one's probably easy: I have two variables: ``` $sender_id $receiver_id ``` Those ID's are stored and assigned to a user in tblusers. I have no problem selecting one-at-a-time: ``` $data = mysql_query("SELECT * FROM tblusers WHERE usrID='$receiverID'") or die(mysql_error()); while($row = mysql_fetch_array( $data )) { echo $row['usrFirstName']; echo $row['usrLastName']; } ``` However, how would I select both rows (one for senderID and receiverID) so that I can gain access to further information on both those users. Sort of like a "SELECT within a SELECT". Thanks!
``` SELECT * FROM tblusers WHERE usrID='$receiverID' or usrID='$sender_id' ``` EDIT: clarification ``` while($row = mysql_fetch_array( $data )) { if($row['usrID'] == $receiverID) { echo "Receiver: " . $row['usrFirstName'] . " " . $row['usrLastName']; } else { echo "Sender: " . $row['usrFirstName'] . " " . $row['usrLastName']; } } ```
If you need to differentiate which is the receiver and which is the sender: ``` select 'Receiver' as UserType, * from users where usrid = $receiver_id union all select 'Sender' as UserType, * from users where usrid = $sender_id ``` This will return: ``` UserType | UsrID | Name Receiver | 23 | John Smith Sender | 42 | Adam Douglas ``` Of course, with two rows, you can always just compare the ID's to figure that out, too. This is mainly to make scaling easier if you have a larger result set than just two rows.
PHP/MySQL Select Statement
[ "", "php", "mysql", "" ]
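The same one-query idea can be sketched with Python's stdlib `sqlite3` instead of PHP/MySQL; the table layout mirrors the `tblusers` example above, and the data values are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblusers (usrID INTEGER, usrFirstName TEXT, usrLastName TEXT)")
conn.executemany(
    "INSERT INTO tblusers VALUES (?, ?, ?)",
    [(23, "John", "Smith"), (42, "Adam", "Douglas"), (7, "Mary", "Jones")],
)

sender_id, receiver_id = 42, 23

# One round trip fetches both rows; bound parameters also avoid the SQL
# injection risk of interpolating $sender_id/$receiver_id into the string.
rows = conn.execute(
    "SELECT usrID, usrFirstName, usrLastName FROM tblusers WHERE usrID IN (?, ?)",
    (sender_id, receiver_id),
).fetchall()

users = {usr_id: (first, last) for usr_id, first, last in rows}
print("Sender:", users[sender_id])
print("Receiver:", users[receiver_id])
```

Keying the results by ID, as here, is the dictionary version of the `if($row['usrID'] == $receiverID)` check in the accepted answer.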
I am trying to replace the HTML code with a heredoc statement. However, I am getting a parse error in the last line. I am sure that I have not left any leading space or indentation on the heredoc closing tag line. Following is **a part of** the code: ``` $table = <<<ENDHTML <div style="text-align:center;"> <table border="0.5" cellpadding="1" cellspacing="1" style="width:50%; margin-left:auto; margin-right:auto;"> <tr> <th>Show I.D</th> <th>Show Name</th> </tr> ENDHTML; while($row = mysql_fetch_assoc($result)){ extract($row); $table .= <<<ENDHTML <tr> <td>$showid2 </td> <td>$showname2</td> </tr> ENDHTML; } $table .= <<<ENDHTML </table> <p><$num_shows Shows</p> </div> ENDHTML; echo $table; ?> ``` Where is the problem? I have a related question in addition to the above. As a coding practice, is it better to use PHP code throughout, or is it better to use heredoc syntax? I mean, while in PHP mode, the script bounces back and forth between the HTML and PHP code. So, which is the preferred method?
Guys, finally I succeeded in getting the parse error out of my way (phew!!). I just rewrote the code and it worked. Here is the code: ``` $table = <<<ABC <div style="text-align:center;"> <table border="0.5" cellpadding="1" cellspacing="1" style="width:50%; margin-left:auto; margin-right:auto;"> <tr> <th>Show I.D</th> <th>Show Name</th> <th>Show Genre</th> </tr> ABC; while($row = mysql_fetch_assoc($result)) { extract($row); $table .= <<<ABC <tr> <td>$showid2 </td> <td>$showname2</td> <td>$showtype2_label</td> </tr> ABC; } $table .= <<<ABC </table> <p>$num_shows Shows</p> </div> ABC; echo $table; ```
From the [PHP manual about the Heredoc syntax](http://docs.php.net/manual/en/language.types.string.php#language.types.string.syntax.heredoc): > The closing identifier ***must*** begin in the first column of the line. And a little later in the nice red *Warning* box: > It is very important to note that the line with the closing identifier must contain no other characters, except ***possibly*** a semicolon (;). That means especially that the identifier ***may not be indented***, and there may not be any spaces or tabs before or after the semicolon. So you need to write the code like this to comply with the syntax specification: ``` $table = <<<ENDHTML <div style="text-align:center;"> <table border="0.5" cellpadding="1" cellspacing="1" style="width:50%; margin-left:auto; margin-right:auto;"> <tr> <th>Show I.D</th> <th>Show Name</th> </tr> ENDHTML; while($row = mysql_fetch_assoc($result)){ extract($row); $table .= <<<ENDHTML <tr> <td>$showid2 </td> <td>$showname2</td> </tr> ENDHTML; } $table .= <<<ENDHTML </table> <p><$num_shows Shows</p> </div> ENDHTML; echo $table; ``` It’s up to you if you really want to use that.
Problem with heredoc statement
[ "", "php", "" ]
In C# we do it through reflection. In JavaScript it is as simple as: ``` for(var propertyName in objectName) var currentPropertyValue = objectName[propertyName]; ``` How to do it in Python?
``` for property, value in vars(theObject).items(): print(property, ":", value) ``` Be aware that in some rare cases a class defines `__slots__`; such classes often have no `__dict__`, so `vars()` will not work on their instances.
[`dir()`](http://docs.python.org/library/functions.html#dir) is the simple way. See here: [Guide To Python Introspection](https://web.archive.org/web/20161130135145/http://www.ibm.com/developerworks/library/l-pyint/index.html/)
How to enumerate an object's properties in Python?
[ "", "python", "reflection", "properties", "" ]
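A quick runnable sketch contrasting the two approaches from the answers above - `vars()` sees only per-instance attributes, while `dir()` also lists class attributes and methods (the class and attribute names are invented for the demo):

```python
class Thing:
    class_attr = "shared"

    def __init__(self):
        self.name = "widget"
        self.size = 3

obj = Thing()

# vars(obj) reads obj.__dict__: per-instance attributes only.
for prop, value in vars(obj).items():
    print(prop, ":", value)

# dir() additionally reports class attributes, methods, and dunders.
names = dir(obj)
print("class_attr" in names, "name" in names)
```

Which one is "right" depends on whether you want the instance's own state (`vars`) or everything reachable on the object (`dir`).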
I have read the documentation and several websites on exactly how to do this, however Matlab does not seem to pick up the classes that I have added to the dynamic java class path. Nor do I use the right syntax to correctly construct the object. I have an class HandDB and which to create an object of this type and invoke it's static methods to connect to a SQL database. The class has an empty constructor and takes no parameters. The class is part of a package 'nuffielddb' which I made in a project within Netbeans. All the files are on my usb stick which is my E:\ drive... I would like to be able to use all the classes within the package. The package is contained at E:\nuffielddb. I entered the following commands into Matlab: ``` javaaddpath('E:\'); javaclasspath; % Output from java class path includes E:\ within dynamic path str = java.lang.String('Test'); % Works fine db = nuffieldbd.HandDB(); % Does not work - undefined variable or class error ``` Interesting I typed 'import nuffielddb.\*;' and received no error. Just where am I going wrong? Thanks for your help btw!
Ah problem solved! Well not solved in a sense! I found out it's actually a problem with my matlab installation and I have no idea how to fix it :-( Never mind, it works on the computers at the office :-)
if your classes are in a .jar file, make sure your classpath includes the .jar file name itself (not just the directory it's in). Also if the MATLAB JRE is Java 1.5 (R2006b is, whereas R2009a is Java 1.6, not sure when they switched), make sure your classes are compiled with 1.5 as a target, not 1.6, otherwise MATLAB will not be able to use them.
How do I use user defined Java classes within Matlab?
[ "", "java", "matlab", "interface", "" ]
I got an asp:Image in my server-markup like this: ``` <asp:Image ID="Img1" runat="server"/> ``` Now, I want to find this img in my javascript, but seem to run into the problem of ASP.Net obfuscating my names. The client-markup will look something like this: ``` <img id="ctl00_Content_Img1"/> ``` I imagine this is because everything is inside a form-element called 'Content', which is quite normal I guess? :) Any pointers on how to access this from javascript? [EDIT] I was thinking if there's an easy way to change my javascript "servertime" to search for the obfuscated id ?
Here you can get a server control's client-side id by using its ClientID property, like this: ``` <script> var imgID = '<%= Img1.ClientID %>'; var imgObject = document.getElementById(imgID); </script> ```
You can obtain a client reference of the Id generated on the server-side with the ClientID property: ``` var img1 = document.getElementById('<%= Img1.ClientID %>'); ```
How to access asp:Image from Javascript?
[ "", "asp.net", "javascript", "servercontrols", "" ]
For `INSERT`, `UPDATE` and `DELETE` SQL statements executed directly against the database, most database providers return the count of rows affected. For stored procedures, the number of records affected is always `-1`. How do we get the number of records affected by a stored procedure?
Register an out parameter for the stored procedure, and set the value based on `@@ROWCOUNT` if using SQL Server. Use `SQL%ROWCOUNT` if you are using Oracle. Mind that if you have multiple `INSERT/UPDATE/DELETE`, you'll need a variable to store the result from `@@ROWCOUNT` for each operation.
`@@RowCount` will give you the number of records affected by a SQL Statement. The `@@RowCount` works only if you issue it immediately afterwards. So if you are trapping errors, you have to do it on the same line. If you split it up, you will miss out on whichever one you put second. ``` SELECT @NumRowsChanged = @@ROWCOUNT, @ErrorCode = @@ERROR ``` If you have multiple statements, you will have to capture the number of rows affected for each one and add them up. ``` SELECT @NumRowsChanged = @NumRowsChanged + @@ROWCOUNT, @ErrorCode = @@ERROR ```
How can I get the number of records affected by a stored procedure?
[ "", "sql", "sql-server", "oracle", "t-sql", "plsql", "" ]
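The same "rows affected" idea exists outside SQL Server and Oracle: DB-API cursors expose `rowcount`, and SQLite's analogue of `@@ROWCOUNT`/`SQL%ROWCOUNT` is the `changes()` function. A minimal sketch in Python against SQLite (not SQL Server code; table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, flag INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, 0)", [(i,) for i in range(5)])

# Rows affected, as reported by the driver (DB-API cursor.rowcount).
cur = conn.execute("UPDATE t SET flag = 1 WHERE id < 3")
print(cur.rowcount)

# Rows affected, as reported inside SQL itself - SQLite's changes()
# plays the role @@ROWCOUNT plays in T-SQL.
affected = conn.execute("SELECT changes()").fetchone()[0]
print(affected)
```

As with `@@ROWCOUNT`, `changes()` reflects only the most recent data-modifying statement, so read it immediately after the statement you care about.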
I've tested this and it works fine, but it looks... weird... to me. Should I be concerned that this is nonstandard form which will be dropped in a future version of PHP, or that it may stop working? I've always had a default case as the final case, never as the first case... ``` switch($kind) { default: // The kind wasn't valid, set it to the default $kind = 'kind1'; // and fall through: case 'kind1': // Do some stuff for kind 1 here break; case 'kind2': // do some stuff for kind2 here break; // [...] case 'kindn': // do some stuff for kindn here break; } // some more stuff that uses $kind here... ``` (In case it's not obvious what I'm trying to do is ensure $kind is valid, hence the default: case. But the switch also performs some operations, and then $kind is used after the switch as well. That's why default: falls through to the first case, and also sets $kind) Suggestions? Is this normal/valid syntax?
It is an unusual idiom, it causes a little pause when you're reading it, a moment of "huh?". It works, but most people would probably expect to find the default case at the end: ``` switch($kind) { case 'kind2': // do some stuff for kind2 here break; // [...] case 'kindn': // do some stuff for kindn here break; case 'kind1': default: // Assume kind1 $kind = 'kind1'; break; } ```
In case anybody finds this page through Google as I did: I was wondering the same thing as Josh - so... One thing is standards, which I think we should all try harder to adhere to, but another thing is hacking (in the: exploit-every-possibility kinda way). While it's ugly/weird/not normal - it IS possible and IMHO could be useful in some rare cases... Consider the following: ``` $color = "greenish"; //$color = "green"; switch($color) { default: echo "no colors were selected so the color is: "; case "red": echo "red<br />\n"; break; case "blue": echo "blue<br />\n"; break; case "green": echo "green<br />\n"; break; } ``` If `$color = "greenish";` the code will print > no colors were selected so the color is red while if `$color = "green";` or any other defined cases, it will just print the color. I know it's not the best example, but you get the point ;) Hope it helps somebody.
default as first option in switch statement?
[ "", "php", "syntax", "switch-statement", "" ]
Markov chains are an (almost standard) way to generate [random gibberish](http://uswaretech.com/blog/2009/06/pseudo-random-text-markov-chains-python/) which looks intelligent to the untrained eye. How would you go about identifying Markov-generated text from human-written text? It would be awesome if the resources you point to are Python friendly.
You could use a "brute force" approach, whereby you compare the generated language to data collected on n-grams of higher order than the Markov model that generated it. i.e. If the language was generated with a 2nd order Markov model, up to 3-grams are going to have the correct frequencies, but 4-grams probably won't. You can get up to 5-gram frequencies from Google's public [n-gram dataset.](http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html) It's huge though - 24G *compressed* - you need to get it by post on DVDs from [LDC](http://www.ldc.upenn.edu). EDIT: Added some implementation details The n-grams have already been counted, so you just need to store the counts (or frequencies) in a way that's quick to search. A properly indexed database, or perhaps a Lucene index should work. Given a piece of text, scan across it and look up the frequency of each 5-gram in your database, and see where it ranks compared to other 5-grams that start with the same 4 words. Practically, a bigger obstacle might be the licensing terms of the dataset. Using it for a commercial app might be prohibited.
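To make the brute-force idea concrete, here is a minimal Python sketch (my own illustration, not code shipped with any n-gram dataset; `reference_counts` is a hypothetical stand-in for real corpus counts such as Google's):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return list(zip(*(tokens[i:] for i in range(n))))

def unseen_ngram_rate(text, reference_counts, n=4):
    """Fraction of the text's n-grams that never occur in the reference corpus.
    Text from a low-order Markov model tends to score high once n exceeds the
    model's order, because the model never saw those longer contexts."""
    grams = ngrams(text.split(), n)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if reference_counts.get(g, 0) == 0)
    return unseen / len(grams)
```

In practice you would compare this rate (or a smoothed log-probability) for the suspect text against typical values for known human text, and flag outliers.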
One simple approach would be to have a large group of humans read input text for you and see if the text makes sense. I'm only half-joking; this is a tricky problem. I believe this to be a hard problem, because Markov-chain generated text is going to have a lot of the same properties as real human text in terms of word frequency and simple relationships between the ordering of words. The differences between real text and text generated by a Markov chain are in higher-level rules of grammar and in semantic meaning, which are hard to encode programmatically. The other problem is that Markov chains are good enough at generating text that they sometimes come up with grammatically and semantically correct statements. As an example, here's an [aphorism from the kantmachine](http://www.beetleinabox.com/cgi-bin/kant.pl): > Today, he would feel convinced that > the human will is free; to-morrow, > considering the indissoluble chain of > nature, he would look on freedom as a > mere illusion and declare nature to be > all-in-all. While this string was written by a computer program, it's hard to say that a human would never say this. I think that unless you can give us more specific details about the computer and human-generated text that expose more obvious differences, it will be difficult to solve this using computer programming.
Algorithms to identify Markov generated content?
[ "", "python", "algorithm", "markov", "" ]
I’m currently responsible for rolling out the use of jQuery to the community of Web Developers within our company. Part of this involves presenting a course; however, another part involves communicating standards and best practice. If you Google 'jQuery best practice', you’ll probably find the following among the search results. <http://www.smashingmagazine.com/2008/09/16/jquery-examples-and-best-practices/> <http://www.artzstudio.com/2009/04/jquery-performance-rules/> These have been helpful and I have gleaned much useful information from them. However, what I would be really interested in would be any tips, traps, opinions, etc, on best practice from experienced jQuery developers and those who may have found themselves in a similar position to myself. Any good links would also be appreciated. **EDIT:** Added a jQuery Coding Standards section on my own page: <http://www.jameswiseman.com/blog/?p=48>
You can find this trending topic right here on StackOverflow.com: [jQuery pitfalls to avoid](https://stackoverflow.com/questions/1229259/jquery-pitfalls-to-avoid) Very interesting and useful tips, one after the other. Here are some more I found in my bookmarks: * <http://paulirish.com/2011/11-more-things-i-learned-from-the-jquery-source/> * <http://jquery.open2space.com/> * <http://thetoptenme.wordpress.com/2008/08/19/the-complete-guide-for-jquery-developer-reblog/> * <http://www.tvidesign.co.uk/blog/improve-your-jquery-25-excellent-tips.aspx>
Something I've personally started to do is a sort of an [Apps Hungarian Notation](http://en.wikipedia.org/wiki/Hungarian_notation#Systems_vs._Apps_Hungarian) for jQuery sets, by prefixing those variables with a `$` ``` var someInt = 1; var $someQueryCollection = $( 'selector' ); ``` I find that as my jQuery snippets grow, this becomes invaluable, not only in the promotion of storing jQuery sets as variables, but to help me keep track of which variables *actually are* jQuery sets.
jQuery Standards and Best Practice
[ "", "javascript", "jquery", "code-standards", "" ]
``` class Action { public: void operator() () const; } class Data { public: Data(); ~Data(); Register(Action action) { _a = action; } private: Action _a; } class Display { public: Display(Data d) { d.Register( bind(Display::SomeTask, this, _1) ); } ~Display(); void SomeTask(); } ``` I want to bind the private member \_a of Data to a member function of Display, but I get compile errors saying my argument types don't match when I call d.Register. What am I doing wrong? Thanks.
What you're trying to do is not completely clear, but I'll assume that "bind" is boost::bind (or tr1::bind). A couple of problems with bind(Display::SomeTask, this, \_1): * It should be &Display::SomeTask * The \_1 placeholder makes no sense because that creates a unary function object and: + Display::SomeTask takes no arguments + Action::operator() takes no arguments Using Boost.Function and Boost.Bind, here's what you could write to achieve what I guess you're trying to do: ``` typedef boost::function<void(void)> Action; class Data { public: Data(); ~Data(); void Register(Action action) { _a = action; } private: Action _a; }; class Display { public: Display(Data d) { d.Register( bind(&Display::SomeTask, this) ); } ~Display(); void SomeTask(); }; ```
I cannot see what 'bind' returns, but I am absolutely sure it is not compatible with the Action class. Also, you are using 'copy semantics', so if Action has an empty implementation, you will never get the desired result. Try changing the signature to Register(Action\* action), and allow 'bind' to return some child of the Action class. Also review the possibility of migrating to templates - then you could even exclude the Action class entirely ``` template <class A> class Data { ... Register(A action)... A _a; ... ``` In this case you would be able to use both classes with an overridden operator() and functions without arguments.
pass a callable object to a member function
[ "", "c++", "bind", "functor", "" ]
I have a website that has a bunch of PDFs that are pre-created and sitting on the webserver. I don't want to allow a user to just type in a URL and get the PDF file (ie <http://MySite/MyPDFFolder/MyPDF.pdf>) I want to only allow them to be viewed when I load them and display them. I have done something similar before. I used PDFSharp to create a PDF in memory and then load it to a page like this: ``` protected void Page_Load(object sender, EventArgs e) { try { MemoryStream streamDoc = BarcodeReport.GetPDFReport(ID, false); // Set the ContentType to pdf, add a header for the length // and write the contents of the memorystream to the response Response.ContentType = "application/pdf"; Response.AddHeader("content-length", Convert.ToString(streamDoc.Length)); Response.BinaryWrite(streamDoc.ToArray()); //End the response Response.End(); streamDoc.Close(); } catch (NullReferenceException) { Communication.Logout(); } } ``` I tried to use this code to read from a file, but could not figure out how to get a MemoryStream to read in a file. I also need a way to say that the "/MyPDFFolder" path is non-browsable. Thanks for any suggestions
To load a PDF file from the disk into a buffer: ``` byte [] buffer; using(FileStream fileStream = new FileStream(Filename, FileMode.Open)) { using (BinaryReader reader = new BinaryReader(fileStream)) { buffer = reader.ReadBytes((int)reader.BaseStream.Length); } } ``` Then you can create your `MemoryStream` like this: ``` using (MemoryStream msReader = new MemoryStream(buffer, false)) { // your code here. } ``` But if you already have your data in memory, you don't need the `MemoryStream`. Instead do this: ``` Response.ContentType = "application/pdf"; Response.AddHeader("Content-Length", buffer.Length.ToString()); Response.BinaryWrite(buffer); //End the response Response.End(); ```
Anything that is displayed on the user's screen can be captured. You might protect your source files by using a browser-based PDF viewer, but you can't prevent the user from taking snapshots of the data. As far as keeping the source files safe...if you simply store them in a directory that is not under your web root...that should do the trick. Or you can use an .htaccess file to restrict access to the directory.
Can I display a PDF, but not allow linking to it in a website?
[ "", "c#", "asp.net", "web", "" ]
I understand the need for a virtual destructor. But why do we need a ***pure*** virtual destructor? In one of the C++ articles, the author has mentioned that we use pure virtual destructor when we want to make a class abstract. But we can make a class abstract by making any of the member functions as pure virtual. So my questions are 1. When do we really make a destructor pure virtual? Can anybody give a good real time example? 2. When we are creating abstract classes is it a good practice to make the destructor also pure virtual? If yes..then why?
1. Probably the real reason that pure virtual destructors are allowed is that to prohibit them would mean adding another rule to the language and there's no need for this rule since no ill-effects can come from allowing a pure virtual destructor. 2. Nope, plain old virtual is enough. If you create an object with default implementations for its virtual methods and want to make it abstract without forcing anyone to override any **specific** method, you can make the destructor pure virtual. I don't see much point in it but it's possible. Note that since the compiler will generate an implicit destructor for derived classes, if the class's author does not do so, any derived classes will **not** be abstract. Therefore having the pure virtual destructor in the base class will not make any difference for the derived classes. It will only make the base class abstract (thanks for [@kappa](https://stackoverflow.com/users/771271/kappa)'s comment). One may also assume that every deriving class would probably need to have specific clean-up code and use the pure virtual destructor as a reminder to write one but this seems contrived (and unenforced). **Note:** The destructor is the only method that even if it *is* pure virtual **has** to have an implementation in order to instantiate derived classes (yes pure virtual functions can have implementations, being pure virtual means derived classes must override this method, this is orthogonal to having an implementation). ``` struct foo { virtual void bar() = 0; }; void foo::bar() { /* default implementation */ } class foof : public foo { void bar() { foo::bar(); } // have to explicitly call default implementation. }; ```
All you need for an abstract class is at least one pure virtual function. Any function will do; but as it happens, the destructor is something that *any* class will have—so it's always there as a candidate. Furthermore, making the destructor pure virtual (as opposed to just virtual) has no behavioral side effects other than to make the class abstract. As such, a lot of style guides recommend that the pure virtual destructor be used consistently to indicate that a class is abstract—if for no other reason than it provides a consistent place someone reading the code can look to see if the class is abstract.
Why do we need a pure virtual destructor in C++?
[ "", "c++", "destructor", "pure-virtual", "" ]
Is C# code faster than Visual Basic.NET code, or that is a myth?
That is a myth. They compile down to the same CLR. However, the compiled IL for the same routine may come out slightly differently. So certain routines may be slightly (0.0000001%) faster in C# and vice versa for VB.NET, but they are both running on the same common runtime, so they are the same in performance where it counts.
The only reason that the same code in vb.Net might be slower than c# is that VB [defaults to have `checked` arithmetic on and c# doesn't](http://msdn.microsoft.com/en-us/library/system.overflowexception.aspx). > By default, arithmetic operations and overflows in Visual Basic are checked; in c#, they are not. If you disable that then the resulting IL is likely to be identical. To test this, take your code and run it through Reflector and you will see that it looks very similar if you switch from c# to vb.Net views. It is possible that an optimization (or just difference in behaviour) in the c# compiler versus the vb.net compiler might lead to one slightly favouring the other. This is: 1. Unlikely to be significant * if it was, it would be low-hanging fruit to fix 2. Unlikely to happen. * c# and vb.net's abstract syntax trees are very close in structure. You could automatically transliterate a great deal of vb.Net into c# and vice versa. What is more, the result would stand a good chance of looking idiomatic. There are a few constructs in c# not in vb.net such as unsafe pointers. Where used they might provide some benefits, but only if they were actually used, and used properly. If you are down to that sort of optimization you should be benchmarking appropriately. Frankly, if it makes a **really** big difference then the question should not be "Which of c#/vb.net should I use"; you should instead be asking yourself why you don't move some code over to C++/CLI. The only way I could think of that the different compilers could introduce serious, pervasive differences is if one chose to: 1. Implement tail calls in different places * These can make things faster or slower and certainly would affect things on deeply recursive functions. The 4.0 JIT compiler on all platforms [will now respect all tail call instructions even if it has to do a lot of work to achieve it](http://blogs.msdn.com/clrcodegeneration/archive/2009/05/11/tail-call-improvements-in-net-framework-4.aspx). 2. 
Implemented iterator blocks or anonymous lambdas significantly more efficiently. * I believe both compilers are about as efficient at a high level as they are going to get though in this regard. Both languages would require explicit support for the 'yield foreach' style available to f#'s sequence generators. 3. Boxed when it was not necessary, perhaps by not using the [constrained opcode](http://msdn.microsoft.com/en-us/library/system.reflection.emit.opcodes.constrained%28VS.85%29.aspx) * I have never seen this happen but would love an example where it does. Both the c# and vb.net compilers currently leave such optimization complexities as en-registering of variables, calling conventions, inlining and unrolling entirely up to the common JIT compiler in the CLR. This is likely to have far more of an impact on anything else (especially when the 32 bit and 64bit JIT's can now behave quite differently).
Is C# code faster than Visual Basic.NET code?
[ "", "c#", "vb.net", "" ]
Is there any Windows Forms control that shows a list of drive letters with icons?
No, but I'm sure you could make it happen; it shouldn't be too tricky, either with a TreeView or, if you would just like the list, a ListView. The code to get the drives would be similar to this: ``` //Get all Drives DriveInfo[] ListAllDrives = DriveInfo.GetDrives(); ``` To determine the icons for the ListViewItem or TreeViewNodes you could do something like this: ``` foreach (DriveInfo Drive in ListAllDrives) { //Create ListViewItem, give name etc. ListViewItem NewItem = new ListViewItem(); NewItem.Text = Drive.Name; //Check type and get icon required. if (Drive.DriveType == DriveType.Removable) { //Set Icon as Removable Icon } //else if (Drive type is other... etc. etc.) } ```
If you are willing to pay for this, you could check out <http://viewpack.qarchive.org/>. I'm not aware of any free controls.
Drive select box with icons in windows form
[ "", "c#", ".net", "windows", "winforms", "" ]
I have an Ajax JavaScript method that pulls data from a page etc. I want this process to run on a timed interval, say every minute. But I don't want it to loop forever, so it should max out at 3 times. What is the best way to implement this?
Like this: ``` var runCount = 0; function timerMethod() { runCount++; if(runCount >= 3) clearInterval(timerId); // stop after this, the third run //... } var timerId = setInterval(timerMethod, 60000); //60,000 milliseconds ```
A closure-based solution, using `setInterval()` and `clearInterval()`: ``` // define a generic repeater var repeater = function(func, times, interval) { var ID = window.setInterval( function(times) { return function() { if (--times <= 0) window.clearInterval(ID); func(); } }(times), interval); }; // call the repeater with a function as the argument repeater(function() { alert("stuff happens!"); }, 3, 60000); ``` EDIT: Another way of expressing the same, using `setTimeout()` instead: ``` var repeater = function(func, times, interval) { window.setTimeout( function(times) { return function() { if (--times > 0) window.setTimeout(arguments.callee, interval); func(); } }(times), interval); }; repeater(function() { alert("stuff happens!"); }, 3, 2000); ``` Maybe the latter is a bit easier to understand. In the `setTimeout()` version you can ensure that the next iteration happens only *after* the previous one has finished running. You'd simply move the `func()` line *above* the `setTimeout()` line.
Want a javascript function to run every minute, but max 3 times
[ "", "javascript", "jquery", "ajax", "" ]
I have two tables, one called `customer` and one called `customer_attributes`. The idea is that the customer table holds core customer data, and the application can be customised to support additional attributes depending on how it is used. `customer_attributes` has the following 3 columns: ``` customerID key1 value1 ``` Can I retrieve the full row, with any additional attributes if specified, defaulting to NULL if not? I'm using the following query but it only works if both attributes exist in the customer\_attributes table. ``` SELECT `customer`.*, `ca1`.`value1` AS `wedding_date`, `ca2`.`value1` AS `test` FROM `customer` LEFT JOIN `customer_attributes` AS `ca1` ON customer.customerID = ca1.customerID LEFT JOIN `customer_attributes` AS `ca2` ON customer.customerID = ca2.customerID WHERE (customer.customerID = '58029') AND (ca1.key1 = 'wedding_date') AND (ca2.key1 = 'test') ``` In this case the two attributes I'm interested in are called 'wedding\_date' and 'test'
Try this: ``` SELECT `customer`.*, `ca1`.`value1` AS `wedding_date`, `ca2`.`value1` AS `test` FROM `customer` LEFT JOIN `customer_attributes` AS `ca1` ON customer.customerID = ca1.customerID AND ca1.key1='wedding_date' LEFT JOIN `customer_attributes` AS `ca2` ON customer.customerID = ca2.customerID AND ca2.key1='test' WHERE (customer.customerID = '58029') ``` Moving the 2 WHERE conditions on ca1/ca2 into the JOIN condition instead should sort it
The reason rows are only returned is because of the tests in the WHERE clause. Any rows that do not have the correct key1 are ignored altogether - negating your LEFT JOIN. You could move the key1 tests to your JOIN conditions ``` SELECT `customer`.*, `ca1`.`value1` AS `wedding_date`, `ca2`.`value1` AS `test` FROM `customer` LEFT JOIN `customer_attributes` AS `ca1` ON customer.customerID = ca1.customerID AND ca1.key1 = 'wedding_date' LEFT JOIN `customer_attributes` AS `ca2` ON customer.customerID = ca2.customerID AND ca2.key1 = 'test' WHERE (customer.customerID = '58029') ```
Retrieving a row, with data from key-value pair table in MySQL
[ "", "sql", "mysql", "key-value", "" ]
I have two forms A and B. Form A is the default start up form of the application. I do some stuff in Form A, and then I want to run my Form B in parallel and pass a parameter to a method in Form B from Form A. How?
Ian has given some example code, but I'd like to make a broader point: **UI classes are just classes.** How would you pass a value from one object to another object if they *weren't* part of the user interface? You'd have a reference from one to the other, and call a method or set a property. The same exact thing holds for user interface objects. I mention this because it's something that comes up a lot. Whenever you ask yourself: "How do I do *X* with forms?" try asking yourself the same question but with plain old classes. Often the answer will be exactly the same. Of course there are *some* differences for user interface classes - particularly with threading - but for an awful lot of cases, it really helps if you just think of them as normal classes.
FormA should construct/hold an instance of FormB. Obviously the method on FormB needs to be public; change the type of object used in CallMethodOnFormB to the correct type too. ``` public class FormA { private FormB fB; public void CreateFormB() { // This shows the form in parallel. this.fB = new FormB(); this.fB.Show(); } public void CallMethodOnFormB(object value) { this.fB.RunSomeFunction(value); } } ```
How to pass values between forms in c# windows application?
[ "", "c#", "" ]
I am developing a website in PHP and I would like to use an MVC framework to do this as I want to gain experience with PHP frameworks. I have looked at Zend, CakePHP, and CodeIgniter and I have not been able to find an intuitive way to accomplish this. What I need to do is create a layout that will be, for example: ``` <html> <head> <!--scripts go here--> </head> <body> <div id='banner'></div> <div id='menu'><?php $this->layout()->menu ?></div> <div id='container'><?php $this->layout()->content ?></div> <div id='ads'><?php $this->layout()->ads ?></div> <div id='footer'>blah</div> </body> </html> ``` I am looking for a framework that could do this simply without a lot of hacks as this should be simple to accomplish from my perspective. This would pull the menu, content, and ads from separate controllers and views and would of course be dynamic content. I do not want to put the menu code into every view... Maybe this is simple and I am just going about it the wrong way? Any help is appreciated. Thank you, -Ben
Symfony can do what you are looking for using a mix of concepts. * Layout - This is the main structure used to decorate a page. * Template - This is the main view attached to a URL by the controller. * Fragments - Lightweight and uses data you pass to it. * Component - Used if you need access to the model, the code is split between presentation and logic. * Slot - used to replace a defined section of the layout. In your case the layout would have the main content included using the template logic which is the core of the view layer and the other items would be either fragments or components depending on how much of the model they would need to access. The [Symfony documentation](http://www.symfony-project.org/book/1_2/07-Inside-the-View-Layer#chapter_07_sub_page_layout) has a full explanation of this.
Actually, what you want to achieve here can be done with very little deviation from what you have already, using Zend Framework. For the menu, you can use `Zend_Navigation`, which allows you to define the tree of the navigation of your site, and create simple menus. I find that populating the `Zend_Navigation` container in a Front Controller plugin allows you to cache the object easily, so you have little performance worries from traversing your site's tree. For the ads, you simply use the placeholder view helper, and you can once again use a Front Controller plugin to populate this. Using a plugin has the advantage that the logic of counting impressions and rotating ads is kept separate from your actions, and easily performs its task across every action. dustin.cassiday's method of using the action stack is risky, and can lead to massive headaches debugging your apps. Itay Moav's method is now really redundant due to `Zend_Navigation`.
PHP Frameworks - Layout Dynamic Menu
[ "", "php", "zend-framework", "cakephp", "codeigniter", "symfony1", "" ]
I am doing my first database project. I would like to know **why you should use `NOT NULL` in the following query** ``` ... TITLE nvarchar(60) NOT NULL .. ``` **Context** ``` CREATE TABLE Questions ( USER_ID integer FOREIGN KEY REFERENCES User_info(USER_ID) PRIMARY KEY CHECK (USER_ID>0), QUESTION_ID integer FOREIGN KEY REFERENCES Tags(QUESTION_ID) NOT NULL CHECK (USER_ID>0), QUESTION_BODY nvarchar(4000) NOT NULL, TITLE nvarchar(60) NOT NULL, /////// HERE MODERATOR_REMOVAL boolean NOT NULL, SENT_TIME varchar(15) NOT NULL ) ``` I watched VPuml's tutorial. They made all values in the logical diagram nullable, while all the rest were `NOT NULL`. This suggests to me that nullable should be used with logical diagrams. **Is there any other use of not null in databases?** I feel that we can check that the user gives a value with JS, for instance, rather than at the database level.
If a null value is not allowed, you should check on user entry *and* have NOT NULL in the database. Having NOT NULL in the database allows you to make absolutely sure that no bad data is entered into the database, regardless of mistakes in front-end code. However, since database errors are generally bad to show your users, you should check to make sure that a null value is not being submitted before it gets to the database check.
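For example, the application-side half of that check might look like this minimal Python sketch (my own illustration; the field names mirror the `NOT NULL` columns from the table in the question, and the dict stands in for a submitted form):

```python
# Columns declared NOT NULL in the question's CREATE TABLE
REQUIRED_FIELDS = ("title", "question_body", "sent_time")

def validate_question(form):
    """Return user-friendly error messages before the INSERT is attempted.
    The database's NOT NULL constraints stay in place as the last line
    of defense against mistakes in front-end code."""
    errors = []
    for field in REQUIRED_FIELDS:
        value = form.get(field)
        if value is None or str(value).strip() == "":
            errors.append(f"'{field}' is required")
    return errors
```

If validation passes, the INSERT proceeds; if a bug lets a null slip through anyway, the database still rejects the row.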
`NOT NULL` is often used for foreign keys (i.e. links to other tables). It ensures that a row links to some other table. It is almost always used on primary keys, meaning the unique identifier in the table. It's also a great constraint for required fields. This ensures that the transaction will fail if a user doesn't enter a title or a question. Databases are great at enforcing constraints like this very quickly, so you see a lot of that type of business logic put into databases. Do note, however, that you'll want to check for a database error on the application side and handle it accordingly. Anyway, `NOT NULL` is really for a column you *always* want a value in. The cases for that are actually quite a few, but hopefully this gives you some semblance of why we use it.
How should an empty title be checked in a database?
[ "", "sql", "database", "nullable", "ddl", "" ]
I have several forms in a C# application. I use Visual Studio 2010 Beta, but .NET 3.5 and C# 3. I have a base form, called FilteredQueryViewForm in the Shd namespace and I want some other forms to inherit it (because they will basically do the same stuff, but with some additions). I changed things from private to protected in the FilteredQueryViewForm class, so they're accessible from the derived forms. After this I've created a derived form and set the base class to FilteredQueryViewForm. The designer of the derived class complained about Shd.FilteredQueryViewForm not having any constructors... despite the fact that it had one, with 3 parameters. I thought parameters could be a problem, so I also created a (public, of course) constructor without parameters, but it still doesn't work. The error message is the same: > "Constructor on type 'Shd.FilteredQueryViewForm' not found." And the designer of the derived class won't load. I have tried restarting vs2010beta and re-creating the derived form, but nothing seems to help. Google didn't yield any useful results for me on this problem. :( Is this a problem of Visual Studio 2010 Beta? Or am I doing something wrong?
You will need a constructor without parameters which calls the InitializeComponent() method in every one of your forms. Then close the designer window, rebuild the solution and try to reopen the designer. That should work. Rebuilding the solution is essential. The problem is that if you create a form that inherits from Shd.FilteredQueryViewForm, the designer will try to call the constructor of the parent form, but it loads this form not from code but from its built assembly.
I know that it's an old topic, but these things happen again and again, so I think my contribution might be useful in the future. Emiswelt says "You will need a constructor without parameters which calls the InitializeComponent() method in every one of your forms." This is not really needed. You can declare a custom parameterized constructor on the derived form and call the "InitializeComponent" method normally (with a call to a custom base constructor too). The important thing is that your constructor calls "InitializeComponent" (for new controls) and the base constructor calls "InitializeComponent" (for inherited components). This situation will work at runtime, but you won't see the inherited controls in the Visual Studio designer. To show all the controls at design time you only need to add a simple constructor without parameters in the base class. For example, if your base is a form with a button and two radio buttons: ``` using System.Windows.Forms; namespace Test { public partial class Form1 : Form { public Form1(string foo) { //use "foo" here InitializeComponent(); //here button and radios will be initialized } } } ``` You can see it in the design tool and you can avoid the blank constructor (with no parameters) without problems. Form2 now inherits from Form1: ``` namespace Test { public partial class Form2 : Form1 { public Form2(string foo) : base(foo) { //you can use "foo" here even if it is passed to base class too InitializeComponent(); } } } ``` There is no blank constructor and it will compile and run normally. At runtime your Form2 will show the same control set as Form1. But... you can't see it at design time, because Visual Studio can't identify where the "InitializeComponent" method is, and an error is shown. Why? Because there should be a constructor without parameters somewhere on the call chain.
The solution is a simple modification to the base class: ``` using System.Windows.Forms; namespace Test { public partial class Form1 : Form { public Form1(string foo):this() //chain to the parameterless constructor so InitializeComponent() runs { //use foo here } public Form1() //Visual studio designer likes this! { InitializeComponent(); } } } ``` That's all.
.NET inherited (WinForms) Form - VS designer issue
[ "", "c#", "winforms", "visual-studio-2010", "visual-inheritance", "" ]
Let's suppose that I have a list of elements, and I want to select only some of them, according to a certain function (for example a *distance* to another element). I want to have as a result a list of tuples, with the distance and the element. So, I wrote the following code ``` result = [ ( myFunction(C), C) for C in originalList if myFunction(C) < limit ] ``` But `myFunction` is a very time-consuming function, and the `originalList` is quite big. So doing it like that, `myFunction` will be called twice for every selected element. So, is there a way to avoid this? I have two other possibilities, but they are not so good: 1. the first one is to create the unfiltered list ``` unfiltered = [ (myFunction(C),C) for C in originalList ] ``` and then filter it ``` result = [ (dist,C) for dist,C in unfiltered if dist < limit ] ``` but in that case, I duplicate my `originalList` and waste some memory (the list could be quite big - more than 10,000 elements) 2. the second one is tricky and not very pythonic, but efficient (the best we can do, since the function should be evaluated once per element). `myFunction` stores its last result in a global variable (`lastResult` for example), and this value is re-used in the list comprehension ``` result = [ (lastResult,C) for C in originalList if myFunction(C) < limit ] ``` Do you have any better idea to achieve that, in an efficient and pythonic way? Thanks for your answers.
Sure, the difference between the following two:

```
[f(x) for x in list]
```

and this:

```
(f(x) for x in list)
```

is that the first will generate the list in memory, whereas the second is a new generator, with lazy evaluation.

So, simply write the "unfiltered" list as a generator instead. Here's your code, with the generator inline:

```
def myFunction(x):
    print("called for: " + str(x))
    return x * x

originalList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
limit = 10

result = [C2 for C2 in ((myFunction(C), C) for C in originalList) if C2[0] < limit]
# result = [C2 for C2 in [(myFunction(C), C) for C in originalList] if C2[0] < limit]
```

Note that you will not see a difference in the printout from the two, but if you were to look at memory usage, the second statement, which is commented out, will use more memory.

To do a simple change to your code in your question, rewrite unfiltered as this:

```
unfiltered = [ (myFunction(C),C) for C in originalList ]
             ^                                         ^
             +---------- change these to (..) ---------+
unfiltered = ( (myFunction(C),C) for C in originalList )
```
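To make the single-evaluation point concrete, here is a small self-contained sketch; `my_function`, `original_list`, and `limit` are hypothetical stand-ins for the question's names, instrumented with a call counter purely for illustration:

```python
# Hypothetical stand-ins for the question's myFunction/originalList,
# with a counter showing the function runs once per element.
calls = 0

def my_function(x):
    global calls
    calls += 1
    return x * x

original_list = list(range(1, 11))
limit = 30

# The inner generator computes (my_function(c), c) lazily; the outer
# comprehension filters those pairs without calling my_function again.
result = [pair for pair in ((my_function(c), c) for c in original_list)
          if pair[0] < limit]

print(result)  # [(1, 1), (4, 2), (9, 3), (16, 4), (25, 5)]
print(calls)   # 10 -- one call per element, not two per selected element
```

With the naive `if my_function(c) < limit` comprehension from the question, the counter would instead read 15 here (10 filter calls plus one extra call for each of the 5 kept elements).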
Don't use a list comprehension; a normal for loop is fine here.
List Comprehensions in Python : efficient selection in a list
[ "", "python", "list-comprehension", "" ]
See duplicate: [Customizing the cassini webserver](https://stackoverflow.com/questions/1133615/customizing-the-cassini-webserver)

---

I have a web service hosted using the Cassini web server. It works fine if I reference it by localhost, but when I use my machine's name (even using the browser locally), I get:

```
Server Error in '/' Application.
HTTP Error 403 - Forbidden.

Version Information: Cassini Web Server 1.0.40305.0
```

How can I configure it to accept all requests?
Cassini is specifically built to reject any outside connections. You could rebuild it from the [source code](http://blogs.msdn.com/dmitryr/archive/2008/10/03/cassini-for-framework-3-5.aspx) if you want it to accept outside connections.

Edit: In reply to the comment below about what to edit in the source code, you'd probably want to change the following in `Server.cs` on line 88:

```
_socket = CreateSocketBindAndListen(AddressFamily.InterNetwork, IPAddress.Loopback, _port);
```

Into something like this:

```
_socket = CreateSocketBindAndListen(AddressFamily.InterNetwork, IPAddress.Any, _port);
```

This will bind the listening socket to all available IP addresses on the system instead of only to the localhost address (127.0.0.1).
To update this answer, IIS Express is a new alternative that allows this. <http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx>
How can I configure Cassini web server to accept requests to different hosts?
[ "", "c#", ".net", "web-services", "cassini", "" ]
I've got a string that I'm trying to split into chunks based on blank lines. Given a string `s`, I thought I could do this:

```
re.split('(?m)^\s*$', s)
```

This works in some cases:

```
>>> s = 'foo\nbar\n \nbaz'
>>> re.split('(?m)^\s*$', s)
['foo\nbar\n', '\nbaz']
```

But it doesn't work if the line is completely empty:

```
>>> s = 'foo\nbar\n\nbaz'
>>> re.split('(?m)^\s*$', s)
['foo\nbar\n\nbaz']
```

What am I doing wrong? [python 2.5; no difference if I compile `'^\s*$'` with `re.MULTILINE` and use the compiled expression instead]
Try this instead:

```
re.split('\n\s*\n', s)
```

The problem is that `'^\s*$'` actually only matches "spaces (if any) that are alone on a line"--not the newlines themselves. This leaves the delimiter empty when there's nothing on the line, which doesn't make sense.

This version also gets rid of the delimiting newlines themselves, which is probably what you want. Otherwise, you'll have the newlines stuck to the beginning and end of each split part.

Treating multiple consecutive blank lines as defining an empty block ("abc\n\n\ndef" -> ["abc", "", "def"]) is trickier...
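A quick sanity check of the suggested pattern against both inputs from the question (the whitespace-only blank line and the completely empty one):

```python
import re

# Both inputs from the question.
s1 = 'foo\nbar\n \nbaz'   # blank line containing a space
s2 = 'foo\nbar\n\nbaz'    # completely empty line

pattern = r'\n\s*\n'      # the delimiter includes the newlines themselves

print(re.split(pattern, s1))  # ['foo\nbar', 'baz']
print(re.split(pattern, s2))  # ['foo\nbar', 'baz']
```

Both cases now split identically, and the delimiting newlines are consumed along with the blank line.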
The re library can split on one or more empty lines! An empty line is a string that consists of zero or more whitespaces, starts at the start of a line and ends at the end of a line. Special character '$' matches the end of the string or just before the newline at the end of the string, and in MULTILINE mode also matches before a newline (excerpt from [docs](http://docs.python.org/2/library/re.html)). That's why we need to add '\s*' for the line break. Everything is possible :-)

```
>>> import re
>>> text = "foo\n \n \n \nbar\n"
>>> re.split("(?m)^\s*$\s*", text)
['foo\n', 'bar\n']
```

The same regex works with windows style line breaks.

```
>>> import re
>>> text = "foo\r\n \r\n \r\n \r\nbar\r\n"
>>> re.split("(?m)^\s*$\s*", text)
['foo\r\n', 'bar\r\n']
```
Matching blank lines with regular expressions
[ "", "python", "regex", "" ]
I know that this feature is to enhance user experience (by not retyping their details every time), and users can tweak their browser settings, but is it possible to send some "headers" etc. (or use any server-side method) so that when users try to log into my site they don't get this "Do you want IE/Firefox to remember your password?" prompt? I am not very sure if it is possible; what would you say? Any ideas will help. Thanks.
Put 'autocomplete="off"' in the input tag of the field. This works in IE and Firefox, but it's not part of the HTML standard, so you'll end up with invalid mark-up.
Add autocomplete="off" to the <form> [Mozilla docs on autocomplete](https://developer.mozilla.org/en/How_to_Turn_Off_Form_Autocompletion)
can we avoid browser's "remember password" when a user tries to log in to the system?
[ "", "php", "browser", "passwords", "header", "" ]
I fail to understand why the auto-implemented property language feature exists in C# 3.0. What difference does it make when you say

```
public string FirstName;
```

rather than

```
public string FirstName { get; set; }
```
Because they are implemented differently in the resulting IL code (and machine language). An automatic property is still exposed as a public getter and setter, whereas a public field is just that: a single field. Thus, implementing an auto property allows you at some later date to change the internal behavior of either the getter or setter (like adding a validator) without recompiling or re-coding any dependent classes that use it...
Just to add to what other people have said: when declaring a public field, the field is accessible for read and write. When declaring a public automatic property, although the property is public, you can still add a modifier to control accessibility at the get/set level.

```
public string FirstName { get; private set; }
```

The user of your class sees FirstName as a public property. However, he/she cannot write to it.
Difference between Automatic Properties and public field in C# 3.0
[ "", "c#", "c#-3.0", "" ]
I am kind of a slow learner I guess when it comes to coding. I have been learning PHP for a couple of years and I still don't understand classes, so it's time I put some effort into at least understanding them a little better. I use functions for everything. People often make comments to me on here that they can't believe I have a social network site and I don't use classes. I really do not understand the benefit of them; can you explain the benefits, besides it supposedly being easier for multiple people to work on your code? To me it seems like classes just complicate simple tasks.
Simply (in fact, extremely simply), classes allow you to organize code in logical units as well as provide containers and templates for user-created objects.

Let's say you have a car... A car has a capacity and people inside.

```
class Car {
    private $people = array();
    private $capacity;

    function __construct($capacity) {
        $this->capacity = $capacity;
    }

    function addPerson($name) {
        if (count($this->people) >= $this->capacity) {
            throw new Exception("Car is already at capacity");
        } else {
            $this->people[] = $name;
        }
    }

    function getPeople() {
        return $this->people;
    }

    function getCapacity() {
        return $this->capacity;
    }
}
```

Now, we can start using those cars:

```
$aliceCar = new Car(2);
$aliceCar->addPerson("Alice");

$bobCar = new Car(4);
$bobCar->addPerson("Bob");
$bobCar->addPerson("Jake");
```

I now have 2 cars (instances), which hold different data.

```
echo implode(',', $aliceCar->getPeople()); // Alice
echo $aliceCar->getCapacity();             // 2

echo implode(',', $bobCar->getPeople());   // Bob,Jake
echo $bobCar->getCapacity();               // 4
```

I might also want to have a van, which will have an additional property for doors:

```
class Van extends Car {
    private $num_doors;

    function __construct($capacity, $num_doors) {
        parent::__construct($capacity); // Call the parent constructor
        $this->num_doors = $num_doors;
    }

    function getNumDoors() {
        return $this->num_doors;
    }
}
```

Now let's use that van:

```
$jakeVan = new Van(7, 5); // Van is ALSO a Car
$jakeVan->addPerson("Ron");    // Jake is with Bob now, so his son is driving the van
$jakeVan->addPerson("Valery"); // Ron's girlfriend

echo implode(',', $jakeVan->getPeople()); // Ron,Valery
echo $jakeVan->getCapacity();             // 7
echo $jakeVan->getNumDoors();             // 5
```

Now maybe you can see how we could apply those concepts towards the creation of, for example, a `DBTable` and a `User` class.
---

In fact, it's hard to really start explaining why classes simplify one's life without getting into the concepts of Object Oriented Programming (abstraction, encapsulation, inheritance, polymorphism). I recommend you read the following book. It will help you grasp the core concepts of OOP and help you understand why objects really make your life easier. Without an understanding of those concepts, it's easy to dismiss classes as just another complication.

# PHP 5 Objects, Patterns, and Practice

Available at [Amazon.com](https://rads.stackoverflow.com/amzn/click/com/1590593804)
This is a *huge* topic and even the best answers from the best SOers could only hope to scratch the surface, but I'll give my two cents.

Classes are the foundation of [OOP](http://en.wikipedia.org/wiki/Polymorphism_in_object-oriented_programming). They are, in a very basic way, object blueprints. They afford many features to the programmer, including encapsulation and polymorphism. Encapsulation, inheritance, and polymorphism are very key aspects of OOP, so I'm going to focus on those for my example. I'll write a structured (functions only) and then an OOP version of a code snippet and I hope you will understand the benefits.

First, the structured example

```
<?php

function speak( $person )
{
    switch ( $person['type'] )
    {
        case 'adult':
            echo "Hello, my name is " . $person['name'];
            break;
        case 'child':
            echo "Goo goo ga ga";
            break;
        default:
            trigger_error( 'Unrecognized person type', E_USER_WARNING );
    }
}

$adult = array( 'type' => 'adult', 'name' => 'John' );
$baby  = array( 'type' => 'child', 'name' => 'Emma' );

speak( $adult );
speak( $baby );
```

And now, the OOP example

```
abstract class Person
{
    protected $name;

    public function __construct( $name )
    {
        $this->name = $name;
    }

    abstract public function speak();
}

class Adult extends Person
{
    public function speak()
    {
        echo "Hello, my name is " . $this->name;
    }
}

class Baby extends Person
{
    public function speak()
    {
        echo "Goo goo ga ga";
    }
}

$adult = new Adult( 'John' );
$baby = new Baby( 'Emma' );

$adult->speak();
$baby->speak();
```

Not only should it be evident that just creating new data structures (objects) is easier and more controlled, pay attention to the logic in the speak() function in the first example versus the speak() methods in the second. Notice how the first one must explicitly check the type of person before it can act? What happens when you add other action functions, like walk(), sit(), or whatever else you might have for your data?
*Each* of those functions will have to duplicate the "type" check to make sure they execute correctly. This is the *opposite* of encapsulation. The data and the functions which use/modify them are not connected in any explicit way. Whereas with the OOP example, the correct speak() method is invoked based on how the object was created. This is inheritance/polymorphism in action. And notice how speak() in this example, being a method of the object, is explicitly connected to the data it's acting upon? You are stepping into a big world, and I wish you luck with your learning. Let me know if you have any questions.
Can you help me understand PHP Classes a little better?
[ "", "php", "class", "" ]
I have a generic list of objects. Each object has 9 string properties. I want to turn that list into a DataSet that I can pass to a DataGridView. What's the best way to go about doing this?
Have you tried binding the list to the datagridview directly? If not, try that first because it will save you lots of pain. If you have tried it already, please tell us what went wrong so we can better advise you. Data binding gives you different behaviour depending on what interfaces your data object implements. For example, if your data object only implements `IEnumerable` (e.g. `List<T>`), you will get very basic one-way binding, but if it implements `IBindingList` as well (e.g. `BindingList<T>`, `DataView`), then you get two-way binding.
I apologize for putting an answer up to this question, but I figured it would be the easiest way to view my final code. It includes fixes for nullable types and null values :-)

```
public static DataSet ToDataSet<T>(this IList<T> list)
{
    Type elementType = typeof(T);
    DataSet ds = new DataSet();
    DataTable t = new DataTable();
    ds.Tables.Add(t);

    // add a column to the table for each public property on T
    foreach (var propInfo in elementType.GetProperties())
    {
        Type ColType = Nullable.GetUnderlyingType(propInfo.PropertyType) ?? propInfo.PropertyType;
        t.Columns.Add(propInfo.Name, ColType);
    }

    // go through each property on T and add each value to the table
    foreach (T item in list)
    {
        DataRow row = t.NewRow();
        foreach (var propInfo in elementType.GetProperties())
        {
            row[propInfo.Name] = propInfo.GetValue(item, null) ?? DBNull.Value;
        }
        t.Rows.Add(row);
    }

    return ds;
}
```
Convert generic list to dataset in C#
[ "", "c#", "list", "dataset", "" ]
I have some simple Java code that looks similar to this in its structure:

```
abstract public class BaseClass
{
    String someString;

    public BaseClass(String someString)
    {
        this.someString = someString;
    }

    abstract public String getName();
}

public class ACSubClass extends BaseClass
{
    public ACSubClass(String someString)
    {
        super(someString);
    }

    public String getName()
    {
        return "name value for ACSubClass";
    }
}
```

I will have quite a few subclasses of `BaseClass`, each implementing the `getName()` method in its own way ([template method pattern](http://en.wikipedia.org/wiki/Template_method_pattern)). This works well, but I don't like having the redundant constructor in the subclasses. It's more to type and it is difficult to maintain. If I were to change the method signature of the `BaseClass` constructor, I would have to change all the subclasses. When I remove the constructor from the subclasses, I get this compile-time error:

`Implicit super constructor BaseClass() is undefined for default constructor. Must define an explicit constructor`

Is what I am trying to do possible?
You get this error because a class which has no constructor has a **default** constructor, which is argument-less and is equivalent to the following code:

```
public ACSubClass()
{
    super();
}
```

However since your BaseClass declares a constructor (and therefore doesn't have the default, no-arg constructor that the compiler would otherwise provide) this is illegal - a class that extends BaseClass can't call `super();` because there is not a no-argument constructor in BaseClass.

This is probably a little counter-intuitive because you might think that a subclass automatically has any constructor that the base class has.

The simplest way around this is for the base class to not declare a constructor (and thus have the default, no-arg constructor) or have a declared no-arg constructor (either by itself or alongside any other constructors). But often this approach can't be applied - because you need whatever arguments are being passed into the constructor to construct a legit instance of the class.
For those who Google for this error and arrive here: there might be another reason for receiving it. Eclipse gives this error when you have project setup - system configuration mismatch. For example, if you import Java 1.7 project to Eclipse and you do not have 1.7 correctly set up then you will get this error. Then you can either go to `Project - Preference - Java - Compiler` and `switch to 1.6 or earlier`; or go to `Window - Preferences - Java - Installed JREs` and add/fix your JRE 1.7 installation.
Java error: Implicit super constructor is undefined for default constructor
[ "", "java", "inheritance", "dry", "boilerplate", "" ]
I've got an object which is a Dictionary of an unknown type (i.e. I don't know the types of the key and the value). I want to retrieve all of its values so I can access them by index.

So what I want to do is something like this:

```
Dictionary<object, object> d = (Dictionary<object, object>)obj; // cast error
l = new List<KeyValuePair<object, object>>();
foreach (KeyValuePair<object, object> k in d)
    l.Add(new KeyValuePair<object, object>(k.Key, k.Value));
```

However, as expected, the runtime won't let me cast to a `Dictionary<object, object>`. Is there a way to do this in .NET 3.0? (for example using reflection?)
You can't cast `obj` to a `Dictionary<object, object>` because it isn't a `Dictionary<object, object>`. Yes, its keys and values derive from `object`, and can thus be cast to `object`. But you can't cast generic types like this in C# because they aren't covariant. Even though `T` derives from `object`, `List<T>` doesn't derive from `List<object>`. Consider this method:

```
void ModifyList(List<object> list)
{
    for (int i = 0; i < list.Count; i++)
    {
        list[i] = list[i].ToString();
    }
}
```

If you could cast `List<int>` to `List<object>`, you could pass a `List<int>` to that method and it would turn into something else.

This is going to change when covariant generics are introduced in C# 4.0. [This article](http://reddevnews.com/Articles/2009/05/01/Generic-Covariance-and-Contravariance-in-C-40.aspx?Page=1) is a pretty good explanation of the issues involved.

But to solve your actual problem, this will do the trick:

```
List<KeyValuePair<object, object>> list = d.AsEnumerable()
    .Select(x => new KeyValuePair<object, object>(x.Key, x.Value))
    .ToList();
```
Since `Dictionary<,>` implements `IDictionary` (non-generic), just iterate that:

```
IDictionary data = ...
foreach (DictionaryEntry de in data)
{
    Console.WriteLine(de.Key + ": " + de.Value);
}
```
How can I retrieve all the KeyValuePairs contained in a Dictionary<?,?>?
[ "", "c#", ".net", "generics", "reflection", "" ]
It seems to me that any time I come across internal calls or types, it's like I hit a roadblock. Even if they are accessible in code, like open source, it still feels as if they are not usable parts of the API code itself, i.e. as if modifying them is discouraged. Should one refrain from using the internal keyword unless it's absolutely necessary?

I am asking this for an open-source API. Still, not everyone will want to change the API; most will just use it to write their own code for the app itself.
There is nothing wrong with having an internal type in your DLL that is not a part of your public API. In fact, in anything other than a trivial DLL, it is more likely a sign of bad design if you *don't* have an internal type (or at least a non-public type).

Why? Public APIs are a way of exposing the parts of your object model you want a consumer to use. Having an API of entirely public types means that you want the consumer to see literally everything in your DLL. Think of the versioning issues that come along with that stance. Changing literally anything in your object model is a breaking change. Having internal types allows you great flexibility in your model while avoiding breaking changes to your consumers.
Internal types are types that are explicitly meant to be kept out of the API. You should only mark things internal that you don't want people to see. My guess is that you're coming across types that are internal, but would have been valuable additions to the public API. I've seen this in quite a few projects. That's a different issue, though - it's really the same issue as whether a private type should have been public. In general, a good project **should** have internal or private types. They help implement the required feature set without bloating the public API. Keeping the public API as small as possible to provide the required feature set is part of what makes a library usable.
Do internal types compromise good API design?
[ "", "c#", ".net", "" ]
OK, so I have a database field of type char(1) that has a small number of possible state codes (e.g. 'F' = Failure, 'U' = Unknown, etc.). I'd like to have a C# enum class that corresponds to these states. I can do:

```
public enum StatusCode : byte
{
    Unknown = (byte) 'U',
    Failure = (byte) 'F',
    // etc.
}
```

So far so good. But in the DataTable returned from the database, the column values are System.Data.SqlTypes.SqlString instances. There are obviously some issues converting from a C# string (or even a C# char) to a C# byte (since a C# char is really a UTF-16 code point). But in this case I know the values are constrained to a small set, and the code should throw an exception if a value outside this set comes through.

With that in mind, what's the best way of doing this? Is it safe to cast from a SqlString to a byte? Would Convert.ToByte() be better? Would it be better to simply use a switch/case construct to crosswalk the values into the enum? I'm looking for the "best" way to do this, not only in terms of getting the right results but also for code clarity. I suppose I could also just use some constants like

```
public const char UnknownStatus = 'U';
public const char FailureStatus = 'F';
```

But I'd rather use an enum if possible. Any thoughts?

Edit: To clarify what I want to do with this, I'm expecting to use these values frequently throughout my code. For example, I want to be able to do things like:

```
public void DoSomething(StatusCode currentStatus)
{
    if (currentStatus == StatusCode.Failure)
    {
        throw new SomeException();
    }

    switch (currentStatus)
    {
        case StatusCode.Unknown:
            // do something
            break;
    }
}
```

And so forth. I particularly want to avoid things like:

```
public void DoSomething(char currentStatus)
{
    if (currentStatus == 'F')
    {
        // do something
    }
}
```

Since in this case I'm using what amounts to "magic numbers" all over the place. In particular, this would make migrating to some other state-flagging system virtually impossible. Does that make sense?
Maybe a "constant" object?

```
public sealed class StatusCode
{
    private char value;

    public static readonly StatusCode Unknown = new StatusCode('U');
    public static readonly StatusCode Failure = new StatusCode('F');

    private StatusCode(char v)
    {
        value = v;
    }

    public override string ToString()
    {
        return value.ToString();
    }
}
```

Then, later in your code, you could use it like an enum: `StatusCode.Unknown`. You could also provide an internal method to 'parse' a received value into an object of StatusCode.
**Skip to edit**

Have you tried this (which doesn't work, as you've checked and commented):

```
public enum StatusCode : char
{
    Failure = 'F',
    Unknown = 'U',
    ...
}
```

**EDIT - correct solution**

or this (maybe even try with a struct):

```
public sealed class StatusCode
{
    public const char Failure = 'F'; // const (not static readonly) so the values can be used in case labels
    public const char Unknown = 'U';
    ...

    public char Value { get; set; }
}
```

The code you provided would work like this:

```
public void DoSomething(StatusCode currentStatus)
{
    if (currentStatus.Value == StatusCode.Failure)
    {
        throw new SomeException();
    }

    switch (currentStatus.Value)
    {
        case StatusCode.Unknown:
            // do something
            break;
    }
}
```

If you don't like using the `Value` property, you can always implement an implicit equality operator between the `StatusCode` and `char` types. In that case, your code wouldn't change a bit.
How to implement C# enum for enumerated char(1) database field?
[ "", "c#", "ascii", "byte", "" ]
In [weka](http://www.cs.waikato.ac.nz/ml/weka/) I load an arff file. I can view the relationship between attributes using the visualize tab. However I can't understand the meaning of the jitter slider. What is its purpose?
You can find the answer in the [mailing list archives](https://list.scms.waikato.ac.nz/pipermail/wekalist/2007-September/011382.html): The jitter function in the Visualize panel just adds artificial random noise to the coordinates of the plotted points in order to spread the data out a bit (so that you can see points that might have been obscured by others).
I don't know weka, but generally jitter is a term for the variation of a periodic signal to some reference interval. I'm guessing the slider allows you to set some range or threshold below which data points are treated as being regular, or to modify the output to introduce some variation. The [wikipedia](http://en.wikipedia.org/wiki/Jitter) entry can give you some background.

Update: from [this pdf](http://webpages.uncc.edu/~wjiang3/TA/weka/weka2.pdf), the jitter slider is for this purpose:

> "Jitter" option to deal with nominal attributes (and to detect "hidden" data points)

Based on the accompanying slide it looks like it introduces some variation in the visualisation, perhaps to show when two data points overlap.

Update 2: This [google books extract](http://books.google.com/books?id=QTnOcZJzlUoC&pg=PA393&lpg=PA393&dq=weka+jitter&source=bl&ots=3fkFgnXgQb&sig=14MFz1OjhBhai-wf6KvPgOJr-lQ&hl=en&ei=HwF_SozwBpihjAfjz-DwAQ&sa=X&oi=book_result&ct=result&resnum=7#v=onepage&q=weka%20jitter&f=false) (to Data mining By Ian H. Witten, Eibe Frank) seems to confirm my guess:

> [jitter] is a random displacement applied to X and Y values to separate points that lie on top of one another. Without jitter, 1000 instances at the same data point would look just the same as 1 instance
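The random-displacement idea is easy to sketch; the following is a minimal illustration of the concept, not Weka's actual implementation (the function name, noise range, and seed are made up for the example):

```python
import random

def jitter(points, amount, seed=0):
    """Displace each (x, y) point by uniform random noise in [-amount, amount]."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    return [(x + rng.uniform(-amount, amount),
             y + rng.uniform(-amount, amount))
            for x, y in points]

# Three instances that plot at exactly the same coordinate would hide
# each other; after jittering they become three distinct visible points.
points = [(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)]
spread = jitter(points, amount=0.1)

print(len(set(points)))  # 1 -- all three overlap
print(len(set(spread)))  # 3 -- all three are now distinct
```

The slider in Weka's Visualize tab corresponds to the `amount` knob here: zero means plot the raw coordinates, larger values spread overlapping points further apart.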
What is the meaning of jitter in visualize tab of weka
[ "", "java", "data-mining", "weka", "arff", "" ]
To use a struct, we need to instantiate the struct and use it just like a class. Then why don't we just create a class in the first place?
A struct is a value type so if you create a copy, it will actually physically copy the data, whereas with a class it will only copy the reference to the data
A major difference between the semantics of `class` and `struct` is that `struct`s have value semantics. What this means is that if you have two variables of the same type, they each have their own copy of the data. Thus if a variable of a given value type is set equal to another (of the same type), operations on one will not affect the other (that is, assignment of value types creates a copy). This is in sharp contrast to reference types.

There are other differences:

1. Value types are implicitly `sealed` (it is not possible to derive from a value type).
2. Value types can not be `null`.
3. Value types are given a default constructor that initializes the value type to its default value.
4. A variable of a value type is always a value of that type. Contrast this with classes, where a variable of type `A` could refer to an instance of type `B` if `B` derives from `A`.

Because of the difference in semantics, it is inappropriate to refer to `struct`s as "lightweight classes."
Why do we need struct? (C#)
[ "", "c#", "" ]
I know that `include`, `isset`, `require`, `print`, `echo`, and some others are not functions but language constructs. Some of these language constructs need parentheses, others don't.

```
require 'file.php';
isset($x);
```

Some have a return value, others do not.

```
print 'foo'; // 1
echo 'foo';  // no return value
```

So what is the **internal** difference between a language construct and a built-in function?
(This is longer than I intended; please bear with me.)

Most languages are made up of something called a "syntax": the language is comprised of several well-defined keywords, and the complete range of expressions that you can construct in that language is built up from that syntax.

For example, let's say you have a simple four-function arithmetic "language" that only takes single-digit integers as input and completely ignores order of operations (I told you it was a simple language). That language could be defined by the syntax:

```
// The | means "or" and the := represents definition
$expression := $number | $expression $operator $expression
$number     := 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
$operator   := + | - | * | /
```

From these three rules, you can build any number of single-digit-input arithmetic expressions. You can then write a parser for this syntax that breaks down any valid input into its component types (`$expression`, `$number`, or `$operator`) and deals with the result. For example, the expression `3 + 4 * 5` can be broken down as follows:

```
// Parentheses used for ease of explanation; they have no true syntactical meaning
$expression = 3 + 4 * 5
            = $expression $operator (4 * 5)                        // Expand into $exp $op $exp
            = $number $operator $expression                        // Rewrite: $exp -> $num
            = $number $operator $expression $operator $expression  // Expand again
            = $number $operator $number $operator $number          // Rewrite again
```

Now we have a fully parsed syntax, in our defined language, for the original expression. Once we have this, we can go through and write a parser to find the results of all the combinations of `$number $operator $number`, and spit out a result when we only have one `$number` left.

Take note that there are no `$expression` constructs left in the final parsed version of our original expression. That's because `$expression` can always be reduced to a combination of other things in our language.
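The toy grammar above is small enough to turn into a runnable sketch. Here is one possible evaluator (Python used purely for illustration); it follows the right-recursive expansion shown above, so `3 + 4 * 5` groups as `3 + (4 * 5)`:

```python
# $operator := + | - | * | /
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b}

def evaluate(tokens):
    """$expression := $number | $number $operator $expression
    (right-recursive, so there is no operator precedence)."""
    head = int(tokens[0])          # $number
    if len(tokens) == 1:
        return head
    op = tokens[1]                 # $operator
    return OPS[op](head, evaluate(tokens[2:]))  # recurse on the rest as $expression

# Mirrors the worked breakdown: 3 + 4 * 5 parses as 3 + (4 * 5) = 23
print(evaluate('3 + 4 * 5'.split()))  # 23
```

A full parser would build the token list from raw characters instead of `split()`, but the shape is the same: match the input against the grammar rules until only `$number`s and `$operator`s remain, then reduce.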
PHP is much the same: language constructs are recognized as the equivalent of our `$number` or `$operator`. They **cannot be reduced into other language constructs**; instead, they're the base units from which the language is built up. The key difference between functions and language constructs is this: the parser deals directly with language constructs. It simplifies functions into language constructs. The reason that language constructs may or may not require parentheses and the reason some have return values while others don't depends entirely on the specific technical details of the PHP parser implementation. I'm not that well-versed in how the parser works, so I can't address these questions specifically, but imagine for a second a language that starts with this: ``` $expression := ($expression) | ... ``` Effectively, this language is free to take any expressions it finds and get rid of the surrounding parentheses. PHP (and here I'm employing pure guesswork) may employ something similar for its language constructs: `print("Hello")` might get reduced down to `print "Hello"` before it's parsed, or vice-versa (language definitions can add parentheses as well as get rid of them). This is the root of why you can't redefine language constructs like `echo` or `print`: they're effectively hardcoded into the parser, whereas functions are mapped to a set of language constructs and the parser allows you to change that mapping at compile- or runtime to substitute your own set of language constructs or expressions. At the end of the day, the internal difference between constructs and expressions is this: language constructs are understood and dealt with by the parser. Built-in functions, while provided by the language, are mapped and simplified to a set of language constructs before parsing. 
More info: * [Backus-Naur form](http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form), the syntax used to define formal languages (yacc uses this form) **Edit:** Reading through some of the other answers, people make good points. Among them: * A language builtin is faster to call than a function. This is true, if only marginally, because the PHP interpreter doesn't need to map that function to its language-builtin equivalents before parsing. On a modern machine, though, the difference is fairly negligible. * A language builtin bypasses error-checking. This may or may not be true, depending on the PHP internal implementation for each builtin. It is certainly true that more often than not, functions will have more advanced error-checking and other functionality that builtins don't. * Language constructs can't be used as function callbacks. This is true, because a construct is **not a function**. They're separate entities. When you code a builtin, you're not coding a function that takes arguments - the syntax of the builtin is handled directly by the parser, and is recognized as a builtin, rather than a function. (This may be easier to understand if you consider languages with first-class functions: effectively, you can pass functions around as objects. You can't do that with builtins.)
Language constructs are provided by the language itself (instructions like "if", "while", ...); hence their name. One consequence of that is that they are faster to invoke than pre-defined or user-defined functions *(or so I've heard/read several times)*. I have no idea how it's done, but one thing they can do (because of being integrated directly into the language) is "bypass" some kind of error handling mechanism. For instance, isset() can be used with non-existing variables without causing any notice, warning or error. ``` function test($param) {} if (test($a)) { // Notice: Undefined variable: a } if (isset($b)) { // No notice } ``` *Note that this is not the case for the constructs of all languages.* Another difference between functions and language constructs is that some of the latter can be called without parentheses, like a keyword. For instance: ``` echo 'test'; // language construct => OK function my_function($param) {} my_function 'test'; // function => Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING ``` *Here too, it's not the case for all language constructs.* I suppose there is absolutely no way to "disable" a language construct, because it is part of the language itself. On the other hand, lots of "built-in" PHP functions are not really built-in, because they are provided by extensions that are always active *(but not all of them)*. Another difference is that language constructs can't be used as "function pointers" (I mean, callbacks, for instance): ``` $a = array(10, 20); function test($param) {echo $param . '<br />';} array_map('test', $a); // OK (function) array_map('echo', $a); // Warning: array_map() expects parameter 1 to be a valid callback, function 'echo' not found or invalid function name ``` I don't have any other ideas coming to mind right now... and I don't know much about the internals of PHP... 
So that'll be it right now ^^ If you don't get many answers here, maybe you could ask this on the **internals mailing list** (see <http://www.php.net/mailing-lists.php>), where there are many PHP core developers; they are the ones who would probably know about that stuff ^^ *(And I'm really interested in the other answers, btw ^^)* As a reference: [list of keywords and language constructs in PHP](http://php.net/manual/en/reserved.keywords.php)
What is the difference between a language construct and a "built-in" function in PHP?
[ "", "php", "function", "built-in", "language-construct", "" ]
I want to create a list of 50 elements, each consisting of four chars. Every four-char string should go through a loop one by one and get checked for one of three letters (o, a, e) anywhere in the current string. Depending on whether or not these letters are found, different commands are executed. I've tried all day; I'm frustrated, please help me...
``` typedef std::list<std::string> MyList; MyList myList = getMyList(); MyList::const_iterator i = myList.begin(), iEnd = myList.end(); for (; i != iEnd; ++i) { const std::string& fourChars = *i; if (fourChars.length() == 4) { std::string::const_iterator j = fourChars.begin(), jEnd = fourChars.end(); for (; j != jEnd; ++j) { char c = *j; switch (c) { case 'o': case 'O': doO(); break; case 'a': case 'A': doA(); break; case 'e': case 'E': doE(); break; default: // not oae } } } else { // not 4 chars, what should we do? } } ```
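If it helps to see the same control flow without the C++ iterator noise, here is a rough language-neutral sketch in Python. The `classify` helper and its letter counts are invented for this demo (they stand in for the "different commands" the question mentions):

```python
def classify(strings):
    """Walk a list of 4-char strings and count which of o/a/e appear.

    Returns a dict mapping each marker letter to the number of times it
    was seen across all valid (exactly four characters long) strings.
    """
    counts = {"o": 0, "a": 0, "e": 0}
    for s in strings:
        if len(s) != 4:
            continue  # skip strings that aren't exactly four chars
        for ch in s.lower():
            if ch in counts:
                counts[ch] += 1  # here you would dispatch a command instead
    return counts

print(classify(["abcd", "oooo", "xyz", "eeee"]))
```

The shape is the same as the C++ version: an outer loop over the strings, a length check, and an inner per-character dispatch.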
You can go as follows (note that `i` and `j` must be declared before use): ``` #define NUM_ELEMENTS 50 #define WIDTH 4 // your function char list[NUM_ELEMENTS][WIDTH]; int i, j; // initialize list for(i=0 ; i < NUM_ELEMENTS ; i++ ) for(j=0 ; j < WIDTH ; j++) { switch(list[i][j]) { case 'o': // execute command break; case 'a': // execute command break; case 'e': // execute command break; } } ```
working with string arrays in c++
[ "", "c++", "arrays", "string", "list", "parsing", "" ]
I have a windows mobile app that look like this: ``` class Program { static void Main(string[] args) { RunHook runHook = new RunHook(); } } class RunHook { private HookKeys hook; public RunHook() { hook = new HookKeys(); hook.HookEvent += EventForHook; } private void EventForHook(HookEventArgs e, KeyBoardInfo keyBoardInfo, ref Boolean handled) { if ((keyBoardInfo.scanCode == 4) && (keyBoardInfo.vkCode == 114)) handled = true; } } ``` It will create a hook into the keyboard (I know that is frowned on by some). My issue is that I need the Main method to never return. This is going to run on devices owned by my company and we are using this to disable the phone hardware keys. This seems like it should be simple, but I am stuck on it. On normal .NET I would just call `Console.Readline()`, but that does not work on Windows Mobile Compact Framework. I have also tried `Thread.Sleep(0)`, but it does not work either. Thanks for any feedback.
`Thread.Sleep(0)` sleeps for zero milliseconds. You probably want `Thread.Sleep(Timeout.Infinite)`. You might also consider creating an `EventWaitHandle`: ``` class Program { static public ManualResetEvent StopMain; static void Main(string[] args) { StopMain = new ManualResetEvent(false); RunHook runHook = new RunHook(); StopMain.WaitOne(); // waits until signalled } } ``` Then, if you were ever ready to exit Main(), you could call (from another thread): ``` Program.StopMain.Set(); ```
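The block-until-signalled idea is not specific to .NET. As a hedged illustration of the same pattern, here is a sketch using Python's `threading.Event` (this demonstrates the concept only; it is not Compact Framework code):

```python
import threading

stop_main = threading.Event()  # plays the role of the ManualResetEvent

def worker():
    # ... real work would happen here (install hooks, run a loop, etc.) ...
    stop_main.set()  # signal main that it may exit

t = threading.Thread(target=worker)
t.start()
stop_main.wait()  # blocks like WaitOne() until set() is called
t.join()
print("main can now exit")
```

As in the C# version, main parks on the wait handle and only proceeds once some other thread signals it.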
If it is going to run on devices that are owned by your company, then why not run a small Windows program in the background? I mean, just hide the window. Let it sit in your task bar. [Click on this link for more information on the use of notification icons in CF.](http://www.microsoft.com/downloads/details.aspx?familyid=5a8384c0-34a5-47d1-bb50-e5e261288ae3&displaylang=en)
Keep a Windows Mobile Console App Running
[ "", "c#", "windows-mobile", "compact-framework", "console", "" ]
I know how to create one to many, many to many relationships in SQL Server, but is it possible to create one to one relationship? And is it possible to create 1 to 0 or 1 relationship?
Yes, just put `PRIMARY KEY`s of both entities into a link table, defining a `UNIQUE` key on both entities: ``` myrel(entityA, entityB, UNIQUE(entityA), UNIQUE(entityB)) ``` Thus, if `entityA = 1` is related to `entityB = 2`: ``` entityA entityB 1 2 ``` , you can relate neither `entityA = 1` to any other `entityB`, nor `entityB = 2` to any other `entityA`. If your relation is *symmetrical* (i.e. `entityA` and `entityB` belong to the same domain, and relating `entityA` to `entityB` also means relating `entityB` to `entityA`), then define an additional `CHECK` constraint: ``` entityA entityB UNIQUE(entityA) UNIQUE(entityB) CHECK(entityA < entityB) ``` and transform the normalized relation to a canonical one with this query: ``` SELECT entityA, entityB FROM myrel UNION SELECT entityB, entityA FROM myrel ``` This is a `(0-1):(0-1)` relation. If you want it to be a `1:1` relation, define this table to be a domain for both `entityA` and `entityB`: ``` myrel(entityA, entityB, UNIQUE(entityA), UNIQUE(entityB)) A(id, PRIMARY KEY(id), FOREIGN KEY(id) REFERENCES myrel (entityA)) B(id, PRIMARY KEY(id), FOREIGN KEY(id) REFERENCES myrel (entityB)) ``` By removing the `FOREIGN KEY` from either table's definition, you change the corresponding part of the relationship from `1` to `(0-1)`.
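To watch a link table of this shape enforce the `(0-1):(0-1)` rule for itself, here is a throwaway SQLite sketch (SQLite is used only because it is convenient for a demo; the table mirrors the `myrel` definition in the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE myrel (
        entityA INTEGER NOT NULL UNIQUE,
        entityB INTEGER NOT NULL UNIQUE
    )
""")
conn.execute("INSERT INTO myrel VALUES (1, 2)")

# Relating entityA = 1 to any other entityB must now fail:
try:
    conn.execute("INSERT INTO myrel VALUES (1, 3)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # the UNIQUE constraint rejects the second row
```

The same `IntegrityError` would fire for a second row reusing `entityB = 2`, which is exactly the one-to-(zero-or-one) behaviour described above.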
Two ways: 1) a PK-PK 1:1 relationship. Tables A and B both have a PK. Create an FK from the B PK to the PK of A. This makes 'B' the FK side of the 1:1 relationship. Or 2) an FK/UC-PK 1:1 relationship. Table A has a PK and table B has a foreign key to A, but the FK in B is not on the PK of B. Now create a UC (unique constraint) on the FK field(s) in B.
relationships in Sql
[ "", "sql", "sql-server", "database-design", "" ]
Programming a Python web application, I want to create a text area where the users can enter text in a lightweight markup language. The text will be imported to a html template and viewed on the page. Today I use this command to create the textarea, which allows users to enter any (html) text: ``` my_text = cgidata.getvalue('my_text', 'default_text') ftable.AddRow([Label(_('Enter your text')), TextArea('my_text', my_text, rows=8, cols=60).Format()]) ``` How can I change this so that only some (safe, eventually lightweight) markup is allowed? All suggestions including sanitizers are welcome, as long as it easily integrates with Python.
Use the python [markdown](http://www.freewisdom.org/projects/python-markdown/Using_as_a_Module) implementation ``` import markdown mode = "remove" # or "replace" or "escape" md = markdown.Markdown(safe_mode=mode) html = md.convert(text) ``` It is very flexible, you can use various extensions, create your own etc.
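If pulling in the markdown package is not an option, the same "escape everything, then allow a tiny whitelist" idea can be sketched with the standard library alone. Note that the `*word*`-to-bold rule below is a toy rule made up for this example, not real Markdown:

```python
import html
import re

def tiny_markup(text):
    """Escape all raw HTML, then allow one toy rule: *word* -> <b>word</b>."""
    escaped = html.escape(text)  # neutralizes any <script> etc. from the user
    return re.sub(r"\*([^*]+)\*", r"<b>\1</b>", escaped)

print(tiny_markup("<script>x</script> and *bold*"))
```

Escaping first and re-introducing only markup you generated yourself is what keeps the output safe for the HTML template.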
You could use [restructured text](http://docutils.sourceforge.net/rst.html) . I'm not sure if it has a sanitizing option, but it's well supported by Python, and it generates all sorts of formats.
Lightweight markup language for Python
[ "", "python", "html", "markup", "" ]
I'm having trouble accessing only one XML element that has the same name as other elements (i this case, the tag "name"). How do I access ONLY the "wantedName" below using jQuery? ``` <sample> <input> <value>0.2</value> <name>varName</name> </input> <name>wantedName</name> <output> <value>4</value> <name>varName2</name> </output> </sample> ``` Using $(xml).find("name") returns an array containing [varName, wantedName, varName2]. Unfortunately I can't just resort to accessing the 1st index because the xml is dynamic, so I'm not sure at which index the "wantedName" will be. I'd like to access "wantedName" by doing something like "if this has an immediate parent named `<sample>`, get it, otherwise ignore".
Assuming the structure of the xml always looks like the example given (where name is directly after the input tag), I think this might work ``` $(xml).find("input + name"); ```
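The "direct child only" selection exists in most XML APIs. For comparison, here is the same idea applied to the question's sample document with Python's `xml.etree` (just to illustrate the selection, not a jQuery replacement):

```python
import xml.etree.ElementTree as ET

doc = """<sample>
  <input><value>0.2</value><name>varName</name></input>
  <name>wantedName</name>
  <output><value>4</value><name>varName2</name></output>
</sample>"""

root = ET.fromstring(doc)
# find() with a bare tag name matches direct children only, so the
# <name> elements nested inside <input>/<output> are ignored
wanted = root.find("name").text
print(wanted)
```

Whether you spell it `sample > name`, `input + name`, or `root.find("name")`, the point is the same: constrain the match by position in the tree rather than by index.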
Presumably you want to retrieve any `name` tags that are children of `sample`, but *not* those that are children of `input` or any other tag? If so, this should work: `$(xml).find("sample > name");`
Accessing only one element that has a shared name in an XML DOM using jquery
[ "", "javascript", "jquery", "xml", "" ]
Does anyone know of good reference material for creating a COM add-in for the VBA Editor environment? I know it's exactly the same as writing a COM add-in for common environments using the add-in model provided by Microsoft (the IDTExtensibility2 interface), just registering the COM registry keys to a different location. Where is that location? Also, are there any examples of .NET interop code showing how to reference the code editor (in as much detail as possible) and add custom menu items to the context menu, i.e. normal customization code? Also, if anyone knows how to hook up Visual Studio for debugging said project, that would help. If you know any of these, I'd be very much indebted.
It looks like the person who put together the MZ Tools has a small section on their site with some resources on building VBE Addins <http://www.mztools.com/resources_vs60_addins.aspx>
"Microsoft Office 2000 Visual Basic Programmer's Handbook" (ISBN 3-86063-289-2) has four (!) pages on this subject. I only have the German edition, but I could translate the important bits if you think it might help. Obviously, nothing about .Net Interop in there, but some basic info about writing VBE Add-ins.
VBA extensibility, COM add-in guidance?
[ "", "c#", "vba", "com", "interop", "editor", "" ]
What does Trusted = yes/no mean in Sql connection string? I am creating a connection string as below : ``` string con= string.Format( "user id=admin;password=admin;server={0};Trusted_Connection=yes;database=dbtest;connection timeout=600", _sqlServer); ``` Please Help
`Integrated Security` or `Trusted_Connection` When `false`, User ID and Password are specified in the connection. When `true`, the current Windows account credentials are used for authentication. Recognized values are true, false, yes, no, and sspi (strongly recommended), which is equivalent to true.
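Connection strings are just semicolon-delimited key/value pairs, so a quick way to sanity-check what yours actually says is to split it up. A deliberately naive sketch follows (real providers normalize synonyms like `Trusted_Connection`/`Integrated Security`, which this does not):

```python
def parse_conn_string(s):
    """Very naive key=value parser for a semicolon-delimited string."""
    pairs = (part.split("=", 1) for part in s.split(";") if part.strip())
    return {k.strip().lower(): v.strip() for k, v in pairs}

con = ("user id=admin;password=admin;server=myserver;"
       "Trusted_Connection=yes;database=dbtest;connection timeout=600")
parsed = parse_conn_string(con)
print(parsed["trusted_connection"])
```

Printing the parsed dict makes it obvious when a key such as `Trusted_Connection=yes` is silently overriding the `user id`/`password` pair you also supplied.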
Check out [connectionstrings.com](http://www.connectionstrings.com) for a detailed description of all the various SQL Server connection string properties. Specifically, [this article](http://www.connectionstrings.com/Articles/Show/all-sql-server-connection-string-keywords):
What does Trusted = yes/no mean in Sql connection string?
[ "", "c#", "sql-server-2005", "ado.net", "" ]
> **Possible Duplicates:** > [Is there a site that emails out daily C# tips and tricks?](https://stackoverflow.com/questions/195096/is-there-a-site-that-emails-out-daily-c-tips-and-tricks) > [What are your C# Commandments?](https://stackoverflow.com/questions/1228176/what-are-your-c-commandments) **A little background:** I have skimmed through Effective C# by Bill Wagner. I know that my opinion on a book like that is too insignificant. But, still just for record I plan to read and re-read the books for years to come. It is just too good. **Now the question:** Can **[SO](http://stackoverflow.com)** C# gurus tell me their favorite C# tip(s) to make their code effective. **EDIT:** I looking for something that C# gurus use in their day-to-day coding to make it effective. I am not looking for hidden features of C#. If this has already been answered let me know. Thanks.
We have similar questions: [Hidden Features of C#](https://stackoverflow.com/questions/9033/hidden-features-of-c) [Is there a site that emails out daily C# tips and tricks?](https://stackoverflow.com/questions/195096/is-there-a-site-that-emails-out-daily-c-tips-and-tricks)
No fancy tricks, just clear well documented code, because in the end you will have to come back to it at 3am in the morning and curse the bug that your cleverness created.
Effective C# tips
[ "", "c#", "" ]
The problem is that I use Sitecore 5.2, and when I start to publish something, it only shows the publishing window and a "Queued" message. There is a chance that some people ran a full publish several times (that means 15000 items and 4 slaves), and the publishing queue is stuck. Restarting the web server/app pool is not an option. Any experience with Sitecore publishing errors? Something like [this](http://www.mypicx.com/07232009/sitecore_publish_queued/)
If several full publishes are queued then you will have to either wait and see if things finish or recycle the AppPool to start over. Try checking your Sitecore logs to see if there is any information in there that will tell you if publishing is actually locked up or if it is just taking a while to finish. In my experience it is *possible* to see that locked up publishing dialog and still have the publishing queue clear itself. Unfortunately though you will most likely have to recycle the AppPool to fix this issue.
So the final "solution" was the AppPool restart. It's a shame that the full publish locked up the Sitecore shell while the DB browser could still publish. Anyway, I think I should contact someone at Sitecore support. BTW, thanks Andrew & Kim!
Is there any way to get publishing queue in sitecore?
[ "", "c#", ".net", "content-management-system", "publishing", "sitecore", "" ]
In Django, how do I associate a Form with a Model so that data entered into the form are inserted into the database table associated with the Model? How do I save that user input to that database table? For example: ``` class PhoneNumber(models.Model): FirstName = models.CharField(max_length=30) LastName = models.CharField(max_length=30) PhoneNumber = models.CharField(max_length=20) class PhoneNumber(forms.Form): FirstName = forms.CharField(max_length=30) LastName = forms.CharField(max_length=30) PhoneNumber = forms.CharField(max_length=20) ``` I know there is a class for creating a form from the the model, but even there I'm unclear on how the data actually gets to the database. And I'd like to understand the inner workings before I move on to the time-savers. If there is a simple example of how this works in the docs, I've missed it. Thanks. UPDATED: To be clear -- I do know about the ModelForm tool, I'm trying to figure out how to do this without that -- in part so I can better understand what it's doing in the first place. ANSWERED: With the help of the anwers, I arrived at this solution: Form definition: ``` class ThisForm(forms.Form) [various Field assignments] model = ThisModel() ``` Code in views to save entered data to database: ``` if request_method == 'POST': form = ThisForm(request.POST) if form.is_valid(): for key, value in form.cleaned_data.items(): setattr(form.model, key, value) form.model.save(form.model) ``` After this the data entered in the browser form was in the database table. Note that the call of the model's save() method required passage of the model itself as an argument. I have no idea why. CAVEAT: I'm a newbie. This succeeded in getting data from a browser to a database table, but God only knows what I've neglected or missed or outright broken along the way. ModelForm definitely seems like a much cleaner solution.
Back when I first used Forms and Models (without using ModelForm), what I remember doing was checking if the form was valid, which would set your cleaned data, manually moving the data from the form to the model (or whatever other processing you want to do), and then saving the model. As you can tell, this was extremely tedious when your form exactly (or even closely) matches your model. By using the ModelForm (since you said you weren't quite sure how it worked), when you save the ModelForm, it instantiates an object with the form data according to the model spec and then saves that model for you. So all-in-all, the flow of data goes from the HTML form, to the Django Form, to the Django Model, to the DB. Some actual code for your questions: To get the browser form data into the form object: ``` if request.method == 'POST': form = SomeForm(request.POST) if form.is_valid(): model.attr = form.cleaned_data['attr'] model.attr2 = form.cleaned_data['attr2'] model.save() else: form = SomeForm() return render_to_response('page.html', {'form': form, }) ``` In the template page you can do things like this with the form: ``` <form method="POST"> {{ form.as_p }} <input type="submit"/> </form> ``` That's just one example that I pulled from [here](http://docs.djangoproject.com/en/dev/topics/forms/).
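Stripped of Django itself, the manual "move the data from the form to the model" step described above is just an attribute-copy loop. This plain-Python sketch (class and field names invented for the demo) shows the inner mechanic that `ModelForm.save()` automates:

```python
class Record:
    """Stand-in for a model instance; fields start empty."""
    first_name = ""
    last_name = ""

# stand-in for form.cleaned_data after is_valid() has run
cleaned_data = {"first_name": "Ada", "last_name": "Lovelace"}

record = Record()
for key, value in cleaned_data.items():
    # copy each validated field onto the model-like object
    setattr(record, key, value)

print(record.first_name, record.last_name)
```

Once every cleaned field has been copied, calling the model's `save()` is all that remains; `ModelForm` simply does this mapping for you when the form fields match the model fields.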
I'm not sure which class you mean. I know that there was a helper, something like `form_for_model` (I don't really remember the exact name; that was way before 1.0 was released). Right now I'd do it this way: ``` import myproject.myapp.models as models class PhoneNumberForm(forms.ModelForm): class Meta: model = models.PhoneNumber ``` To see the metaclass magic behind it, you'd have to look into the code, as there is a lot to explain :]. The constructor of the form can take an `instance` argument. Passing it will make the form operate on an existing record rather than creating a new one. More info [here](http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#topics-forms-modelforms).
How do I associate input to a Form with a Model in Django?
[ "", "python", "django", "" ]
I would like to be able to trap the double key press (for the Char T for example) in order to do some special processing.I would like the key presses to happen fast enough to not be interpreted as two separate presses, just like the double click. Any ideas how i can achieve this?
When the key(s) are hit, make a note of the time. Then compare it with the time you noted the *last* time the key(s) were hit. If the difference is within your threshold, consider it a double. Otherwise, don't. Rough example: ``` var delta = 500; var lastKeypressTime = 0; function KeyHandler(event) { if ( String.fromCharCode(event.charCode).toUpperCase() == 'T' ) { var thisKeypressTime = new Date(); if ( thisKeypressTime - lastKeypressTime <= delta ) { doDoubleKeypress(); // optional - if we'd rather not detect a triple-press // as a second double-press, reset the timestamp thisKeypressTime = 0; } lastKeypressTime = thisKeypressTime; } } ```
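The timestamp-delta logic is easiest to reason about when separated from the DOM entirely. Here is a language-neutral sketch of it in Python, with timestamps injected as arguments so it can be exercised without a real clock (the class name is made up for this demo):

```python
class DoubleTapDetector:
    """Detects two presses within delta_ms of each other."""

    def __init__(self, delta_ms=500):
        self.delta_ms = delta_ms
        self.last_ms = None  # timestamp of the previous press, if any

    def press(self, now_ms):
        """Return True when this press completes a double-press."""
        is_double = (self.last_ms is not None
                     and now_ms - self.last_ms <= self.delta_ms)
        # reset after a double so a triple isn't counted as two doubles
        self.last_ms = None if is_double else now_ms
        return is_double

d = DoubleTapDetector()
print(d.press(0), d.press(300), d.press(400))
```

In the browser version, `now_ms` would come from `new Date()` inside the key handler; everything else is the same comparison.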
Have a variable (perhaps `first_press`) that you set to true when a keypress event happens, and start a timer that will reset the variable to false after a set amount of time (however fast you want them to press the keys). In your keypress event, if that variable is true then you have a double press. Example: ``` var first_press = false; function key_press() { if(first_press) { // they have already clicked once, we have a double do_double_press(); first_press = false; } else { // this is their first key press first_press = true; // if they don't click again in half a second, reset window.setTimeout(function() { first_press = false; }, 500); } } ```
How to trap double key press in javascript?
[ "", "javascript", "keypress", "" ]
Does anyone have openid working in a PHP 5.3 installation? None of the [libraries](http://wiki.openid.net/Libraries) I've tried seem to be working.
Ok, I finally got to fix the library... I explained everything [here](http://sourcecookbook.com/en/recipes/60/janrain-s-php-openid-library-fixed-for-php-5-3-and-how-i-did-it) (you can also download the php-openid library after my changes). I needed to do what Paul Tarjan suggested but, also, I needed to modify the `Auth_OpenID_detectMathLibrary` and add the `static` keyword to a lot of functions. After that It seems to work perfectly although it is not an ideal solution... I think that someone should rewrite the whole library in PHP 5...
The solution is to remove all of the pass-by-reference elements in php-openid. Change ``` foo(&$param) ``` to ``` foo($param) ``` and the library should work again.
OpenID Library for PHP 5.3
[ "", "php", "openid", "" ]