What is the best practice for Unicode processing in C++?
* Use [ICU](http://www.icu-project.org/) (or a similar library) for dealing with your data.
* In your own data store, make sure everything is stored in the same encoding.
* Make sure you are always using your Unicode library for mundane tasks like string length and capitalization status. Never use standard library builtins like `is_alpha` unless that is the definition you want.
* I can't say it enough: ***never iterate over the indices of a `string` if you care about correctness; always use your Unicode library for this.***
If you don't care about backwards compatibility with previous C++ standards, the current C++11 standard has built-in Unicode support: <http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2011/n3242.pdf> So the truly best practice for Unicode processing in C++ would be to use the built-in facilities for it. That isn't always a possibility with older code bases, though, with the standard being so new at present.

EDIT: To clarify, C++11 is Unicode-aware in that it now has support for Unicode literals and Unicode strings. However, the standard library has only [limited support](http://www.cplusplus.com/reference/codecvt/) for Unicode processing and conversion. For your current needs this may be enough. However, if you need to do a large amount of heavy lifting right now, then you may still need to use something like [ICU](http://www.icu-project.org) for more in-depth processing.

There are some proposals [currently in the works](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3398.html) to include more robust support for text conversion between different encodings. My guess (and hope) is that this will be part of the next [technical report](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1810.html).
Unicode Processing in C++
[ "", "c++", "unicode", "" ]
I'm going to be implementing a `PHP/mySQL` setup to store credit card information. It seems like `AES_ENCRYPT/AES_DECRYPT` is the way to go, **but I'm still confused on one point:** **How do I keep the encryption key secure?** Hardwiring it into my PHP scripts (which will live on the same server as the db) seems like a major security hole. **What's the "best practice" solution here?**
You should think long and hard about whether you REALLY need to keep the CC#. If you don't have a great reason, DON'T! Every other week you hear about some company being compromised and CC#s being stolen. All these companies made a fatal flaw: they kept too much information. Keep the CC# until the transaction clears. After that, delete it.

As far as securing the server goes, the best course of action is to secure the hardware, use the internal system socket to MySQL, and make sure to block any network access to the MySQL server. Make sure you're using both your system permissions and the MySQL permissions to allow as little access as needed. For some scripts, you might consider write-only authentication.

There's really no encryption method that will be foolproof, since you will always need to decrypt and thus must store the key. This is not to say you shouldn't encrypt: you can store your key in one location, and if you detect a system compromise you can destroy the key file and render the data useless.
With MySQL, there are six easy steps you can take to secure your sensitive data:

1. Remove wildcards in the grant tables.
2. Require the use of secure passwords. (Note: use the MySQL `--secure-auth` option to prevent the use of older, less secure MySQL password formats.)
3. Check the permissions of configuration files.
4. Encrypt client-server transmissions.
5. Disable remote access.
6. Actively monitor the MySQL access log.
How do I keep a mySQL database secure?
[ "", "php", "mysql", "security", "aes", "" ]
We have some legacy code that needs to identify in the Page_Load which event caused the postback. At the moment this is implemented by checking the Request data like this:

```
if (Request.Form["__EVENTTARGET"] != null
    && (Request.Form["__EVENTTARGET"].IndexOf("BaseGrid") > -1 // BaseGrid event (e.g. sort)
        || Request.Form["btnSave"] != null)) // Save button
```

This is pretty ugly and breaks if someone renames a control. Is there a better way of doing this? Rewriting each page so that it does not need to check this in Page_Load is not an option at the moment.
This should get you the control that caused the postback: ``` public static Control GetPostBackControl(Page page) { Control control = null; string ctrlname = page.Request.Params.Get("__EVENTTARGET"); if (ctrlname != null && ctrlname != string.Empty) { control = page.FindControl(ctrlname); } else { foreach (string ctl in page.Request.Form) { Control c = page.FindControl(ctl); if (c is System.Web.UI.WebControls.Button) { control = c; break; } } } return control; } ``` Read more about this on this page: <http://ryanfarley.com/blog/archive/2005/03/11/1886.aspx>
In addition to the above code, if the control is an `ImageButton`, add the code below (an ImageButton posts back its click coordinates as `name.x`/`name.y` rather than its name): ``` if (control == null) { for (int i = 0; i < page.Request.Form.Count; i++) { if ((page.Request.Form.Keys[i].EndsWith(".x")) || (page.Request.Form.Keys[i].EndsWith(".y"))) { control = page.FindControl(page.Request.Form.Keys[i].Substring(0, page.Request.Form.Keys[i].Length - 2)); break; } } } ```
How to Identify Postback event in Page_Load
[ "", "c#", "asp.net", "" ]
Boost is meant to be **the** standard non-standard C++ library that every C++ user can use. Is it reasonable to assume it's available for an open source C++ project, or is it a large dependency too far?
Basically your question boils down to "is it reasonable to have [free library xyz] as a dependency for a C++ open source project." Now consider the following quote from Stroustrup and the answer is really a no-brainer: > Without a good library, most interesting tasks are hard to do in > C++; but given a good library, almost any task can be made easy Assuming that this is correct (and in my experience, it is), writing a reasonably-sized C++ project *without* dependencies is downright unreasonable. Developing this argument further, the *one* C++ dependency (apart from system libraries) that can reasonably be expected on a (developer's) client system is the Boost libraries. I *know* that they aren't always present, but it's not an unreasonable assumption for a piece of software to make. If a piece of software can't even rely on Boost, it can't rely on *any* library.
Take a look at <http://www.boost.org/doc/tools.html>. Specifically the *bcp* utility would come in handy if you would like to embed your boost-dependencies into your project. An excerpt from the web site: > "The bcp utility is a tool for extracting subsets of Boost, it's useful for Boost authors who want to distribute their library separately from Boost, and for Boost users who want to distribute a subset of Boost with their application. > > bcp can also report on which parts of Boost your code is dependent on, and what licences are used by those dependencies." Of course this could have some drawbacks - but at least you should be aware of the possibility to do so.
Boost dependency for a C++ open source project?
[ "", "c++", "boost", "standard-library", "" ]
Ok, my web application is at **C:\inetpub\wwwroot\website** The files I want to link to are in **S:\someFolder** Can I make a link in the webapp that will direct to the file in **someFolder**?
If it's on a different drive on the server, you will need to make a [virtual directory](http://www.zerog.com/ianetmanual/IISVDirs_O.html) in IIS. You would then link to "`/virtdirect/somefolder/`"
You would have to specifically map it to some URL through your web server. Otherwise, all your files would be accessible to anyone who guessed their URL and you don't want that...
How can I hyperlink to a file that is not in my Web Application?
[ "", "c#", "asp.net", "" ]
I have a SQL query (MS Access) and I need to add two columns, either of which may be null. For instance: ``` SELECT Column1, Column2, Column3+Column4 AS [Added Values] FROM Table ``` where Column3 or Column4 may be null. In this case, I want null to be considered zero (so `4 + null = 4, null + null = 0`). Any suggestions as to how to accomplish this?
Since ISNULL in Access is a boolean function (one parameter), use it like this: ``` SELECT Column1, Column2, IIF(ISNULL(Column3),0,Column3) + IIF(ISNULL(Column4),0,Column4) AS [Added Values] FROM Table ```
According to [Allen Browne](http://allenbrowne.com/QueryPerfIssue.html), the fastest way is to use `IIF(Column3 Is Null, 0, Column3)`, because both `NZ()` and `ISNULL()` are VBA functions, and calling VBA functions slows down JET queries. I would also add that if you work with linked SQL Server or Oracle tables, the IIF syntax also allows the query to be executed on the server, which is not the case if you use VBA functions.
SQL Null set to Zero for adding
[ "", "sql", "ms-access", "database-design", "" ]
I need to determine if a Class object representing an interface extends another interface, i.e.: ``` package a.b.c.d; public interface IMyInterface extends a.b.d.c.ISomeOtherInterface { } ``` according to [the spec](http://web.archive.org/web/20100705124350/http://java.sun.com:80/j2se/1.4.2/docs/api/java/lang/Class.html) `Class.getSuperclass()` will return null for an interface: > If this Class represents either the > Object class, an interface, a > primitive type, or void, then null is > returned. Therefore the following won't work: ``` Class myInterface = Class.forName("a.b.c.d.IMyInterface"); Class extendedInterface = myInterface.getSuperclass(); if(extendedInterface.getName().equals("a.b.d.c.ISomeOtherInterface")){ //do whatever here } ``` any ideas?
Use Class.getInterfaces such as: ``` Class<?> c; // Your class for(Class<?> i : c.getInterfaces()) { // test if i is your interface } ``` Also the following code might be of help, it will give you a set with all super-classes and interfaces of a certain class: ``` public static Set<Class<?>> getInheritance(Class<?> in) { LinkedHashSet<Class<?>> result = new LinkedHashSet<Class<?>>(); result.add(in); getInheritance(in, result); return result; } /** * Get inheritance of type. * * @param in * @param result */ private static void getInheritance(Class<?> in, Set<Class<?>> result) { Class<?> superclass = getSuperclass(in); if(superclass != null) { result.add(superclass); getInheritance(superclass, result); } getInterfaceInheritance(in, result); } /** * Get interfaces that the type inherits from. * * @param in * @param result */ private static void getInterfaceInheritance(Class<?> in, Set<Class<?>> result) { for(Class<?> c : in.getInterfaces()) { result.add(c); getInterfaceInheritance(c, result); } } /** * Get superclass of class. * * @param in * @return */ private static Class<?> getSuperclass(Class<?> in) { if(in == null) { return null; } if(in.isArray() && in != Object[].class) { Class<?> type = in.getComponentType(); while(type.isArray()) { type = type.getComponentType(); } return type; } return in.getSuperclass(); } ``` Edit: Added some code to get all super-classes and interfaces of a certain class.
``` if (ISomeOtherInterface.class.isAssignableFrom(IMyInterface.class)) ``` is what you want. I always get the ordering backwards at first, but the rule is: `A.isAssignableFrom(B)` is true when `B` is `A` or extends/implements `A`, so the supertype goes on the left. It's roughly the reverse of using `instanceof`: ``` if (myObject instanceof ISomeOtherInterface) ``` is the same kind of check, but you have to have instances of the classes rather than the `Class` objects themselves.
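As a quick sanity check of the ordering, here is a minimal sketch using standard `java.util` interfaces (`List` extends `Collection`); the `extendsOrImplements` helper name is just for illustration:

```java
import java.util.Collection;
import java.util.List;

public class Reflect {
    // True when `child` is the same type as `parent`, or extends/implements it.
    // Note the supertype goes on the LEFT of isAssignableFrom.
    static boolean extendsOrImplements(Class<?> child, Class<?> parent) {
        return parent.isAssignableFrom(child);
    }

    public static void main(String[] args) {
        // List extends Collection, so this is true...
        System.out.println(extendsOrImplements(List.class, Collection.class)); // true
        // ...but not the other way around.
        System.out.println(extendsOrImplements(Collection.class, List.class)); // false
    }
}
```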
Determining the extended interfaces of a Class
[ "", "java", "reflection", "interface", "" ]
I have the following html code: ``` <h3 id="headerid"><span onclick="expandCollapse('headerid')">&uArr;</span>Header title</h3> ``` I would like to toggle between the up arrow and the down arrow each time the user clicks the span tag. ``` function expandCollapse(id) { var arrow = $("#"+id+" span").html(); // I have tried with .text() too if(arrow == "&dArr;") { $("#"+id+" span").html("&uArr;"); } else { $("#"+id+" span").html("&dArr;"); } } ``` My function always takes the else path. If I show the `arrow` variable in a JavaScript alert, I get the html entity rendered as an arrow. How can I tell jQuery to interpret the `arrow` variable as a string and not as html?
When the HTML is parsed, what JQuery sees in the DOM is a `UPWARDS DOUBLE ARROW` ("⇑"), not the entity reference. Thus, in your Javascript code you should test for `"⇑"` or `"\u21d1"`. Also, you need to change what you're switching to: ``` function expandCollapse(id) { var arrow = $("#"+id+" span").html(); if(arrow == "\u21d1") { $("#"+id+" span").html("\u21d3"); } else { $("#"+id+" span").html("\u21d1"); } } ```
If you do an alert of `arrow` what does it return? Does it return the exact string that you're matching against? If you are getting the actual characters `'⇓'` and `'⇑'` you may have to match it against `"\u21D1"` and `"\u21D3"`. Also, you may want to try `&#8657;` and `&#8659;` since not all browsers support those entities. **Update**: here's a fully working example: <http://jsbin.com/edogop/3/edit#html,live> ``` window.expandCollapse = function (id) { var $arrowSpan = $("#" + id + " span"), arrowCharCode = $arrowSpan.text().charCodeAt(0); // 8659 is the unicode value of the html entity if (arrowCharCode === 8659) { $arrowSpan.html("&#8657;"); } else { $arrowSpan.html("&#8659;"); } // one liner: //$("#" + id + " span").html( ($("#" + id + " span").text().charCodeAt(0) === 8659) ? "&#8657;" : "&#8659;" ); }; ```
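To make the code-point comparison concrete, here is a minimal framework-free sketch of the same idea (the `toggleArrow` name is just for illustration): compare the actual character's code, never the `&uArr;`/`&dArr;` entity text, since the entity no longer exists once the HTML is parsed.

```javascript
// Toggle between U+21D1 (uparrow, code 8657) and U+21D3 (downarrow, code 8659).
// `current` is the text currently in the span, i.e. the parsed character.
function toggleArrow(current) {
  return current.charCodeAt(0) === 8659 ? "\u21d1" : "\u21d3";
}

console.log(toggleArrow("\u21d1")); // "⇓"
console.log(toggleArrow("\u21d3")); // "⇑"
```

The jQuery handler then just becomes `$span.text(toggleArrow($span.text()))`.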
How to compare an html entity with jQuery
[ "", "javascript", "jquery", "html-entities", "" ]
I'm creating a small database application to teach myself the following concepts: 1. C# programming 2. .NET 3.5 framework 3. WPF 4. LINQ ORM I want to use Microsoft Access as the database, but I can't seem to find any mention of whether it's possible to use SQLMetal to generate the ORM code from a Microsoft Access database. Does anyone know if this is possible? If not, are there any small or embedded databases I could use? I think SQL Express would be overkill for me at this point.
For an embedded database, you can use [SQL Server Compact Edition](http://www.microsoft.com/sql/editions/compact/default.mspx). Unlike SQL Server Express, it is *not* compatible with the LINQ to SQL designer, but it is fully compatible with the command-line SQLMetal. It has a few advantages over SQL Express, like the ability to run embedded or from a file. Microsoft has a very handy chart outlining the [differences between Express and Compact](http://www.microsoft.com/sql/editions/compact/sscecomparison.mspx).
I don't think SQL Express would be overkill if you want to learn real-world skills - quite the opposite in fact! That'd be my choice, and whatever I chose, I'd stay clear of Access. Good luck
Can you use LINQ tools such as SQLMetal with an access database?
[ "", "c#", "linq-to-sql", "ms-access", "" ]
I have to load a PDF within a page. Ideally I would like to have an animated loading GIF which is replaced once the PDF has loaded.
I'm pretty certain that it cannot be done. Pretty much anything else than PDF works, even Flash. (Tested on Safari, Firefox 3, IE 7) Too bad.
Have you tried: ``` $("#iFrameId").on("load", function () { // do something once the iframe is loaded }); ```
How do I fire an event when a iframe has finished loading in jQuery?
[ "", "javascript", "jquery", "" ]
I am using `setInterval(fname, 10000);` to call a function every 10 seconds in JavaScript. Is it possible to stop calling it on some event? I want the user to be able to stop the repeated refresh of data.
`setInterval()` returns an interval ID, which you can pass to `clearInterval()`: ``` var refreshIntervalId = setInterval(fname, 10000); /* later */ clearInterval(refreshIntervalId); ``` See the docs for [`setInterval()`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setInterval) and [`clearInterval()`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/clearInterval).
If you set the return value of `setInterval` to a variable, you can use `clearInterval` to stop it. ``` var myTimer = setInterval(...); clearInterval(myTimer); ```
Stop setInterval call in JavaScript
[ "", "javascript", "dom-events", "setinterval", "" ]
Is there a built-in method in Python to get an array of all a class' instance variables? For example, if I have this code: ``` class hi: def __init__(self): self.ii = "foo" self.kk = "bar" ``` Is there a way for me to do this: ``` >>> mystery_method(hi) ["ii", "kk"] ``` Edit: I originally had asked for class variables erroneously.
Every object has a `__dict__` attribute containing all of its instance variables and their values. Try this ``` >>> hi_obj = hi() >>> hi_obj.__dict__.keys() ``` Output ``` dict_keys(['ii', 'kk']) ```
Use [vars()](https://docs.python.org/3/library/functions.html#vars) ``` class Foo(object): def __init__(self): self.a = 1 self.b = 2 vars(Foo()) #==> {'a': 1, 'b': 2} vars(Foo()).keys() #==> ['a', 'b'] ```
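One caveat worth knowing: `vars()` (like `__dict__`) lists only *instance* attributes, not class attributes or methods. A small sketch (the `Greeter` class is just for illustration):

```python
class Greeter:
    greeting = "hello"           # class attribute: NOT in vars(instance)

    def __init__(self):
        self.name = "world"      # instance attribute: IS in vars(instance)

    def greet(self):             # method: NOT in vars(instance)
        return f"{self.greeting}, {self.name}"


g = Greeter()
print(sorted(vars(g)))           # ['name'] -- only instance attributes
print(vars(g) == g.__dict__)     # True: vars() just returns __dict__
```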
How to get instance variables in Python?
[ "", "python", "methods", "instance-variables", "" ]
I saw some code like the following in a JSP ``` <c:if test="<%=request.isUserInRole(RoleEnum.USER.getCode())%>"> <li>user</li> </c:if> ``` My confusion is over the "=" that appears in the value of the `test` attribute. My understanding was that anything included within `<%= %>` is printed to the output, but surely the value assigned to test must be a Boolean, so why does this work? For bonus points, is there any way to change the attribute value above such that it does not use scriptlet code? Presumably, that means using EL instead. Cheers, Don
All that the `test` attribute looks for to determine whether something is true is the string "true" (case-insensitive). For example, the following code will print "Hello world!": ``` <c:if test="true">Hello world!</c:if> ``` The code within the `<%= %>` returns a boolean, so it will either print the string "true" or "false", which is exactly what the `<c:if>` tag looks for.
You can also avoid scriptlets entirely with EL; just don't nest double quotes inside the attribute (use single quotes for the string literal): ``` <c:if test="${testObject.testProperty == 'testValue'}">...</c:if> ```
test attribute in JSTL <c:if> tag
[ "", "java", "jsp", "jstl", "" ]
In C#/.NET WPF, during `UserControl.Load`, what is the best way of showing a whirling circle / 'Loading' indicator on the UserControl until it has finished gathering data and rendering its contents?
I generally would create a layout like this: ``` <Grid> <Grid x:Name="MainContent" IsEnabled="False"> ... </Grid> <Grid x:Name="LoadingIndicatorPanel"> ... </Grid> </Grid> ``` Then I load the data on a worker thread, and when it's finished I update the UI under the "MainContent" grid and enable the grid, then set the LoadingIndicatorPanel's Visibility to Collapsed. I'm not sure if this is what you were asking or if you wanted to know how to show an animation in the loading label. If it's the animation you're after, please update your question to be more specific.
This is something that I was working on just recently in order to create a loading animation. This xaml will produce an animated ring of circles. My initial idea was to create an adorner and use this animation as it's content, then to display the loading animation in the adorners layer and grey out the content underneath. Haven't had the chance to finish it yet, so I thought I would just post the animation for your reference. ``` <Window x:Class="WpfApplication2.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300" > <Window.Resources> <Color x:Key="FilledColor" A="255" B="155" R="155" G="155"/> <Color x:Key="UnfilledColor" A="0" B="155" R="155" G="155"/> <Storyboard x:Key="Animation0" FillBehavior="Stop" BeginTime="00:00:00.0" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_00" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation1" BeginTime="00:00:00.2" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_01" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation2" BeginTime="00:00:00.4" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_02" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> 
</ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation3" BeginTime="00:00:00.6" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_03" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation4" BeginTime="00:00:00.8" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_04" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation5" BeginTime="00:00:01.0" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_05" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation6" BeginTime="00:00:01.2" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_06" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> <SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="Animation7" BeginTime="00:00:01.4" RepeatBehavior="Forever"> <ColorAnimationUsingKeyFrames Storyboard.TargetName="_07" Storyboard.TargetProperty="(Shape.Fill).(SolidColorBrush.Color)"> <SplineColorKeyFrame KeyTime="00:00:00.0" Value="{StaticResource FilledColor}"/> 
<SplineColorKeyFrame KeyTime="00:00:01.6" Value="{StaticResource UnfilledColor}"/> </ColorAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Window.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource Animation0}"/> <BeginStoryboard Storyboard="{StaticResource Animation1}"/> <BeginStoryboard Storyboard="{StaticResource Animation2}"/> <BeginStoryboard Storyboard="{StaticResource Animation3}"/> <BeginStoryboard Storyboard="{StaticResource Animation4}"/> <BeginStoryboard Storyboard="{StaticResource Animation5}"/> <BeginStoryboard Storyboard="{StaticResource Animation6}"/> <BeginStoryboard Storyboard="{StaticResource Animation7}"/> </EventTrigger> </Window.Triggers> <Canvas> <Canvas Canvas.Left="21.75" Canvas.Top="14" Height="81.302" Width="80.197"> <Canvas.Resources> <Style TargetType="Ellipse"> <Setter Property="Width" Value="15"/> <Setter Property="Height" Value="15" /> <Setter Property="Fill" Value="#FFFFFFFF" /> </Style> </Canvas.Resources> <Ellipse x:Name="_00" Canvas.Left="24.75" Canvas.Top="50"/> <Ellipse x:Name="_01" Canvas.Top="36" Canvas.Left="29.5"/> <Ellipse x:Name="_02" Canvas.Left="43.5" Canvas.Top="29.75"/> <Ellipse x:Name="_03" Canvas.Left="57.75" Canvas.Top="35.75"/> <Ellipse x:Name="_04" Canvas.Left="63.5" Canvas.Top="49.75" /> <Ellipse x:Name="_05" Canvas.Left="57.75" Canvas.Top="63.5"/> <Ellipse x:Name="_06" Canvas.Left="43.75" Canvas.Top="68.75"/> <Ellipse x:Name="_07" Canvas.Top="63.25" Canvas.Left="30" /> <Ellipse Stroke="{x:Null}" Width="39.5" Height="39.5" Canvas.Left="31.75" Canvas.Top="37" Fill="{x:Null}"/> </Canvas> </Canvas> </Window> ```
What is the best way to display a 'loading' indicator on a WPF control
[ "", "c#", ".net", "wpf", "user-controls", "" ]
I have the following code: ``` String inputFile = "somefile.txt"; FileInputStream in = new FileInputStream(inputFile); FileChannel ch = in.getChannel(); ByteBuffer buf = ByteBuffer.allocateDirect(BUFSIZE); // BUFSIZE = 256 /* read the file into a buffer, 256 bytes at a time */ int rd; while ( (rd = ch.read( buf )) != -1 ) { buf.rewind(); for ( int i = 0; i < rd/2; i++ ) { /* print each character */ System.out.print(buf.getChar()); } buf.clear(); } ``` But the characters get displayed at ?'s. Does this have something to do with Java using Unicode characters? How do I correct this?
You have to know what the encoding of the file is, and then decode the ByteBuffer into a CharBuffer using that encoding. Two details matter: after `read()` you must `flip()` the buffer (set the limit to the bytes just read) before decoding, and `CharBuffer.length()` shrinks as you `get()` characters, so loop with `hasRemaining()`. Assuming the file is ASCII: ``` import java.io.*; import java.nio.*; import java.nio.channels.*; import java.nio.charset.*; public class Buffer { private static final int BUFSIZE = 256; public static void main(String args[]) throws Exception { String inputFile = "somefile"; FileInputStream in = new FileInputStream(inputFile); FileChannel ch = in.getChannel(); ByteBuffer buf = ByteBuffer.allocateDirect(BUFSIZE); Charset cs = Charset.forName("ASCII"); // Or whatever encoding you want /* read the file into a buffer, 256 bytes at a time */ int rd; while ( (rd = ch.read( buf )) != -1 ) { buf.flip(); // limit = bytes read, position = 0 CharBuffer chbuf = cs.decode(buf); while (chbuf.hasRemaining()) { /* print each character */ System.out.print(chbuf.get()); } buf.clear(); } } } ```
buf.getChar() is expecting 2 bytes per character but you are only storing 1. Use: ``` System.out.print((char) buf.get()); ```
Reading an ASCII file with FileChannel and ByteArrays
[ "", "java", "file-io", "io", "arrays", "filechannel", "" ]
What is considered as best practice when it comes to assemblies and releases? I would like to be able to reference multiple versions of the same library - solution contains multiple projects that depend on different versions of a commonutils.dll library we build ourselves. As all dependencies are copied to the bin/debug or bin/release, only a single copy of commonutils.dll can exist there despite each of the DLL files having different assembly version numbers. Should I include version numbers in the assembly name to be able to reference multiple versions of a library or is there another way?
Here's what I've been living by -- It depends on what you are planning to use the DLL files for. I categorize them in two main groups: 1. Dead-end Assemblies. These are EXE files and DLL files you really aren't planning on referencing from anywhere. Just weakly name these and make sure you have the version numbers you release tagged in source-control, so you can rollback whenever. 2. Referenced Assemblies. Strong name these so you can have multiple versions of it being referenced by other assemblies. Use the full name to reference them (Assembly.Load). Keep a copy of the latest-and-greatest version of it in a place where other code can reference it. Next, you have a choice of whether to copy local or not your references. Basically, the tradeoff boils down to -- do you want to take in patches/upgrades from your references? There can be positive value in that from getting new functionality, but on the other hand, there could be breaking changes. The decision here, I believe, should be made on a case-by-case basis. While developing in Visual Studio, by default you will take the latest version to *compile* with, but once compiled the referencing assembly will require the specific version it was compiled with. Your last decision is to Copy Local or not. Basically, if you already have a mechanism in place to deploy the referenced assembly, set this to false. If you are planning a big release management system, you'll probably have to put a lot more thought and care into this. For me (small shop -- two people), this works fine. We know what's going on, and don't feel restrained from *having* to do things in a way that doesn't make sense. Once you reach runtime, you Assembly.Load whatever you want into the [application domain](http://en.wikipedia.org/wiki/Application_Domain). Then, you can use Assembly.GetType to reach the type you want. 
If you have a type that is present in multiple loaded assemblies (such as in multiple versions of the same project), you may get an [AmbiguousMatchException](http://msdn.microsoft.com/en-us/library/system.reflection.ambiguousmatchexception.aspx). In order to resolve that, you will need to get the type out of a specific assembly instance, not via the static Assembly.GetType method.
Assemblies can coexist in the GAC (Global Assembly Cache) even if they have the same name, given that their versions differ. This is how the assemblies shipped with the .NET Framework work. A requirement that must be met for an assembly to be registered in the GAC is that it is signed. Adding version numbers to the name of the assembly just defeats the whole purpose of the assembly ecosystem and is cumbersome, IMHO. To know which version of a given assembly I have, I just open the Properties window and check the version.
Assembly names and versions
[ "", "c#", "assemblies", "naming-conventions", "" ]
Is it possible to use DateTimePicker (Winforms) to pick both date and time (in the dropdown)? How do you change the custom display of the picked value? Also, is it possible to enable the user to type the date/time manually?
Set the Format to Custom and then specify the format: ``` dateTimePicker1.Format = DateTimePickerFormat.Custom; dateTimePicker1.CustomFormat = "MM/dd/yyyy hh:mm:ss"; ``` or however you want to lay it out. You could then type in directly the date/time. If you use MMM, you'll need to use the numeric value for the month for entry, unless you write some code yourself for that (e.g., 5 results in May) Don't know about the picker for date and time together. Sounds like a custom control to me.
It is best to use two DateTimePickers for the job: one will keep the default format for the date portion, and the second DateTimePicker is for the time portion. Format the second DateTimePicker as follows. ``` timePortionDateTimePicker.Format = DateTimePickerFormat.Time; timePortionDateTimePicker.ShowUpDown = true; ``` The two should look like this after you set them up ![Two Date Time Pickers](https://i.stack.imgur.com/x94Oa.jpg) To get the DateTime from both these controls use the following code ``` DateTime myDate = datePortionDateTimePicker.Value.Date + timePortionDateTimePicker.Value.TimeOfDay; ``` To assign the DateTime to both these controls use the following code (note that `Value` expects a `DateTime`, not a `TimeSpan`; the Time format makes the second picker display only the time portion) ``` datePortionDateTimePicker.Value = myDate.Date; timePortionDateTimePicker.Value = myDate; ```
DateTimePicker: pick both date and time
[ "", "c#", ".net", "winforms", "datetimepicker", "" ]
I'm trying to determine the best way to truncate or drop extra decimal places in SQL without rounding. For example: ``` declare @value decimal(18,2) set @value = 123.456 ``` This will automatically round `@value` to be `123.46`, which is good in most cases. However, for this project, I don't need that. Is there a simple way to truncate the decimals I don't need? I know I can use the `left()` function and convert back to a decimal. Are there any other ways?
You will need to provide 3 numbers to the ROUND function. 1. number **Required. The number to be rounded** 2. decimals **Required. The number of decimal places to round number to** 3. operation *Optional. If 0, it rounds the result to the number of decimals. If another value than 0, it truncates the result to the number of decimals. Default value is 0* Example: ``` select round(123.456, 2, 1) ``` Works in: * SQL Server (starting with 2008), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse *Additional Info: <https://www.w3schools.com/sql/func_sqlserver_round.asp>*
``` ROUND ( 123.456 , 2 , 1 ) ``` [When the third parameter **!= 0** it truncates rather than rounds.](https://learn.microsoft.com/sql/t-sql/functions/round-transact-sql#arguments) **Syntax** ``` ROUND ( numeric_expression , length [ ,function ] ) ``` **Arguments** * `numeric_expression` Is an expression of the exact numeric or approximate numeric data type category, except for the bit data type. * `length` Is the precision to which numeric\_expression is to be rounded. length must be an expression of type tinyint, smallint, or int. When length is a positive number, numeric\_expression is rounded to the number of decimal positions specified by length. When length is a negative number, numeric\_expression is rounded on the left side of the decimal point, as specified by length. * `function` Is the type of operation to perform. function must be tinyint, smallint, or int. When function is omitted or has a value of 0 (default), numeric\_expression is rounded. When a value other than 0 is specified, numeric\_expression is truncated.
Truncate (not round) decimal places in SQL Server
[ "", "sql", "sql-server", "t-sql", "rounding", "" ]
Take the following snippet: ``` List<int> distances = new List<int>(); ``` Was the redundancy intended by the language designers? If so, why?
The reason the code appears to be redundant is because, to a novice programmer, it appears to be defining the same thing twice. But this is not what the code is doing. It is defining two separate things that just happen to be of the same type. It is defining the following: 1. A variable named distances of type `List<int>`. 2. An object on the heap of type `List<int>`. Consider the following: ``` Person[] coworkers = new Employee[20]; ``` Here the non-redundancy is clearer, because the variable and the allocated object are of two different types (a situation that is legal if the object’s type derives from or implements the variable’s type).
What's redundant about this? ``` List<int> listOfInts = new List<int>(); ``` Translated to English: (EDIT, cleaned up a little for clarification) * Create a pointer of type List<int> and name it listOfInts. * listOfInts is now created but it's just a reference pointer pointing to nowhere (null) * Now, create an object of type List<int> on the heap, and return the pointer to listOfInts. * Now listOfInts points to a List<int> on the heap. Not really verbose when you think about what it does. Of course there is an alternative: ``` var listOfInts = new List<int>(); ``` Here we are using C#'s type inference; because you are assigning to it immediately, C# can figure out what type you want to create from the object just created on the heap. To fully understand how the CLR handles types, I recommend reading [CLR Via C#](http://www.microsoft.com/MSPress/books/6522.aspx).
Redundancy in C#?
[ "", "c#", ".net", "generics", "programming-languages", "" ]
I'm trying to draw a polygon using c# and directx All I get is an ordered list of points from a file and I need to draw the flat polygon in a 3d world. I can load the points and draw a convex shape using a trianglefan and drawuserprimitives. This obviously leads to incorrect results when the polygon is very concave (which it may be). I can't imagine I'm the only person to grapple with this problem (tho I'm a gfx/directx neophyte - my background is in gui\windows application development). Can anyone point me towards a simple to follow resource\tutorial\algorithm which may assist me?
Direct3D can only draw triangles (well, it can draw lines and points as well, but that's besides the point). So if you want to draw any shape that is more complex than a triangle, you have to draw a bunch of touching triangles that equal to that shape. In your case, it's a concave polygon triangulation problem. Given a bunch of vertices, you can keep them as is, you just need to compute the "index buffer" (in simplest case, three indices per triangle that say which vertices the triangle uses). Then draw that by putting into vertex/index buffers or using DrawUserPrimitives. Some algorithms for triangulating simple (convex or concave, but without self-intersections or holes) polygons are at [VTerrain site](http://www.vterrain.org/Implementation/Libs/triangulate.html). I have used Ratcliff's code in the past; very simple and works well. VTerrain has a dead link to it; the code can be found [here](http://www.flipcode.com/archives/Efficient_Polygon_Triangulation.shtml). It's C++, but porting that over to C# should be straightforward. Oh, and don't use triangle fans. They are of very limited use, inefficient and are going away soon (e.g. Direct3D 10 does not support them anymore). Just use triangle lists.
If you are able to use the stencil buffer, it should not be hard to do. Here's a general algorithm: ``` Clear the stencil buffer to 1. Pick an arbitrary vertex v0, probably somewhere near the polygon to reduce floating-point errors. For each vertex v[i] of the polygon in clockwise order: let s be the segment v[i]->v[i+1] (where i+1 will wrap to 0 when the last vertex is reached) if v0 is to the "right" of s: draw a triangle defined by v0, v[i], v[i+1] that adds 1 to the stencil buffer else draw a triangle defined by v0, v[i], v[i+1] that subtracts 1 from the stencil buffer end for fill the screen with the desired color/texture, testing for stencil buffer values >= 2. ``` By "right of s" I mean from the perspective of someone standing on v[i] and facing v[i+1]. This can be tested by using a cross product: cross(v0 - v[i], v[i+1] - v[i]) > 0
Is there an efficient\easy way to draw a concave polygon in Direct3d
[ "", "c#", "directx", "polygon", "concave", "" ]
I'm a little confused about how the standard library will behave now that Python (from 3.0) is unicode-based. Will modules such as CGI and urllib use unicode strings or will they use the new 'bytes' type and just provide encoded data?
Logically a lot of things like MIME-encoded mail messages, URLs, XML documents, and so on should be returned as `bytes` not strings. This could cause some consternation as the libraries start to be nailed down for Python 3 and people discover that they have to be more aware of the `bytes`/`string` conversions than they were for `str`/`unicode` ...
One of the great things about this question (and Python in general) is that you can just mess around in the interpreter! [Python 3.0 rc1 is currently available for download](http://www.python.org/download/releases/3.0/). ``` >>> import urllib.request >>> fh = urllib.request.urlopen('http://www.python.org/') >>> print(type(fh.read(100))) <class 'bytes'> ```
Will everything in the standard library treat strings as unicode in Python 3.0?
[ "", "python", "unicode", "string", "cgi", "python-3.x", "" ]
How can you enumerate an `enum` in C#? E.g. the following code does not compile: ``` public enum Suit { Spades, Hearts, Clubs, Diamonds } public void EnumerateAllSuitsDemoMethod() { foreach (Suit suit in Suit) { DoSomething(suit); } } ``` And it gives the following compile-time error: > 'Suit' is a 'type' but is used like a 'variable' It fails on the `Suit` keyword, the second one.
**Update:** *If you're using .NET 5 or newer, use [this solution](https://stackoverflow.com/questions/105372/how-to-enumerate-an-enum#65103244).* ``` foreach (Suit suit in (Suit[]) Enum.GetValues(typeof(Suit))) { } ``` **Note**: The cast to `(Suit[])` is not strictly necessary, [but it does make the code 0.5 ns faster](https://gist.github.com/bartoszkp/9e059c3edccc07a5e588#gistcomment-2625454).
It looks to me like you really want to print out the names of each enum, rather than the values. In which case `Enum.GetNames()` seems to be the right approach. ``` public enum Suits { Spades, Hearts, Clubs, Diamonds, NumSuits } public void PrintAllSuits() { foreach (string name in Enum.GetNames(typeof(Suits))) { System.Console.WriteLine(name); } } ``` By the way, incrementing the value is not a good way to enumerate the values of an enum. Instead, use `Enum.GetValues(typeof(Suits))`: ``` public enum Suits { Spades, Hearts, Clubs, Diamonds, NumSuits } public void PrintAllSuits() { foreach (var suit in Enum.GetValues(typeof(Suits))) { System.Console.WriteLine(suit.ToString()); } } ```
How to enumerate an enum?
[ "", "c#", ".net", "loops", "enums", "enumeration", "" ]
Does somebody know a Java library which serializes a Java object hierarchy into Java code which generates this object hierarchy? Like Object/XML serialization, only that the output format is not binary/XML but Java code.
I am not aware of any libraries that will do this out of the box, but you should be able to take one of the many object-to-XML serialisation libraries and customise the backend code to generate Java. It would probably not be much code. For example, a quick Google search turned up [XStream](http://xstream.codehaus.org/). I've never used it but it seems to support multiple backends other than XML - e.g. JSON. You can implement your own writer and just write out the Java code needed to recreate the hierarchy. I'm sure you could do the same with other libraries, in particular if you can hook into a SAX event stream. See: [HierarchicalStreamWriter](http://xstream.codehaus.org/javadoc/com/thoughtworks/xstream/io/HierarchicalStreamWriter.html)
Serialised data represents the internal data of objects. There isn't enough information to work out what methods you would need to call on the objects to reproduce the internal state. There are two obvious approaches: * Encode the serialised data in a literal String and deserialise that. * Use java.beans XML persistence, which should be easy enough to process with your favourite XML->Java source technique.
Serialize Java objects into Java code
[ "", "java", "" ]
`mysql_real_escape_string` and `addslashes` are both used to escape data before the database query, so what's the difference? (This question is not about parametrized queries/PDO/mysqli)
> `string mysql_real_escape_string ( string $unescaped_string [, resource $link_identifier ] )` > `mysql_real_escape_string()` calls MySQL's library function mysql\_real\_escape\_string, which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a. > `string addslashes ( string $str )` > Returns a string with backslashes before characters that need to be quoted in database queries etc. These characters are single quote ('), double quote ("), backslash (\) and NUL (the NULL byte). They affect different characters. `mysql_real_escape_string` is specific to MySQL. Addslashes is just a general function which may apply to other things as well as MySQL.
`mysql_real_escape_string()` has the added benefit of escaping text input correctly with respect to the character set of a database through the optional *link\_identifier* parameter. Character set awareness is a critical distinction. `addslashes()` will add a slash before every eight bit binary representation of each character to be escaped. If you're using some form of multibyte character set it's possible, although probably only through poor design of the character set, that one or both halves of a sixteen or thirty-two bit character representation is identical to the eight bits of a character `addslashes()` would add a slash to. In such cases you might get a slash added before a character that should not be escaped or, worse still, you might get a slash in the middle of a sixteen (or thirty-two) bit character which would corrupt the data. If you need to escape content in database queries you should always use `mysql_real_escape_string()` where possible. `addslashes()` is fine if you're sure the database or table is using 7 or 8 bit ASCII encoding only.
What is the difference between mysql_real_escape_string and addslashes?
[ "", "php", "" ]
I am writing a program which has two panes (via `CSplitter`), however I am having problems figuring out out to resize the controls in each frame. For simplicity, can someone tell me how I would do it for a basic frame with a single `CEdit` control? I'm fairly sure it is to do with the `CEdit::OnSize()` function... But I'm not really getting anywhere... Thanks! :)
When your frame receives an OnSize message it will give you the new width and height - you can simply call the CEdit SetWindowPos method passing it these values. Assume CMyPane is your splitter pane and it contains a CEdit you created in OnCreate called m\_wndEdit: ``` void CMyPane::OnSize(UINT nType, int cx, int cy) { m_wndEdit.SetWindowPos(NULL, 0, 0, cx, cy, SWP_NOMOVE | SWP_NOACTIVATE | SWP_NOZORDER); } ```
A window receives WM\_SIZE message (which is processed by OnSize handler in MFC) immediately after it was resized, so CEdit::OnSize is not what you are looking for. You should add OnSize handler in your frame class and inside this handler as Rob [pointed out](https://stackoverflow.com/questions/127514/resizing-controls-in-mfc#127520) you'll get width and height of the client area of your frame, then you should add the code which adjusts size and position of your control. Something like this ``` void MyFrame::OnSize(UINT nType, int w, int h) { // w and h parameters are new width and height of your frame // suppose you have member variable CEdit myEdit which you need to resize/move myEdit.MoveWindow(w/5, h/5, w/2, h/2); } ```
Resizing Controls in MFC
[ "", "c++", "mfc", "" ]
Dear Stack Overflow, can you show me an example of how to use a QScrollBar? Thanks.
It will come down to using the QScrollArea; it is a widget that implements showing something that is larger than the available space. You will not need to use QScrollBar directly. I don't have a PyQt example, but there is a C++ example in the Qt distribution called the "Image Viewer". The object hierarchy will still be the same.
``` >>> import sys >>> from PyQt4 import QtCore, QtGui >>> app = QtGui.QApplication(sys.argv) >>> sb = QtGui.QScrollBar() >>> sb.setMinimum(0) >>> sb.setMaximum(100) >>> def on_slider_moved(value): print "new slider position: %i" % (value, ) >>> sb.connect(sb, QtCore.SIGNAL("sliderMoved(int)"), on_slider_moved) >>> sb.show() >>> app.exec_() ``` Now, when you move the slider (you might have to resize the window), you'll see the slider position printed to the terminal as you drag the handle.
PyQt - QScrollBar
[ "", "python", "pyqt", "" ]
The [Sun Documentation for DataInput.skipBytes](http://java.sun.com/j2se/1.4.2/docs/api/java/io/DataInput.html#skipBytes(int)) states that it "makes an attempt to skip over n bytes of data from the input stream, discarding the skipped bytes. However, it may skip over some smaller number of bytes, possibly zero. This may result from any of a number of conditions; reaching end of file before n bytes have been skipped is only one possibility." 1. Other than reaching end of file, why might `skipBytes()` not skip the right number of bytes? (The `DataInputStream` I am using will either be wrapping a `FileInputStream` or a `PipedInputStream`.) 2. If I definitely want to skip n bytes and throw an `EOFException` if this causes me to go to the end of the file, should I use `readFully()` and ignore the resulting byte array? Or is there a better way?
1) There might not be that much data available to read (the other end of the pipe might not have sent that much data yet), and the implementing class might be non-blocking (i.e. it will just return what it can, rather than waiting for enough data to fulfil the request). I don't know if any implementations actually behave in this way, however, but the interface is designed to permit it. Another option is simply that the file gets closed part-way through the read. 2) Either readFully() (which will always wait for enough input or else fail) or call skipBytes() in a loop. I think the former is probably better, unless the array is truly vast.
I came across this problem today. It was reading off a network connection on a virtual machine so I imagine there could be a number of reasons for this happening. I solved it by simply forcing the input stream to skip bytes until it had skipped the number of bytes I wanted it to: ``` int byteOffsetX = someNumber; //n bytes to skip int nSkipped = 0; nSkipped = in.skipBytes(byteOffsetX); while (nSkipped < byteOffsetX) { nSkipped = nSkipped + in.skipBytes(byteOffsetX - nSkipped); } ```
When can DataInputStream.skipBytes(n) not skip n bytes?
[ "", "io", "java", "" ]
The following will cause infinite recursion on the == operator overload method ``` Foo foo1 = null; Foo foo2 = new Foo(); Assert.IsFalse(foo1 == foo2); public static bool operator ==(Foo foo1, Foo foo2) { if (foo1 == null) return foo2 == null; return foo1.Equals(foo2); } ``` How do I check for nulls?
Use `ReferenceEquals`: ``` Foo foo1 = null; Foo foo2 = new Foo(); Assert.IsFalse(foo1 == foo2); public static bool operator ==(Foo foo1, Foo foo2) { if (object.ReferenceEquals(null, foo1)) return object.ReferenceEquals(null, foo2); return foo1.Equals(foo2); } ```
Cast to object in the overload method: ``` public static bool operator ==(Foo foo1, Foo foo2) { if ((object) foo1 == null) return (object) foo2 == null; return foo1.Equals(foo2); } ```
How do I check for nulls in an '==' operator overload without infinite recursion?
[ "", "c#", ".net", "operator-overloading", "" ]
So I'm writing a framework on which I want to base a few apps that I'm working on (the framework is there so I have an environment to work with, and a system that will let me, for example, use a single sign-on) I want to make this framework, and the apps it has use a Resource Oriented Architecture. Now, I want to create a URL routing class that is expandable by APP writers (and possibly also by CMS App users, but that's WAYYYY ahead in the future) and I'm trying to figure out the best way to do it by looking at how other apps do it.
I prefer to use reg ex over making my own format since it is common knowledge. I wrote a small class that I use which allows me to nest these reg ex routing tables. I used to use something similar that was implemented by inheritance but it didn't need inheritance so I rewrote it. I do a reg ex on a key and map to my own control string. Take the below example. I visit `/api/related/joe` and my router class creates a new object `ApiController` and calls its method `relatedDocuments(array('tags' => 'joe'));` ``` // the 12 strips the subdirectory my app is running in $index = urldecode(substr($_SERVER["REQUEST_URI"], 12)); Route::process($index, array( "#^api/related/(.*)$#Di" => "ApiController/relatedDocuments/tags", "#^thread/(.*)/post$#Di" => "ThreadController/post/title", "#^thread/(.*)/reply$#Di" => "ThreadController/reply/title", "#^thread/(.*)$#Di" => "ThreadController/thread/title", "#^ajax/tag/(.*)/(.*)$#Di" => "TagController/add/id/tags", "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id", "#^ajax/reply/(.*)$#Di" => "ArticleController/newReply/id", "#^ajax/toggle/(.*)$#Di" => "ApiController/toggle/toggle", "#^$#Di" => "HomeController", )); ``` In order to keep errors down and simplicity up you can subdivide your table. This way you can put the routing table into the class that it controls. Taking the above example you can combine the three thread calls into a single one. ``` Route::process($index, array( "#^api/related/(.*)$#Di" => "ApiController/relatedDocuments/tags", "#^thread/(.*)$#Di" => "ThreadController/route/uri", "#^ajax/tag/(.*)/(.*)$#Di" => "TagController/add/id/tags", "#^ajax/reply/(.*)/post$#Di"=> "ThreadController/ajaxPost/id", "#^ajax/reply/(.*)$#Di" => "ArticleController/newReply/id", "#^ajax/toggle/(.*)$#Di" => "ApiController/toggle/toggle", "#^$#Di" => "HomeController", )); ``` Then you define ThreadController::route to be like this. ``` function route($args) { Route::process($args['uri'], array( "#^(.*)/post$#Di" => "ThreadController/post/title", "#^(.*)/reply$#Di" => "ThreadController/reply/title", "#^(.*)$#Di" => "ThreadController/thread/title", )); } ``` Also you can define whatever defaults you want for your routing string on the right. Just don't forget to document them or you will confuse people. I'm currently calling index if you don't include a function name on the right. [Here](http://pastie.org/278748) is my current code. You may want to change it to handle errors how you like and/or default actions.
Yet another framework? -- anyway... The trick is with routing is to pass it all over to your routing controller. You'd probably want to use something similar to what I've documented here: <http://www.hm2k.com/posts/friendly-urls> The second solution allows you to use URLs similar to Zend Framework.
PHP Application URL Routing
[ "", "php", "url", "routes", "url-routing", "" ]
Considering "private" is the default access modifier for class members, why is the keyword even needed?
It's for you (and future maintainers), not the compiler.
There's a certain amount of misinformation here: > "The default access modifier is not private but internal" Well, that depends on what you're talking about. For members of a type, it's private. For top-level types themselves, it's internal. > "Private is only the default for *methods* on a type" No, it's the default for *all members* of a type - properties, events, fields, operators, constructors, methods, nested types and anything else I've forgotten. > "Actually, if the class or struct is not declared with an access modifier it defaults to internal" Only for top-level types. For nested types, it's private. Other than for restricting property access for one part but not the other, the default is basically always "as restrictive as can be." Personally, I dither on the issue of whether to be explicit. The "pro" for using the default is that it highlights anywhere that you're making something more visible than the most restrictive level. The "pro" for explicitly specifying it is that it's more obvious to those who don't know the above rule, and it shows that you've thought about it a bit. Eric Lippert goes with the explicit form, and I'm starting to lean that way too. See [http://csharpindepth.com/viewnote.aspx?noteid=54](http://web.archive.org/web/20160307023117/http://csharpindepth.com/viewnote.aspx?noteid=54) for a little bit more on this.
What does the "private" modifier do?
[ "", "c#", ".net", "private", "access-modifiers", "private-members", "" ]
We have a couple of developers asking for `allow_url_fopen` to be enabled on our server. What's the norm these days and if `libcurl` is enabled is there really any good reason to allow? Environment is: Windows 2003, PHP 5.2.6, FastCGI
You definitely want `allow_url_include` set to Off, which mitigates many of the risks of `allow_url_fopen` as well. But because not all versions of PHP have `allow_url_include`, best practice for many is to turn off fopen. Like with all features, the reality is that if you don't need it for your application, disable it. If you do need it, the curl module probably can do it better, and refactoring your application to use curl to disable `allow_url_fopen` may deter the least determined cracker.
I think the answer comes down to how well you trust your developers to use the feature responsibly? Data from a external URL should be treated like any other untrusted input and as long as that is understood, what's the big deal? The way I see it is that if you treat your developers like children and never let them handle sharp things, then you'll have developers who never learn the responsibility of writing secure code.
Should I allow 'allow_url_fopen' in PHP?
[ "", "php", "configuration", "" ]
Is there any way to edit column names in a DataGridView?
I don't think there is a way to do it without writing custom code. I'd implement a ColumnHeaderDoubleClick event handler, and create a TextBox control right on top of the column header.
You can also change the column name by using: ``` myDataGrid.Columns[0].HeaderText = "My Header" ``` but the `myDataGrid` will need to have been bound to a `DataSource`.
DataGridView Edit Column Names
[ "", "c#", "winforms", "datagridview", "" ]
What is the difference between Views and Materialized Views in Oracle?
Materialized views are disk based and are updated periodically based upon the query definition. Views are virtual only and run the query definition each time they are accessed.
# Views They evaluate the data in the tables underlying the view definition **at the time the view is queried**. It is a logical view of your tables, with no data stored anywhere else. The upside of a view is that it will **always return the latest data to you**. The **downside of a view is that its performance** depends on how good a select statement the view is based on. If the select statement used by the view joins many tables, or uses joins based on non-indexed columns, the view could perform poorly. # Materialized views They are similar to regular views, in that they are a logical view of your data (based on a select statement), however, the **underlying query result set has been saved to a table**. The upside of this is that when you query a materialized view, **you are querying a table**, which may also be indexed. In addition, because all the joins have been resolved at materialized view refresh time, you pay the price of the join once (or as often as you refresh your materialized view), rather than each time you select from the materialized view. In addition, with query rewrite enabled, Oracle can optimize a query that selects from the source of your materialized view in such a way that it instead reads from your materialized view. In situations where you create materialized views as forms of aggregate tables, or as copies of frequently executed queries, this can greatly speed up the response time of your end user application. The **downside though is that the data you get back from the materialized view is only as up to date as the last time the materialized view has been refreshed**. --- Materialized views can be set to refresh manually, on a set schedule, or *based on the database detecting a change in data from one of the underlying tables*. Materialized views can be incrementally updated by combining them with materialized view logs, which **act as change data capture sources** on the underlying tables. Materialized views are most often used in data warehousing / business intelligence applications where querying large fact tables with thousands of millions of rows would result in query response times that resulted in an unusable application. --- Materialized views also help to guarantee a consistent moment in time, similar to [snapshot isolation](https://en.wikipedia.org/wiki/Snapshot_isolation).
What is the difference between Views and Materialized Views in Oracle?
[ "", "sql", "oracle", "view", "relational-database", "materialized-views", "" ]
The purpose of using a Javascript proxy for the Web Service using a service reference with Script Manager is to avoid a page load. If the information being retrieved is potentially sensitive, is there a way to secure this web service call other than using SSL?
If you're worried about other people accessing your web service directly, you could check the calling IP address and host header and make sure they match expected IP addresses. If you're worried about people stealing information during its journey from the server to the client, SSL is the only way to go.
I would use SSL; it would also depend, I suppose, on how sensitive your information is.
Is there a good way of securing an ASP.Net web service call made via Javascript on the click event handler of an HTML button?
[ "", "asp.net", "javascript", "service", "security", "" ]
If I have a table in my database called 'Users', there will be a class generated by LINQtoSQL called 'User' with an already declared empty constructor. What is the best practice if I want to override this constructor and add my own logic to it?
The default constructor which is generated by the O/R-Designer, calls a partial function called `OnCreated` - so the best practice is not to override the default constructor, but instead implement the partial function `OnCreated` in `MyDataClasses.cs` to initialize items: ``` partial void OnCreated() { Name = ""; } ``` If you are implementing other constructors, always take care to call the default constructor so the classes will be initialized properly - for example entitysets (relations) are constructed in the default constructor.
It doesn't look like you can override the empty constructor. Instead, I would create a method that performs the functionality that you need in the empty constructor and returns the new object. ``` // Add new partial class to extend functionality public partial class User { // Add additional constructor public User(int id) { ID = id; } // Add static method to initialize new object public static User GetNewUser() { // functionality User user = new User(); user.Name = "NewName"; return user; } } ``` Then elsewhere in your code, instead of using the default empty constructor, do one of the following: ``` User user1 = new User(1); User user2 = User.GetNewUser(); ```
Is there a way to override the empty constructor in a class generated by LINQtoSQL?
[ "", "c#", ".net", "linq", "linq-to-sql", "" ]
Newer ARM processors include the PLD and PLI instructions. I'm writing tight inner loops (in C++) which have a non-sequential memory access pattern, but a pattern that naturally my code fully understands. I would anticipate a substantial speedup if I could prefetch the next location whilst processing the current memory location, and I would expect this to be quick-enough to try out to be worth the experiment! I'm using new expensive compilers from ARM, and it doesn't seem to be including PLD instructions anywhere, let alone in this particular loop that I care about. How can I include explicit prefetch instructions in my C++ code?
There should be some compiler-specific features. There is no standard way to do it for C/C++. Check out your compiler's reference guide. For the RealView Compiler see [this](http://www.keil.com/support/man/docs/armccref/armccref_cjagadac.htm) or [this](http://www.keil.com/support/man/docs/armccref/armccref_cjacagfi.htm).
If you are trying to extract truly maximum performance from these loops, then I would recommend writing the entire looping construct in assembler. You should be able to use inline assembly depending on the data structures involved in your loop. Even better if you can unroll any piece of your loop (like the parts involved in making the access non-sequential).
Prefetch instructions on ARM
[ "", "c++", "arm", "assembly", "" ]
I'd like my application to have a full-screen mode. What is the easiest way to do this, do I need a third party library for this or is there something in the JDK that already offers this?
Try the [Full-Screen Exclusive Mode API](http://java.sun.com/docs/books/tutorial/extra/fullscreen/index.html). It was introduced in the JDK in release 1.4. Some of the features include: > * **Full-Screen Exclusive Mode** - allows you to suspend the windowing system so that drawing can be done directly to the screen. > * **Display Mode** - composed of the size (width and height of the monitor, in pixels), bit depth (number of bits per pixel), and refresh rate (how frequently the monitor updates itself). > * **Passive vs. Active Rendering** - painting while on the main event loop using the paint method is passive, whereas rendering in your own thread is active. > * **Double Buffering and Page Flipping** - Smoother drawing means better perceived performance and a much better user experience. > * **BufferStrategy and BufferCapabilities** - classes that allow you to draw to surfaces and components without having to know the number of buffers used or the technique used to display them, and help you determine the capabilities of your graphics device. There are several full-screen exclusive mode examples in the linked tutorial.
JFrame `setUndecorated(true)` method
How to program a full-screen mode in Java?
[ "", "java", "graphics", "fullscreen", "" ]
I have an installation program (just a regular C++ MFC program, not Windows Installer based) that needs to set some registry values based on the type of Internet connection: broadband, dialup, and/or wireless. Right now this information is being determined by asking a series of yes or no questions. The problem is that the person doing the installations is not the same person that owns and uses the computer, so they're not always sure what the answers to these questions should be. Is there a way to programmatically determine any of this information? The code is written in C++ (and optionally MFC) for Windows XP and up. .NET-based solutions are not an option because I don't want to have to determine if the framework is installed before our installation program can run. To clarify, the issue is mainly that wireless and dialup connections are not "always-on", which creates a need for our product to behave a different way because our server is not always available. So a strictly speed-measuring solution wouldn't help, though there is a setting that's speed dependent so that the product doesn't try to send MB of information through a dialup connection as soon as it connects.
Use InternetGetConnectedState API to retrieve internet connection state. I tested it and it works fine. I found this document which can help: <http://www.pcausa.com/resources/InetActive.txt>
[I have no idea how to get exactly the information you asked for, but...] Maybe you could rephrase (for yourself) what you try to accomplish? Like, instead of asking "does the user have broadband or dialup", ask "how much bandwidth does the user's internet connection have" - and then you can try to answer the rephrased question without any user input (like by measuring bandwidth). Btw. if you ask the user just for "broadband or dialup", you might encounter some problems: * what if the user has some connection type you didn't anticipate? * what if the user doesn't know (because there's just an ethernet cable going to a PPPoE DSL modem/router)? * what if the user is connected through a series of connections (VPN via dialup, to some other network which has broadband?) Asking for "capabilities" instead of "type" might be more useful in those cases.
How do you detect dialup, broadband or wireless Internet connections in C++ for Windows?
[ "", "c++", "mfc", "windows-xp", "broadband", "" ]
For example, if I declare a long variable, can I assume it will always be aligned on a "sizeof(long)" boundary? Microsoft Visual C++ online help says so, but is it standard behavior? Some more info:

a. It is possible to explicitly create a misaligned integer (\*bar):

> char foo[5]
>
> int \* bar = (int \*)(&foo[1]);

b. Apparently, #pragma pack() only affects structures, classes, and unions.

c. MSVC documentation states that POD types are aligned to their respective sizes (but is that always, or only by default? And is it standard behavior? I don't know).
As others have mentioned, this isn't part of the standard and is left up to the compiler to implement as it sees fit for the processor in question. For example, VC could easily implement different alignment requirements for an ARM processor than it does for x86 processors. Microsoft VC implements what is basically called natural alignment up to the size specified by the #pragma pack directive or the /Zp command line option. This means that, for example, any POD type with a size smaller or equal to 8 bytes will be aligned based on its size. Anything larger will be aligned on an 8 byte boundary. If it is important that you control alignment for different processors and different compilers, then you can use a packing size of 1 and pad your structures. ``` #pragma pack(push) #pragma pack(1) struct Example { short data1; // offset 0 short padding1; // offset 2 long data2; // offset 4 }; #pragma pack(pop) ``` In this code, the `padding1` variable exists only to make sure that data2 is naturally aligned. Answer to a: Yes, that can easily cause misaligned data. On an x86 processor, this doesn't really hurt much at all. On other processors, this can result in a crash or a very slow execution. For example, the Alpha processor would throw a processor exception which would be caught by the OS. The OS would then inspect the instruction and then do the work needed to handle the misaligned data. Then execution continues. The `__unaligned` keyword can be used in VC to mark unaligned access for non-x86 programs (i.e. for CE).
By default, yes. However, it can be changed via the pack() #pragma. I don't believe the C++ Standard make any requirement in this regard, and leaves it up to the implementation.
Are POD types always aligned?
[ "", "c++", "c", "visual-c++", "" ]
I'm looking for a way to set the default language for visitors comming to a site built in EPiServer for the first time. Not just administrators/editors in the backend, people comming to the public site.
Depends on your setup. If the site language is to change depending on the domain, you can do this. Add to the configuration -> configSections nodes in web.config:

```
<sectionGroup name="episerver">
    <section name="domainLanguageMappings" allowDefinition="MachineToApplication" allowLocation="false" type="EPiServer.Util.DomainLanguageConfigurationHandler,EPiServer" />
</sectionGroup>
```

...and add this to the episerver node in web.config:

```
<domainLanguageMappings>
    <map domain="site.com" language="EN" />
    <map domain="site.se" language="SV" />
</domainLanguageMappings>
```

Otherwise you can do something like this. Add to appSettings in web.config:

```
<add key="EPsDefaultLanguageBranch" value="EN"/>
```
I have this on EPiServer CMS5: ``` <globalization culture="sv-SE" uiCulture="sv" requestEncoding="utf-8" responseEncoding="utf-8" resourceProviderFactoryType="EPiServer.Resources.XmlResourceProviderFactory, EPiServer" /> ```
Setting default language in EPiServer?
[ "", "c#", ".net", "episerver", "" ]
Is there a quick & dirty way of obtaining a list of all the classes within a Visual Studio 2008 (C#) project? There are quite a lot of them, and I'm just lazy enough not to want to do it manually.
If you open the "Class View" dialogue (View -> Class View or Ctrl+W, C) you can get a list of all of the classes in your project which you can then select and copy to the clipboard. The copy will send the fully qualified (i.e. with complete namespace) names of all classes that you have selected.
I've had success using **[doxygen](http://www.doxygen.nl/)** to generate documentation from the XML comments in my projects - a byproduct of this is a nice, hyperlinked list of classes.
List the names of all the classes within a VS2008 project
[ "", "c#", "visual-studio-2008", "class", "list", "" ]
I'm no crypto expert, but as I understand it, 3DES is a symmetric encryption algorithm, which means it doesn't use public/private keys. Nevertheless, I have been tasked with encrypting data using a public key (specifically, a .CER file). If you ignore the whole symmetric/asymmetric thang, I should just be able to use the key data from the public key as the TripleDES key. However, I'm having difficulty extracting the key bytes from the .CER file. This is the code as it stands:

```
TripleDESCryptoServiceProvider cryptoProvider = new TripleDESCryptoServiceProvider();
X509Certificate2 cert = new X509Certificate2(@"c:\temp\whatever.cer");
cryptoProvider.Key = cert.PublicKey.Key.
```

The simplest method I can find to extract the raw key bytes from the certificate is ToXmlString(bool), and then doing some hacky substringing upon the returned string. However, this seems so hackish I feel I must be missing a simpler, more obvious way to do it. Am I missing a simpler way to use a .cer file to provide the key data to the C# 3DES crypto class, or is hacking it out of the certificate xml string really the best way to go about this?
cryptoProvider.Key = cert.GetPublicKey()?
It's not a good idea to use keys generated for asymmetric cryptography for symmetric cryptography. There's nothing preventing you from coming up with a way of using a public key as an encryption key for 3DES, but the end result will be that anyone having access to the public key (and this means everyone!) will be able to decrypt your ciphertext.
3DES with .cer public key?
[ "", "c#", "encryption", "" ]
Suppose I attach an `blur` function to an HTML input box like this: ``` <input id="myInput" onblur="function() { ... }"></input> ``` Is there a way to get the ID of the element which caused the `blur` event to fire (the element which was clicked) inside the function? How? For example, suppose I have a span like this: ``` <span id="mySpan">Hello World</span> ``` If I click the span right after the input element has focus, the input element will lose its focus. How does the function know that it was `mySpan` that was clicked? PS: If the onclick event of the span would occur before the onblur event of the input element my problem would be solved, because I could set some status value indicating a specific element had been clicked. PPS: The background of this problem is that I want to trigger an AJAX autocompleter control externally (from a clickable element) to show its suggestions, without the suggestions disappearing immediately because of the `blur` event on the input element. So I want to check in the `blur` function if one specific element has been clicked, and if so, ignore the blur event.
Hmm... In Firefox, you can use `explicitOriginalTarget` to pull the element that was clicked on. I expected `toElement` to do the same for IE, but it does not appear to work... However, you can pull the newly-focused element from the document: ``` function showBlur(ev) { var target = ev.explicitOriginalTarget||document.activeElement; document.getElementById("focused").value = target ? target.id||target.tagName||target : ''; } ... <button id="btn1" onblur="showBlur(event)">Button 1</button> <button id="btn2" onblur="showBlur(event)">Button 2</button> <button id="btn3" onblur="showBlur(event)">Button 3</button> <input id="focused" type="text" disabled="disabled" /> ``` --- **Caveat:** This technique does *not* work for focus changes caused by *tabbing* through fields with the keyboard, and does not work at all in Chrome or Safari. The big problem with using `activeElement` (except in IE) is that it is not consistently updated until *after* the `blur` event has been processed, and may have no valid value at all during processing! This can be mitigated with a variation on [the technique Michiel ended up using](https://stackoverflow.com/questions/121499/when-onblur-occurs-how-can-i-find-out-which-element-focus-went-to/128452#128452): ``` function showBlur(ev) { // Use timeout to delay examination of activeElement until after blur/focus // events have been processed. setTimeout(function() { var target = document.activeElement; document.getElementById("focused").value = target ? target.id||target.tagName||target : ''; }, 1); } ``` This should work in most modern browsers (tested in Chrome, IE, and Firefox), with the caveat that Chrome does not set focus on buttons that are *clicked* (vs. tabbed to).
**2015 answer**: according to [UI Events](http://www.w3.org/TR/uievents), you can use the [`relatedTarget`](http://www.w3.org/TR/uievents/#widl-FocusEvent-relatedTarget) property of the event: > Used to identify a secondary [`EventTarget`](http://www.w3.org/TR/uievents/#interface-EventTarget) related to a Focus > event, depending on the type of event. For [`blur`](http://www.w3.org/TR/uievents/#event-type-blur) events, > [`relatedTarget`](http://www.w3.org/TR/uievents/#widl-FocusEvent-relatedTarget): [event target](http://www.w3.org/TR/uievents/#glossary-event-target) receiving focus. Example: ``` function blurListener(event) { event.target.className = 'blurred'; if(event.relatedTarget) event.relatedTarget.className = 'focused'; } [].forEach.call(document.querySelectorAll('input'), function(el) { el.addEventListener('blur', blurListener, false); }); ``` ``` .blurred { background: orange } .focused { background: lime } ``` ``` <p>Blurred elements will become orange.</p> <p>Focused elements should become lime.</p> <input /><input /><input /> ``` Note Firefox won't support `relatedTarget` until version 48 ([bug 962251](https://bugzilla.mozilla.org/show_bug.cgi?id=962251), [MDN](https://developer.mozilla.org/en-US/docs/Web/API/FocusEvent/relatedTarget)).
When a 'blur' event occurs, how can I find out which element focus went *to*?
[ "", "javascript", "events", "" ]
I have inherited a poorly written web application that seems to have errors when it tries to read in an xml document stored in the database that has an "&" in it. For example there will be a tag with the contents: "Prepaid & Charge". Is there some secret simple thing to do to have it not get an error parsing that character, or am I missing something obvious? EDIT: Are there any other characters that will cause this same type of parser error for not being well formed?
The problem is the xml is not well-formed. Properly generated xml would list the data like this: > `Prepaid &amp; Charge` I've fixed the same problem before, and I did it with this regex: ``` Regex badAmpersand = new Regex("&(?![a-zA-Z]{2,6};|#[0-9]{2,4};)"); ``` Combine that with a string constant defined like this: ``` const string goodAmpersand = "&amp;"; ``` Now you can say `badAmpersand.Replace(<your input>, goodAmpersand);` Note a simple `String.Replace("&", "&amp;")` isn't good enough, since you can't know in advance for a given document whether any & characters will be coded correctly, incorrectly, or even both in the same document. The catches here are you have to do this to your xml document *before* loading it into your parser, which likely means an extra pass through the document. Also, it does not account for ampersands inside of a CDATA section. Finally, it *only* catches ampersands, not other illegal characters like <. **Update:** based on the comment, I need to update the expression for hex-coded (&#x...;) entities as well. Regarding which characters can cause problems, the actual rules are a little complex. For example, certain characters are allowed in data, but not as the first letter of an element name. And there's no simple list of illegal characters. Instead, large (non-contiguous) swaths of UNICODE are [defined as legal](http://www.w3.org/TR/REC-xml#charsets), and anything outside that is illegal. When it comes down to it, you have to trust your document source to have at least a certain amount of compliance and consistency. For example, I've found people are often smart enough to make sure the tags work properly and escape <, even if they don't know that & isn't allowed, hence your problem today. 
However, **the best thing would be to get this fixed at the source.** Oh, and a note about the CDATA suggestion: I use that to make sure xml *I'm creating* is well-formed, but when dealing with existing xml from outside, I find the regex method easier.
The web application isn't at fault, the XML document is. Ampersands in XML should be encoded as `&amp;`. Failure to do so is a syntax error. **Edit:** in answer to the followup question, yes there are all kinds of similar errors. For example, unbalanced tags, unencoded less-than signs, unquoted attribute values, octets outside of the character encoding and various Unicode oddities, unrecognised entity references, and so on. In order to get any decent XML parser to consume a document, that document must be well-formed. The XML specification requires that a parser encountering a malformed document throw a fatal error.
Reading XML with an "&" into C# XMLDocument Object
[ "", "c#", ".net", "asp.net", "xml", "xmldocument", "" ]
I am told that good developers can spot/utilize the difference between `Null` and `False` and `0` and all the other good "nothing" entities. What *is* the difference, specifically in PHP? Does it have something to do with `===`?
## It's language specific, but in PHP:

**`Null`** means "**nothing**". The var has not been initialized.

**`False`** means "**not true in a boolean context**". Used to explicitly show you are dealing with logical issues.

**`0`** is an **`int`**. Nothing to do with the rest above, used for mathematics.

Now, what is tricky: in dynamic languages like PHP, *all of them have a value in a boolean context*, which (in PHP) is `False`. If you test it with `==`, it's testing the boolean value, so you will get equality. If you test it with `===`, it will test the type, and you will get inequality.

## So why are they useful?

Well, look at the `strrpos()` function. It returns False if it did not find anything, but 0 if it has found something at the beginning of the string!

```
<?php
// pitfall :
if (strrpos("Hello World", "Hello")) {
    // never executed
}

// smart move :
if (strrpos("Hello World", "Hello") !== False) {
    // that works !
}
?>
```

And of course, if you deal with states, you want to make a difference between the following:

* `DebugMode = False` (set to off)
* `DebugMode = True` (set to on)
* `DebugMode = Null` (not set at all; will lead to hard debugging ;-))
`null` is `null`. `false` is `false`. Sad but true. There's not much consistency in PHP (though it is improving in the latest releases; there's too much backward compatibility). Despite the design wishing some consistency (outlined in the selected answer here), it all gets confusing when you consider method returns that use `false`/`null` in not-so-easy-to-reason ways. You will often see null being used when they are already using false for something, e.g. filter\_input(). It returns false if the variable fails the filter, and null if the variable does not exist (does not existing mean it also failed the filter?) Methods returning false/null/string/etc interchangeably are a hack for when the author cares about the type of failure. For example, with `filter_input()` you can check for `===false` or `===null` if you care why the validation failed. But if you don't, it might be a pitfall, as one might forget to add the check for `===null` if they only remembered to write the test case for `===false`. And most php unit test/coverage tools will not call your attention to the missing, untested code path! Lastly, here's some fun with type juggling, not even including arrays or objects.
``` var_dump( 0<0 ); #bool(false) var_dump( 1<0 ); #bool(false) var_dump( -1<0 ); #bool(true) var_dump( false<0 ); #bool(false) var_dump( null<0 ); #bool(false) var_dump( ''<0 ); #bool(false) var_dump( 'a'<0 ); #bool(false) echo "\n"; var_dump( !0 ); #bool(true) var_dump( !1 ); #bool(false) var_dump( !-1 ); #bool(false) var_dump( !false ); #bool(true) var_dump( !null ); #bool(true) var_dump( !'' ); #bool(true) var_dump( !'a' ); #bool(false) echo "\n"; var_dump( false == 0 ); #bool(true) var_dump( false == 1 ); #bool(false) var_dump( false == -1 ); #bool(false) var_dump( false == false ); #bool(true) var_dump( false == null ); #bool(true) var_dump( false == '' ); #bool(true) var_dump( false == 'a' ); #bool(false) echo "\n"; var_dump( null == 0 ); #bool(true) var_dump( null == 1 ); #bool(false) var_dump( null == -1 ); #bool(false) var_dump( null == false ); #bool(true) var_dump( null == null ); #bool(true) var_dump( null == '' ); #bool(true) var_dump( null == 'a' ); #bool(false) echo "\n"; $a=0; var_dump( empty($a) ); #bool(true) $a=1; var_dump( empty($a) ); #bool(false) $a=-1; var_dump( empty($a) ); #bool(false) $a=false; var_dump( empty($a) ); #bool(true) $a=null; var_dump( empty($a) ); #bool(true) $a=''; var_dump( empty($a) ); #bool(true) $a='a'; var_dump( empty($a)); # bool(false) echo "\n"; #new block suggested by @thehpi var_dump( null < -1 ); #bool(true) var_dump( null < 0 ); #bool(false) var_dump( null < 1 ); #bool(true) var_dump( -1 > true ); #bool(false) var_dump( 0 > true ); #bool(false) var_dump( 1 > true ); #bool(true) var_dump( -1 > false ); #bool(true) var_dump( 0 > false ); #bool(false) var_dump( 1 > true ); #bool(true) ```
Null vs. False vs. 0 in PHP
[ "", "php", "null", "" ]
Is there a cheap way to get the dimensions of an image (jpg, png, ...)? Preferably, I would like to achieve this using only the standard class library (because of hosting restrictions). I know that it should be relatively easy to read the image header and parse it myself, but it seems that something like this should be already there. Also, I’ve verified that the following piece of code reads the entire image (which I don’t want): ``` using System; using System.Drawing; namespace Test { class Program { static void Main(string[] args) { Image img = new Bitmap("test.png"); System.Console.WriteLine(img.Width + " x " + img.Height); } } } ```
Your best bet as always is to find a well tested library. However, you said that is difficult, so here is some dodgy largely untested code that should work for a fair number of cases: ``` using System; using System.Collections.Generic; using System.Drawing; using System.IO; using System.Linq; namespace ImageDimensions { public static class ImageHelper { const string errorMessage = "Could not recognize image format."; private static Dictionary<byte[], Func<BinaryReader, Size>> imageFormatDecoders = new Dictionary<byte[], Func<BinaryReader, Size>>() { { new byte[]{ 0x42, 0x4D }, DecodeBitmap}, { new byte[]{ 0x47, 0x49, 0x46, 0x38, 0x37, 0x61 }, DecodeGif }, { new byte[]{ 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 }, DecodeGif }, { new byte[]{ 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A }, DecodePng }, { new byte[]{ 0xff, 0xd8 }, DecodeJfif }, }; /// <summary> /// Gets the dimensions of an image. /// </summary> /// <param name="path">The path of the image to get the dimensions of.</param> /// <returns>The dimensions of the specified image.</returns> /// <exception cref="ArgumentException">The image was of an unrecognized format.</exception> public static Size GetDimensions(string path) { using (BinaryReader binaryReader = new BinaryReader(File.OpenRead(path))) { try { return GetDimensions(binaryReader); } catch (ArgumentException e) { if (e.Message.StartsWith(errorMessage)) { throw new ArgumentException(errorMessage, "path", e); } else { throw e; } } } } /// <summary> /// Gets the dimensions of an image. 
/// </summary> /// <param name="path">The path of the image to get the dimensions of.</param> /// <returns>The dimensions of the specified image.</returns> /// <exception cref="ArgumentException">The image was of an unrecognized format.</exception> public static Size GetDimensions(BinaryReader binaryReader) { int maxMagicBytesLength = imageFormatDecoders.Keys.OrderByDescending(x => x.Length).First().Length; byte[] magicBytes = new byte[maxMagicBytesLength]; for (int i = 0; i < maxMagicBytesLength; i += 1) { magicBytes[i] = binaryReader.ReadByte(); foreach(var kvPair in imageFormatDecoders) { if (magicBytes.StartsWith(kvPair.Key)) { return kvPair.Value(binaryReader); } } } throw new ArgumentException(errorMessage, "binaryReader"); } private static bool StartsWith(this byte[] thisBytes, byte[] thatBytes) { for(int i = 0; i < thatBytes.Length; i+= 1) { if (thisBytes[i] != thatBytes[i]) { return false; } } return true; } private static short ReadLittleEndianInt16(this BinaryReader binaryReader) { byte[] bytes = new byte[sizeof(short)]; for (int i = 0; i < sizeof(short); i += 1) { bytes[sizeof(short) - 1 - i] = binaryReader.ReadByte(); } return BitConverter.ToInt16(bytes, 0); } private static int ReadLittleEndianInt32(this BinaryReader binaryReader) { byte[] bytes = new byte[sizeof(int)]; for (int i = 0; i < sizeof(int); i += 1) { bytes[sizeof(int) - 1 - i] = binaryReader.ReadByte(); } return BitConverter.ToInt32(bytes, 0); } private static Size DecodeBitmap(BinaryReader binaryReader) { binaryReader.ReadBytes(16); int width = binaryReader.ReadInt32(); int height = binaryReader.ReadInt32(); return new Size(width, height); } private static Size DecodeGif(BinaryReader binaryReader) { int width = binaryReader.ReadInt16(); int height = binaryReader.ReadInt16(); return new Size(width, height); } private static Size DecodePng(BinaryReader binaryReader) { binaryReader.ReadBytes(8); int width = binaryReader.ReadLittleEndianInt32(); int height = 
binaryReader.ReadLittleEndianInt32(); return new Size(width, height); } private static Size DecodeJfif(BinaryReader binaryReader) { while (binaryReader.ReadByte() == 0xff) { byte marker = binaryReader.ReadByte(); short chunkLength = binaryReader.ReadLittleEndianInt16(); if (marker == 0xc0) { binaryReader.ReadByte(); int height = binaryReader.ReadLittleEndianInt16(); int width = binaryReader.ReadLittleEndianInt16(); return new Size(width, height); } binaryReader.ReadBytes(chunkLength - 2); } throw new ArgumentException(errorMessage); } } } ``` Hopefully the code is fairly obvious. To add a new file format you add it to `imageFormatDecoders` with the key being an array of the "magic bits" which appear at the beginning of every file of the given format and the value being a function which extracts the size from the stream. Most formats are simple enough, the only real stinker is jpeg.
``` using (FileStream file = new FileStream(this.ImageFileName, FileMode.Open, FileAccess.Read)) { using (Image tif = Image.FromStream(stream: file, useEmbeddedColorManagement: false, validateImageData: false)) { float width = tif.PhysicalDimension.Width; float height = tif.PhysicalDimension.Height; float hresolution = tif.HorizontalResolution; float vresolution = tif.VerticalResolution; } } ``` the `validateImageData` set to `false` prevents GDI+ from performing costly analysis of the image data, thus severely decreasing load time. [This question](https://stackoverflow.com/questions/420337/validateimagedata-parameter-and-image-fromstream) sheds more light on the subject.
Getting image dimensions without reading the entire file
[ "", "c#", ".net", "image", "image-manipulation", "" ]
I'm planning on creating a game that contains a landscape with objects on it. The landscape will be defined using a heightfield, and the objects will move about on top of, and fly over the terrain. If you've ever played the old games [Marble Madness](http://en.wikipedia.org/wiki/Marble_Madness) and [Virus/Zarch](http://en.wikipedia.org/wiki/Zarch), that's the kind of complexity and style I'm trying to create. I've seen various physics engines on the Internet, and a few tutorials about how to render heightfields as terrain, but they either lack documentation or seem overly complex for what I need. All I need is a way to draw a heightfield, place 3D objects on it and then make them roll down the hills, or fly around in the sky. I don't mind making my own code to do this, so any relevant tutorials would be good too.
Here is a more complete list, Xbox, Zune and Windows... * [Farseer](http://www.codeplex.com/FarseerPhysics) - 2d only. * [JigLibX](http://www.codeplex.com/JigLibX) * [Bullet](http://bulletphysics.com/) + [BulletX](http://www.codeplex.com/xnadevru/Wiki/View.aspx?title=Managed%20Bullet%20Physics%20Library&referringTitle=Home) + [XBAP](http://chriscavanagh.wordpress.com/2007/04/24/xbap-3d-physics-source/) * [Oops! 3D Physics Framework](http:///) * [Bepu physics](http://www.bepu-games.com/BEPUphysics/) * [Jello Physics](http://walaber.com/index.php?action=showitem&id=16) * [Physics2D.Net](http://physics2d.googlepages.com/) Windows Only... * [PhysX](http://www.ageia.com/) + [MS Robotics Studio wrapper](http://msdn2.microsoft.com/en-us/robotics/default.aspx) + [PhysXdotNet Wrapper](http://code.google.com/p/physxdotnet/) * [ODE (Open Dyamics Engine)](http://ode.org/) + [XPA (XNA Physics lib)](http://www.codeplex.com/xnadevru/Wiki/View.aspx?title=XNA%20Physics%20API%20%28XPA%29&referringTitle=Home) * [Newton Game Dynamics](http://www.newtondynamics.com/) + [Newton Physics Port to XNA](http://www.tamedtornado.com/devblog/?p=58)
If you're looking for more of a tutorial rather than a full-blown solution, have you checked the collision series at [the XNA creators site](http://creators.xna.com)? Specifically, [Collision Series 5: Heightmap Collision with Normals](http://creators.xna.com/en-us/sample/collision3dheightmapnormals) sounds like exactly what you're looking for.
What XNA based 3D terrain and physics libraries exist?
[ "", "c#", "xna", "physics", "" ]
When building some of my PHP apps, a lot of the functionality could be coded using PEAR/PECL modules. However, the fact that some people using them may not have access to install things poses a puzzler for me. Should I forsake some users and use PEAR/PECL for functionality, which would let me get a system coded up quicker than if I wrote my own, but means that it will exclude certain people from using it?
It partly depends on how much time you have, and the purpose of the project. If you're just trying to make something that works, go with PEAR/PECL. If you're trying to learn to be a better programmer, and you have the time, then I'd recommend taking the effort to write your own versions. Once you understand the innards of whatever you're trying to replace, you may want to switch to the PEAR/PECL version so that you're not wasting time reimplementing what has already been implemented... ...but on the other hand, preexisting tools don't always do exactly what you need, and sometimes have overhead that doesn't do you any good. This is why Unix command-line tools are so small and narrow of purpose; nobody really needs a version of 'ls' that can do anything besides what 'ls' can currently do. Your version of whatever PEAR library will, by virtue of being written by you, do exactly what you need doing. It requires some careful thought... ...but on the gripping hand, don't spend too much time thinking about it. Spend five minutes, make a decision, and start coding. Even if you make the wrong decision, you'll at least have gotten more practice coding. :-)
Save on development time by developing with the pear libraries, and provide the libraries bundled in what you distribute (though you'll have to make sure it obeys licensing requirements) I would not depend on certain PECL extensions being installed unless you're doing something particularly related to one (say an XDebug web-frontend or something), the majority of installs will be carrying a fairly vanilla set of extensions.
PHP: Use Pecl/Pear, or build my own systems?
[ "", "php", "pear", "pecl", "" ]
Does anyone know how I can check to see if a directory is writeable in PHP? The function [`is_writable`](http://php.net/manual/en/function.is-writable.php) doesn't work for folders. > Edit: It does work. See the accepted answer.
Yes, it does work for folders.... > Returns TRUE if the filename exists and is writable. The filename argument may be a directory name allowing you to check if a directory is writable.
this is the code :) ``` <?php $newFileName = '/var/www/your/file.txt'; if ( ! is_writable(dirname($newFileName))) { echo dirname($newFileName) . ' must writable!!!'; } else { // blah blah blah } ```
How do I check if a directory is writeable in PHP?
[ "", "php", "permissions", "directory", "" ]
First, yes I know about [this question](https://stackoverflow.com/questions/50182/linuxx11-input-library-without-creating-a-window), but I'm looking for a bit more information that that. I have actually, a fairly similar problem, in that I need to be able to capture input for mouse/keyboard/joystick, and I'd also like to avoid SDL if at all possible. I was more or less wondering if anyone knows where I can get some decent primers on handling input from devices in Linux, perhaps even some tutorials. SDL works great for cross-platform input handling, but I'm not going to be using anything else at all from SDL, so I'd like to cut it out altogether. Suggestion, comments, and help are all appreciated. Thanks! Edit for clarity: The point is to capture mouse motion, keyboard press/release, mouse clicks, and potentially joystick handling for a game.
Using the link below look at the function void kGUISystemX::Loop(void) This is my main loop for getting input via keyboard and mouse using X Windows on Linux. <http://code.google.com/p/kgui/source/browse/trunk/kguilinux.cpp> Here is a snippet: ``` if(XPending(m_display)) { XNextEvent(m_display, &m_e); switch(m_e.type) { case MotionNotify: m_mousex=m_e.xmotion.x; m_mousey=m_e.xmotion.y; break; case ButtonPress: switch(m_e.xbutton.button) { case Button1: m_mouseleft=true; break; case Button3: m_mouseright=true; break; case Button4:/* middle mouse wheel moved */ m_mousewheel=1; break; case Button5:/* middle mouse wheel moved */ m_mousewheel=-1; break; } break; case ButtonRelease: switch(m_e.xbutton.button) { case Button1: m_mouseleft=false; break; case Button3: m_mouseright=false; break; } break; case KeyPress: { XKeyEvent *ke; int ks; int key; ke=&m_e.xkey; kGUI::SetKeyShift((ke->state&ShiftMask)!=0); kGUI::SetKeyControl((ke->state&ControlMask)!=0); ks=XLookupKeysym(ke,(ke->state&ShiftMask)?1:0); ...... ```
If you know your project will only be run under Linux (not Windows or even one of the BSDs), you should look into the Linux kernel's input system. Download the [kernel source](http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.26.5.tar.bz2) and read `Documentation/input/input.txt`, particularly the description of the `evdev` system. For a significantly higher-level (and more portable) solution, [read up on Xlib](http://users.actcom.co.il/~choo/lupg/tutorials/xlib-programming/xlib-programming.html). Obviously it requires a running X server, but it has the advantage of inheriting the user's keyboard settings. Joystick events are unfortunately not included, you'd probably need to use the kernel joystick API for those.
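The evdev records mentioned above have a fixed binary layout, which can be decoded without any C at all. This is a rough, independent Python sketch (not from the answer): the layout and the `EV_KEY` constant come from `<linux/input.h>`, and the record size differs between 32-bit and 64-bit builds because the timestamp uses the platform's native `long`.

```python
import struct

# Layout of struct input_event from <linux/input.h>:
#   struct timeval time; __u16 type; __u16 code; __s32 value;
# 'l' is the native long, so EVENT_SIZE is 16 bytes on 32-bit
# builds and 24 bytes on 64-bit builds.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_KEY = 0x01  # key/button press-release events

def parse_event(buf):
    """Decode one raw record as read from /dev/input/eventX."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return {"sec": sec, "usec": usec, "type": etype, "code": code, "value": value}

# Real use would loop over the device node (requires read permission):
#   with open("/dev/input/event0", "rb") as dev:
#       ev = parse_event(dev.read(EVENT_SIZE))
```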
Capturing Input in Linux
[ "", "c++", "linux", "" ]
I am investigating the design of a work queue processor where the QueueProcessor retrieves a Command Pattern object from the Queue and executes it in a new thread. I am trying to get my head around a potential Queue lockup scenario where nested Commands may result in a deadlock. E.G. A FooCommand object is placed onto the queue which the QueueProcessor then executes in its own thread. The executing FooCommand places a BarCommand onto the queue. Assuming that the maximum allowed threads was only 1 thread, the QueueProcessor would be in a deadlocked state since the FooCommand is infinitely waiting for the BarCommand to complete. How can this situation be managed? Is a queue object the right object for the job? Are there any checks and balances that can be put into place to resolve this issue? Many thanks. ( application uses C# .NET 3.0 )
For simple cases like this an additional monitoring thread that can spin off more threads on demand is helpful. Basically every N seconds check to see if any jobs have been finished, if not, add another thread. This won't necessarily handle even more complex deadlock problems, but it will solve this one. My recommendation for the heavier problem is to restrict waits to newly spawned process, in other words, you can only wait on something you started, that way you never get deadlocks, since cycles are impossible in that situation.
You could redesign things so that FooCommand doesn't use the queue to run BarCommand but runs it directly, or you could split FooCommand into two, and have the first half stop immediately after queueing BarCommand, and have BarCommand queue the second half of FooCommand after it's done its work.
Managing ThreadPool starvation within a multithreaded work queue processor?
[ "", "c#", ".net", "multithreading", "" ]
How can I tell in JavaScript what path separator is used in the OS where the script is running?
Afair you can always use / as a path separator, even on Windows. Quote from <http://bytes.com/forum/thread23123.html>: > So, the situation can be summed up > rather simply: > > * All DOS services since DOS 2.0 and all Windows APIs accept either forward > slash or backslash. Always have. > * None of the standard command shells (CMD or COMMAND) will accept forward > slashes. Even the "cd ./tmp" example > given in a previous post fails.
Use [`path`](http://nodejs.org/docs/latest/api/path.html) module in `node.js` returns the platform-specific file separator. example ``` path.sep // on *nix evaluates to a string equal to "/" ``` Edit: As per Sebas's comment below, to use this, you need to add this at the top of your js file: ``` const path = require('path') ```
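Outside the browser, most languages' standard libraries answer this question directly, much like Node's `path.sep` above. For comparison only, a Python sketch showing both conventions explicitly:

```python
import os
import ntpath
import posixpath

# os.sep reflects whatever platform the script runs on; the posixpath
# and ntpath modules expose each convention unconditionally.
NATIVE_SEP = os.sep

def join_native(*parts):
    """Join path parts using the current platform's separator."""
    return NATIVE_SEP.join(parts)

print(posixpath.sep, repr(ntpath.sep), repr(NATIVE_SEP))
```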
How to determine the OS path separator in JavaScript?
[ "", "javascript", "file", "directory", "" ]
What is the best way (performance wise) to paginate results in SQL Server 2000, 2005, 2008, 2012 if you also want to get the total number of results (before paginating)?
Getting the total number of results and paginating are two different operations. For the sake of this example, let's assume that the query you're dealing with is ``` SELECT * FROM Orders WHERE OrderDate >= '1980-01-01' ORDER BY OrderDate ``` In this case, you would determine the total number of results using: ``` SELECT COUNT(*) FROM Orders WHERE OrderDate >= '1980-01-01' ``` ...which may seem inefficient, but is actually pretty performant, assuming all indexes etc. are properly set up. Next, to get actual results back in a paged fashion, the following query would be most efficient: ``` SELECT * FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY OrderDate ) AS RowNum, * FROM Orders WHERE OrderDate >= '1980-01-01' ) AS RowConstrainedResult WHERE RowNum >= 1 AND RowNum < 20 ORDER BY RowNum ``` This will return rows 1-19 of the original query. The cool thing here, especially for web apps, is that you don't have to keep any state, except the row numbers to be returned.
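The window arithmetic in the outer `WHERE` clause is where off-by-one mistakes usually creep in. A small Python sketch of just that arithmetic (independent of the SQL above; 1-based page numbers assumed, function names hypothetical):

```python
def page_bounds(page, page_size):
    """Return (first, last_exclusive) row numbers for a 1-based page,
    matching a filter of: RowNum >= first AND RowNum < last_exclusive."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be positive")
    first = (page - 1) * page_size + 1
    return first, first + page_size

def total_pages(total_rows, page_size):
    """Number of pages needed, via integer ceiling division."""
    return (total_rows + page_size - 1) // page_size
```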
Finally, **Microsoft SQL Server 2012** was released, I really like its simplicity for a pagination, you don't have to use complex queries like answered here. For getting the next 10 rows just run this query: ``` SELECT * FROM TableName ORDER BY id OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY; ``` <https://learn.microsoft.com/en-us/sql/t-sql/queries/select-order-by-clause-transact-sql#using-offset-and-fetch-to-limit-the-rows-returned> Key points to consider when using it: * `ORDER BY` is mandatory to use `OFFSET ... FETCH` clause. * `OFFSET` clause is mandatory with `FETCH`. You cannot use `ORDER BY ... FETCH`. * `TOP` cannot be combined with `OFFSET` and `FETCH` in the same query expression.
What is the best way to paginate results in SQL Server
[ "", "sql", "sql-server", "performance", "pagination", "" ]
I'm trying to generate customized xml files from a template xml file in python. Conceptually, I want to read in the template xml, remove some elements, change some text attributes, and write the new xml out to a file. I wanted it to work something like this: ``` conf_base = ConvertXmlToDict('config-template.xml') conf_base_dict = conf_base.UnWrap() del conf_base_dict['root-name']['level1-name']['leaf1'] del conf_base_dict['root-name']['level1-name']['leaf2'] conf_new = ConvertDictToXml(conf_base_dict) ``` now I want to write to file, but I don't see how to get to ElementTree.ElementTree.write() ``` conf_new.write('config-new.xml') ``` Is there some way to do this, or can someone suggest doing this a different way?
For easy manipulation of XML in python, I like the [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) library. It works something like this: Sample XML File: ``` <root> <level1>leaf1</level1> <level2>leaf2</level2> </root> ``` Python code: ``` from BeautifulSoup import BeautifulStoneSoup, Tag, NavigableString soup = BeautifulStoneSoup('config-template.xml') # get the parser for the xml file soup.contents[0].name # u'root' ``` You can use the node names as methods: ``` soup.root.contents[0].name # u'level1' ``` It is also possible to use regexes: ``` import re tags_starting_with_level = soup.findAll(re.compile('^level')) for tag in tags_starting_with_level: print tag.name # level1 # level2 ``` Adding and inserting new nodes is pretty straightforward: ``` # build and insert a new level with a new leaf level3 = Tag(soup, 'level3') level3.insert(0, NavigableString('leaf3')) soup.root.insert(2, level3) print soup.prettify() # <root> # <level1> # leaf1 # </level1> # <level2> # leaf2 # </level2> # <level3> # leaf3 # </level3> # </root> ```
This'll get you a dict minus attributes. I don't know if this is useful to anyone; I was looking for an xml-to-dict solution myself when I came up with this. ``` import xml.etree.ElementTree as etree tree = etree.parse('test.xml') root = tree.getroot() def xml_to_dict(el): d = {} if el.text: d[el.tag] = el.text else: d[el.tag] = {} children = list(el) if children: d[el.tag] = [xml_to_dict(child) for child in children] return d ``` This: <http://www.w3schools.com/XML/note.xml> ``` <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` Would equal this: ``` {'note': [{'to': 'Tove'}, {'from': 'Jani'}, {'heading': 'Reminder'}, {'body': "Don't forget me this weekend!"}]} ```
Editing XML as a dictionary in python?
[ "", "python", "xml", "dictionary", "" ]
We are running part of our app as a Windows service and it needs to be able to access DSNs in order to import through ODBC. However, there seem to be a lot of restrictions, found through trial and error, on what DSNs it can access. For example, it seems that it cannot 1. access a system DSN unless the account that is running the service has admin privileges (I get an Access Denied error when trying to connect) 2. access a user DSN that was created by a different user (this one is understandable) 3. access a file DSN across the network. I've read that the purpose of a file DSN is to allow other computers to use it to connect, however I can't seem to make that work. So does anyone know, or know where I can find out, what all the rules and restrictions on accessing a DSN are when using a Windows service? Thanks
This is somewhere between your #1 and #2: sometimes correct file permissions are also necessary. I once had troubles on a Vista machine connecting to a DB2 DSN because, for whatever reason (maybe to write out temp files; although I don't know why it would do such a thing in this location instead of a user-specific one), the driver needed write access to the directory where IBM had installed the client binaries and libs, which had been done by an Administrator and was in the root of the C drive.
I think you've already discovered the three main rules yourself. :-) Except that you probably don't need admin privileges for your service account. IANANA (I am not a network administrator), but your service account probably just needs read access to one of the ODBC files or directories.
What are the access restrictions on accessing a DSN
[ "", "c++", "windows-services", "odbc", "dsn", "" ]
Having a problem getting a TreeView control to display node images. The code below works sometimes but fails to show any image at other times. ``` private TreeNode AddNodeForCore(TreeNode root, Core c) { string key = GetImageKey(c); TreeNode t = root.Nodes.Add(c.Name, c.Name, key, key); t.Tag = c; return t; } ``` Note that when it fails, the TreeView fails to show any images for any node. The TreeView does have an ImageList assigned to it, and the image key is definitely in the images collection. Edit: My google-fu is weak. Can't believe I didn't find that answer myself.
The helpful bit of the googled posts above is in fact: "This is a known bug in the Windows XP visual styles implementation. Certain controls, like ImageList, do not get properly initialized when they've been created before the app calls Application.EnableVisualStyles(). The normal Main() implementation in a C#'s Program.cs avoids this. Thanks for posting back!" So basically, guarantee that Application.EnableVisualStyles() is called before you initialise your imagelist.
A quick google search found this answer: <http://forums.microsoft.com/MSDN/ShowPost.aspx?siteid=1&PostID=965968> **Quote** from that page: > If the Form containing the TreeView is instantiated in the add-in startup function as below, the icons appear! ``` public partial class ThisApplication { Form1 frm; private void ThisApplication_Startup(object sender, System.EventArgs e) { frm = new Form1(); frm.Show(); } ``` > BUT, if instantiated with the class, as below: ``` public partial class ThisApplication { Form1 frm = new Form1(); private void ThisApplication_Startup(object sender, System.EventArgs e) { frm.Show(); } ``` > Then they do NOT appear. Furthermore, if "VisualStyles" (new with XP) are disabled, the icons work in both instances.
.NET TreeView won't show images
[ "", "c#", ".net", "winforms", "" ]
My model layer is being used by a handful of different projects and I'd like to use a single XML Spring Configuration file for the model regardless of which project is using it. My question is: Since not all beans are used in all projects am I wasting resources to any significant amount if there not being instantiated? I'm not too sure how lazy Spring is about loading them since it's never been an issue until now. Any ideas?
Taken from the [Spring Reference Manual](http://static.springframework.org/spring/docs/2.5.x/reference/beans.html#beans-factory-lazy-init): > The default behavior for ApplicationContext implementations is to eagerly pre-instantiate all singleton beans at startup. Pre-instantiation means that an ApplicationContext will eagerly create and configure all of its singleton beans as part of its initialization process. Generally this is a good thing, because it means that any errors in the configuration or in the surrounding environment will be discovered immediately (as opposed to possibly hours or even days down the line). > > However, there are times when this behavior is not what is wanted. If you do not want a singleton bean to be pre-instantiated when using an ApplicationContext, you can selectively control this by marking a bean definition as lazy-initialized. A lazily-initialized bean indicates to the IoC container whether or not a bean instance should be created at startup or when it is first requested. > > When configuring beans via XML, this lazy loading is controlled by the 'lazy-init' attribute on the [bean element] ; for example: ``` <bean id="lazy" class="com.foo.ExpensiveToCreateBean" lazy-init="true"/> ``` But, unless your beans are using up resources like file locks or database connections, I wouldn't worry too much about simple memory overhead if it is easier for you to have this one configuration for multiple (but different) profiles.
In addition to the other comments: it's also possible to specify a whole configuration file to be lazily initialized, by using the 'default-lazy-init' attribute on the `<beans/>` element; for example: ``` <beans default-lazy-init="true"> <!-- no beans will be pre-instantiated... --> </beans> ``` This is much easier than adding the `lazy-init` attribute to every bean, if you have a lot of them.
Does having many unused beans in a Spring Bean Context waste significant resources?
[ "", "java", "spring", "" ]
Maps are great to get things done easily, but they are memory hogs and suffer from caching issues. And when you have a map in a critical loop that can be bad. So I was wondering if anyone can recommend another container that has the same API but uses lets say a vector or hash implementation instead of a tree implementation. My goal here is to swap the containers and not have to rewrite all the user code that relies on the map. Update: performance wise the best solution would be a tested map facade on a std::vector
See [Loki::AssocVector](http://loki-lib.sourceforge.net/) and/or hash\_map (most of STL implementations have this one).
You can use std::tr1::unordered\_map, which is already present in most STL implementations, and is part of the C++0x standard. Here is it's current signature : ``` template <class Key, class T, class Hash = std::tr1::hash<Key>, class Pred = std::equal_to<Key>, class Alloc = std::allocator<std::pair<const Key, T> > > class unordered_map; ```
Can anyone recommend a C++ std::map replacement container?
[ "", "c++", "stdmap", "" ]
I'm working on a fairly complex project, a custom encryption routine if you will (just for fun) and I've run into this issue in designing my code layout. I have a number of functions that I want to be able to call by index. Specifically, I need to be able to call one randomly for the encrypt process, but then address that by a specific index in the decrypt process. I was considering a classic function array, but my main concern is that a function array would be tricky to maintain, and a little ugly. (The goal is to get each function pair in a separate file, to reduce compile times and make the code easier to manage.) Does anyone have a more elegant C++ solution as an alternative to a function array? Speed isn't really an issue, I'm more worried about maintainability. -Nicholas
You could write something like: ``` class EncryptionFunction { public: virtual Foo Run(Bar input) = 0; virtual ~EncryptionFunction() {} }; class SomeSpecificEncryptionFunction : public EncryptionFunction { // override the Run function }; // ... std::vector<EncryptionFunction*> functions; // ... functions[2]->Run(data); ``` You could use `operator()` instead of `Run` as the function name, if you prefer.
What's wrong with function array? You need to call functions by index. So they must be put into some "indexable by index" structure *somehow*. Array is probably the simplest structure that suits this need. Example (typing out of my head, might not compile): ``` struct FunctionPair { EncodeFunction encode; DecodeFunction decode; }; FunctionPair g_Functions[] = { { MyEncode1, MyDecode1 }, { MySuperEncode, MySuperDecode }, { MyTurboEncode, MyTurboDecode }, }; ``` What is "ugly" or "hard to maintain" in the approach above?
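For illustration only, the same table-of-pairs idea sketched in Python rather than C++: each index selects a matched encode/decode pair, which is exactly the "call by index" behaviour the question asks about. The cipher functions here are toy placeholders, not real encryption.

```python
def xor_encode(data, key=0x5A):
    """Toy cipher: XOR every byte with a fixed key."""
    return bytes(b ^ key for b in data)

xor_decode = xor_encode  # XOR with the same key is its own inverse

def rot_encode(data, shift=13):
    """Toy cipher: rotate each byte value by a fixed shift."""
    return bytes((b + shift) % 256 for b in data)

def rot_decode(data, shift=13):
    return bytes((b - shift) % 256 for b in data)

# The index used at encrypt time is all that's needed at decrypt time.
FUNCTION_PAIRS = [
    (xor_encode, xor_decode),
    (rot_encode, rot_decode),
]

def round_trip(index, data):
    enc, dec = FUNCTION_PAIRS[index]
    return dec(enc(data))
```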
C++ Function List
[ "", "c++", "function", "maintainability", "" ]
I was reading about output buffering in JavaScript **[here](http://www.webreference.com/programming/javascript/jkm3/4.html),** and was trying to get my head around the script the author says was the fastest at printing 1 to 1,000,000 to a web page. (Scroll down to the header "The winning one million number script".) After studying it a bit, I have a few questions: * What makes this script so efficient compared to other approaches? * Why does buffering speed things up? * How do you determine the proper buffer size to use? * Does anyone here have any tricks up her/his sleeve that could optimize this script further? (I realize this is probably CS101, but I'm one of those blasted, self-taught hackers and I was hoping to benefit from the wisdom of the collective on this one. Thanks!)
## What makes this script so efficient compared to other approaches? There are several optimizations that the author is making to this algorithm. Each of these requires a fairly deep understanding of how the are underlying mechanisms utilized (e.g. Javascript, CPU, registers, cache, video card, etc.). I think there are 2 key optimizations that he is making (the rest are just icing): * Buffering the output * Using integer math rather than string manipulation I'll discuss buffering shortly since you ask about it explicitly. The integer math that he's utilizing has two performance benefits: integer addition is cheaper per operation than string manipulation and it uses less memory. I don't know how JavaScript and web browsers handle the conversion of an integer to a display glyph in the browser, so there may be a penalty associated with passing an integer to document.write when compared to a string. However, he is performing (1,000,000 / 1000) document.write calls versus 1,000,000 - 1,000 integer additions. This means he is performing roughly 3 orders of magnitude more operations to form the message than he is to send it to the display. Therefore the penalty for sending an integer vs a string to document.write would have to exceed 3 orders of magnitude offset the performance advantage of manipulating integers. ## Why does buffering speed things up? The specifics of why it works vary depending on what platform, hardware, and implementation you are using. In any case, it's all about balancing your algorithm to your bottleneck inducing resources. For instance, in the case of file I/O, buffer is helpful because it takes advantage of the fact that a rotating disk can only write a certain amount at a time. Give it too little work and it won't be using every available bit that passes under the head of the spindle as the disk rotates. 
Give it too much, and your application will have to wait (or be put to sleep) while the disk finishes your write - time that could be spent getting the next record ready for writing! Some of the key factors that determine ideal buffer size for file I/O include: sector size, file system chunk size, interleaving, number of heads, rotation speed, and areal density among others. In the case of the CPU, it's all about keeping the pipeline full. If you give the CPU too little work, it will spend time spinning NO OPs while it waits for you to task it. If you give the CPU too much, you may not dispatch requests to other resources, such as the disk or the video card, which could execute in parallel. This means that later on the CPU will have to wait for these to return with nothing to do. The primary factor for buffering in the CPU is keeping everything you need (for the CPU) as close to the FPU/ALU as possible. In a typical architecture this is (in order of decreasing proximity): registers, L1 cache, L2 cache, L3 cache, RAM. In the case of writing a million numbers to the screen, it's about drawing polygons on your screen with your video card. Think about it like this. Let's say that for each new number that is added, the video card must do 100,000,000 operations to draw the polygons on your screen. At one extreme, if put 1 number on the page at a time and then have your video card write it out and you do this for 1,000,000 numbers, the video card will have to do 10^14 operations - 100 trillion operations! At the other extreme, if you took the entire 1 million numbers and sent it to the video card all at once, it would take only 100,000,000 operations. The optimal point is some where in the middle. If you do it one a time, the CPU does a unit of work, and waits around for a long time while the GPU updates the display. If you write the entire 1M item string first, the GPU is doing nothing while the CPU churns away. ## How do you determine which buffer size to use? 
Unless you are targeting a very specific and well defined platform with a specific algorithm (e.g. writing packet routing for an internet routing) you typically cannot determine this mathematically. Typically, you find it empirically. Guess a value, try it, record the results, then pick another. You can make some educated guesses of where to start and what range to investigate based on the bottlenecks you are managing. ## Does anyone here have any tricks up her/his sleeve that could optimize this script further? I don't know if this would work and I have not tested it however, buffer sizes typically come in multiples of 2 since the under pinnings of computers are binary and word sizes are *typically* in multiples of two (but this isn't always the case!). For example, 64 bytes is more likely to be optimal than 60 bytes and 1024 is more likely to be optimal than 1000. One of the bottlenecks specific to this problem is that most browsers to date (Google Chrome being the first exception that I'm aware of) have javascript run serially within the same thread as the rest of the web page rendering mechanics. This means that the javascript does some work filling the buffer and then waits a long time until the document.write call returns. If the javascript was run as separate process, asynchronously, like in chrome, you would likely get a major speed up. This is of course attacking the source of the bottleneck not the algorithm that uses it, but sometimes that is the best option. Not nearly as succinct as I would like it, but hopefully it's a good starting point. Buffering is an important concept for all sorts of performance issues in computing. Having an good understanding of the underlying mechanisms that your code is using (both hardware and software) is extremely useful in avoiding or addressing performance issues.
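The batching trade-off this answer describes is independent of the browser and can be sketched in a few lines. In this illustrative Python sketch (not from the original article), a plain list stands in for the expensive `document.write` sink: the sink is called once per flushed batch instead of once per item.

```python
class BufferedWriter:
    """Accumulate items and hand them to an expensive sink in batches."""

    def __init__(self, sink, buffer_size=1000):
        self.sink = sink            # called once per flush, not per item
        self.buffer_size = buffer_size
        self._buf = []
        self.flushes = 0

    def write(self, item):
        self._buf.append(str(item))
        if len(self._buf) >= self.buffer_size:
            self.flush()

    def flush(self):
        if self._buf:
            self.sink(" ".join(self._buf))
            self._buf = []
            self.flushes += 1

chunks = []
w = BufferedWriter(chunks.append, buffer_size=1000)
for i in range(1, 10001):
    w.write(i)
w.flush()  # push out any remainder
```

Tuning `buffer_size` up trades memory for fewer sink calls; tuning it down does the reverse, which is the balancing act described above.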
I would bet the slowest thing in printing 1m numbers is the browser redrawing the page, so the fewer times you call document.write(), the better. Of course this needs to be balanced against large string concatenations (because they involve allocating and copying). Determining the right buffer size is found through experimentation. In other examples, buffering helps align along natural boundaries. Here are some examples * 32 bit CPUs can transfer 32 bits more efficiently. * TCP/IP packets have maximum sizes. * File I/O classes have internal buffers. * Images, like TIFFs, may be stored with their data in strips. Aligning with the natural boundaries of other systems can often have performance benefits.
What makes this the fastest JavaScript for printing 1 to 1,000,000 (separated by spaces) in a web browser?
[ "", "javascript", "optimization", "buffer", "" ]
Is there a way to prevent someone from faking a mime type on a file upload and then running a php/exe/etc... I have to make the file upload directory writeable and executable so that the files can be stored, but this allows anyone to run a script after. One thing I can do is add random data to the file name so they can't guess the file name after (since they still can't read from the directory to get a listing). I'm using file upload with php for the first time and I'm trying to cover all of the security issues.
The file upload directory should not be accessible to the web browser. I.e. don't allow somebody to upload a file, say "`remove_all_my_files.php`", and then execute it on your system by giving the url to it, say "`http://xample.com/uploads/remove_all_my_files.php`".
The information in `$_FILES` always comes from the client, so what you want to do is accept the file and scan it on the server. I'd recommend using [finfo](http://php.net/manual/en/ref.fileinfo.php), a PHP extension that makes this easy: ``` <?php // example :-) $finfo = finfo_open(FILEINFO_MIME); echo finfo_file($finfo, '/path/to/your/upload/file'); finfo_close($finfo); ?> ``` There is also an OO interface if you don't like the procedural one. If finfo is not an option, you could use the unix command [file](http://unixhelp.ed.ac.uk/CGI/man-cgi?file) to check. Also, many people suggest serving files through a wrapper. I am torn on this one; it can be a solution, but it's far from ideal because a) the files are still on your server and b) it's expensive to serve files like that.
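The same never-trust-the-client principle, sketched in Python for illustration: ignore the reported MIME type entirely and sniff the file's first bytes. The magic numbers below cover only a handful of formats; real code should use a full library such as libmagic (which is what finfo wraps).

```python
# Minimal content sniffing by magic number. The client-reported MIME
# type is never consulted; only the bytes decide.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
    b"%PDF-": "application/pdf",
}

def sniff(data):
    """Return a MIME type based on the leading bytes, falling back to a
    generic binary type that nothing will treat as executable."""
    for magic, mime in MAGIC.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"
```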
prevent mime faking on php uploads
[ "", "php", "security", "" ]
Mending a bug in our SAP BW web application, I need to call two javascript functions from the web framework library upon page load. The problem is that each of these functions reloads the page as a side-effect. In addition, I don't have access to modify these functions. Any great ideas on how to execute a piece of code on "real" page load, then another piece of code on the subsequent load caused by this function, and then execute no code the third reload? My best idea so far it to set a cookie on each go to determine what to run. I don't greatly love this solution. Anything better would be very welcome. And by the way, I do realize loading a page three times is absolutely ridiculous, but that's how we roll with SAP.
A cookie would work just fine. Or you could modify the query string each time with a "mode=x" or "load=x" parameter. This would present a problem if the user tries to bookmark the final page, though. If that's an option, the cookie solution is fine. I would guess they need cookies enabled to get that far in the app anyway?
A cookie, or pass a query string parameter indicating which javascript function has been run. We had to do something along these lines to trip out a piece of our software. That's really the best I got.
Executing JavaScript on page load selectively
[ "", "javascript", "" ]
Is there such a thing as an x86 assembler that I can call through C#? I want to be able to pass x86 instructions as a string and get a byte array back. If one doesn't exist, how can I make my own? To be clear - I don't want to *call* assembly code from C# - I just want to be able to assemble code from instructions and get the machine code in a byte array. I'll be injecting this code (which will be generated on the fly) to inject into another process altogether.
As part of some early prototyping I did on a personal project, I wrote quite a bit of code to do something like this. It doesn't take strings -- x86 opcodes are methods on an X86Writer class. Its not documented at all, and has nowhere near complete coverage, but if it would be of interest, I would be willing to open-source it under the New BSD license. **UPDATE:** Ok, I've created that project -- [Managed.X86](http://code.google.com/p/managed-x86/)
See this project: <https://github.com/ZenLulz/MemorySharp> This project wraps the FASM assembler, which is written in assembly and as a compiled as Microsoft coff object, wrapped by a C++ project, and then again wrapped in C#. This can do exactly what you want: given a string of x86/x64 assembly, this will produce the bytes needed. If you require the opposite, there is a port of the Udis86 disassembler, fully ported to C#, here: <https://github.com/spazzarama/SharpDisasm> This will convert an array of bytes into the instruction strings for x86/x64
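To make the "string in, bytes out" contract concrete, here is a deliberately tiny hand-rolled sketch in Python covering just two x86 instructions (near `RET` = `C3`, and `MOV EAX, imm32` = `B8` followed by a little-endian immediate). This is illustrative only; a real assembler like FASM handles the full instruction set, labels, and addressing modes.

```python
import struct

def assemble(line):
    """Assemble one instruction from a toy two-opcode subset:
    'ret' and 'mov eax, <imm32>'. Anything else is rejected."""
    line = line.strip().lower()
    if line == "ret":
        return b"\xc3"  # near return
    if line.startswith("mov eax,"):
        imm = int(line.split(",", 1)[1].strip(), 0)
        # B8 id -> MOV EAX, imm32, immediate stored little-endian
        return b"\xb8" + struct.pack("<I", imm & 0xFFFFFFFF)
    raise ValueError(f"unsupported instruction: {line!r}")

# "String of assembly in, byte array out":
code = b"".join(assemble(s) for s in ["mov eax, 0x2A", "ret"])
```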
Assembler library for .NET, assembling runtime-variable strings into machine code for injection
[ "", "c#", "assembly", "x86", "" ]
Is there any good reason to use C strings in C++ nowadays? My textbook uses them in examples at some points, and I really feel like it would be easier just to use a std::string.
The only reason I've had to use them is when interfacing with 3rd party libraries that use C style strings. There might also be esoteric situations where you would use C style strings for performance reasons, but more often than not, using methods on C++ strings is probably faster due to inlining and specialization, etc. You can use the [`c_str()`](http://en.cppreference.com/w/cpp/string/basic_string/c_str) method in many cases when working with those sorts of APIs, but you should be aware that the char \* returned is const, and you should not modify the string via that pointer. In those sorts of situations, you can still use a `vector<char>` instead, and at least get the benefit of easier memory management.
A couple more memory control notes: C strings are POD types, so they can be allocated in your application's read-only data segment. If you declare and define `std::string` constants at namespace scope, the compiler will generate additional code that runs before `main()` that calls the `std::string` constructor for each constant. If your application has many constant strings (e.g. if you have generated C++ code that uses constant strings), C strings may be preferable in this situation. Some implementations of `std::string` support a feature called SSO ("short string optimization" or "small string optimization") where the `std::string` class contains storage for strings up to a certain length. This increases the size of `std::string` but often significantly reduces the frequency of free-store allocations/deallocations, improving performance. If your implementation of `std::string` does not support SSO, then constructing an empty `std::string` on the stack will still perform a free-store allocation. If that is the case, using temporary stack-allocated C strings may be helpful for performance-critical code that uses strings. Of course, you have to be careful not to shoot yourself in the foot when you do this.
Why use C strings in C++?
[ "", "c++", "c", "stl", "string", "c-strings", "" ]
I'm writing an application to start and monitor other applications in C#. I'm using the System.Diagnostics.Process class to start applications and then monitor the applications using the Process.Responding property to poll the state of the application every 100 milliseconds. I use Process.CloseMainWindow to stop the application or Process.Kill to kill it if it's not responding. I've noticed a weird behaviour where sometimes the process object gets into a state where the Responding property always returns true even when the underlying process hangs in a loop and where it doesn't respond to CloseMainWindow. One way to reproduce it is to poll the Responding property right after starting the process instance. So for example ``` _process.Start(); bool responding = _process.Responding; ``` will reproduce the error state while ``` _process.Start(); Thread.Sleep(1000); bool responding = _process.Responding; ``` will work. Reducing the sleep period to 500 will introduce the error state again. Something in calling \_process.Responding too fast after starting seems to prevent the object from getting the right windows message queue handler. I guess I need to wait for \_process.Start to finish doing its asynchronous work. Is there a better way to wait for this than calling Thread.Sleep? I'm not too confident that the 1000 ms will always be enough.
Now, I need to check this out later, but I am sure there is a method that tells the thread to wait until it is ready for input. Are you monitoring GUI processes only? Isn't [Process.WaitForInputIdle](http://msdn.microsoft.com/en-us/library/kcdbkyt4.aspx) of any help to you? Or am I missing the point? :) ## Update Following a chit-chat on Twitter (or tweet-tweet?) with Mendelt I thought I should update my answer so the community is fully aware.. * `WaitForInputIdle` will only work on applications that have a GUI. * You specify the time to wait, and the method returns a bool if the process reaches an idle state within that time frame, you can obviously use this to loop if required, or handle as appropriate. Hope that helps :)
I think it may be better to enhance the check for \_process.Responding so that you only try to stop/kill the process if the Responding property returns false for more than 5 seconds (for example). I think you may find that quite often, applications may be "not responding" for a split second whilst they are doing more intensive processing. I believe a more lenient approach will work better, allowing a process to be "not responding" for a short amount of time, only taking action if it is repeatedly "not responding" for several seconds (or however long you want). Further note: The Microsoft documentation indicates that the Responding property specifically relates to the user interface, which is why a newly started process may not have its UI responding immediately.
System.Diagnostics.Process.Start weird behaviour
[ "", "c#", ".net", "asynchronous", "" ]
Is there an easier way to step through the code than to start the service through the Windows Service Control Manager and then attaching the debugger to the thread? It's kind of cumbersome and I'm wondering if there is a more straightforward approach.
If I want to quickly debug the service, I just drop a `Debugger.Break()` in there. When that line is reached, it will drop me back to VS. Don't forget to remove that line when you are done. **UPDATE:** As an alternative to `#if DEBUG` pragmas, you can also use the `Conditional("DEBUG_SERVICE")` attribute. ``` [Conditional("DEBUG_SERVICE")] private static void DebugMode() { Debugger.Break(); } ``` On your `OnStart`, just call this method: ``` public override void OnStart() { DebugMode(); /* ... do the rest */ } ``` There, the code will only be enabled during Debug builds. While you're at it, it might be useful to create a separate Build Configuration for service debugging.
I also think having a separate "version" for normal execution and as a service is the way to go, but is it really required to dedicate a separate command line switch for that purpose? Couldn't you just do: ``` public static void Main(string[] args) { if (!Environment.UserInteractive) { // Startup as service. } else { // Startup as application } } ``` That would have the "benefit" that you can just start your app via doubleclick (OK, if you really need that) and that you can just hit `F5` in Visual Studio (without the need to modify the project settings to include that `/console` Option). Technically, the `Environment.UserInteractive` checks if the `WSF_VISIBLE` Flag is set for the current window station, but is there any other reason where it would return `false`, apart from being run as a (non-interactive) service?
Easier way to debug a Windows service
[ "", "c#", ".net", "windows", "debugging", "windows-services", "" ]
Specifically I have a PHP command-line script that at a certain point requires input from the user. I would like to be able to execute an external editor (such as vi), and wait for the editor to finish execution before resuming the script. My basic idea was to use a temporary file to do the editing in, and to retrieve the contents of the file afterwards. Something along the lines of: ``` $filename = '/tmp/script_' . time() . '.tmp'; get_user_input ($filename); $input = file_get_contents ($filename); unlink ($filename); ``` I suspect that this isn't possible from a PHP command-line script, however I'm hoping that there's some sort of shell scripting trick that can be employed to achieve the same effect. Suggestions for how this can be achieved in other scripting languages are also more than welcome.
You can redirect the editor's output to the terminal: ``` system("vim > `tty`"); ```
I just tried this and it works fine in windows, so you can probably replicate with vi or whatever app you want on Linux. The key is that `exec()` hangs the php process while notepad (in this case) is running. ``` <?php exec('notepad c:\test'); echo file_get_contents('c:\test'); ?> $ php -r test.php ``` Edit: As your attempt shows and bstark pointed out, my notepad test fires up a new window so all is fine, but any editor that runs in console mode fails because it has no terminal to attach to. That being said, I tried on a Linux box with `exec('nano test'); echo file_get_contents('test');` and it doesn't fail as badly as vi, it just runs without displaying anything. I could type some stuff, press "ctrl-X, y" to close and save the file, and then the php script continued and displayed what I had written. Anyway.. I found the proper solution, so new answer coming in.
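The temp-file round trip from the question can also be sketched at the shell level. To keep the example runnable unattended, a fake `fake_editor` function stands in for the real editor; with a terminal you would instead run something like `vi "$tmpfile" > "$(tty)"` as in the accepted answer.

```shell
#!/bin/sh
# Temp-file round trip: let an "editor" fill a file, then read it back.
tmpfile=$(mktemp /tmp/script_XXXXXX.tmp)

# Fake editor so the demo runs non-interactively; with a real terminal
# you would run: vi "$tmpfile" > "$(tty)"
fake_editor() { printf 'user input\n' >> "$1"; }

fake_editor "$tmpfile"
input=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "got: $input"   # got: user input
```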
How do I escape a PHP script to an external editor and return afterwards?
[ "", "php", "shell", "command-line", "user-input", "" ]
I want to maintain a list of global messages that will be displayed to all users of a web app. I want each user to be able to mark these messages as read individually. I've created 2 tables; `messages (id, body)` and `messages_read (user_id, message_id)`. Can you provide an sql statement that selects the unread messages for a single user? Or do you have any suggestions for a better way to handle this? Thanks!
If the table definitions you mentioned are complete, you might want to include a date for each message, so you can order them by date. Also, this might be a slightly more efficient way to do the select: ``` SELECT id, message FROM messages LEFT JOIN messages_read ON messages_read.message_id = messages.id AND messages_read.[user_id] = @user_id WHERE messages_read.message_id IS NULL ```
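The LEFT JOIN / `IS NULL` anti-join can be exercised end-to-end with an in-memory SQLite database. Table and column names come from the question; the sample rows and user id are made up, and SQLite's `?` placeholder replaces the `@user_id` parameter syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE messages_read (user_id INTEGER, message_id INTEGER);
    INSERT INTO messages VALUES (1, 'welcome'), (2, 'maintenance tonight'), (3, 'new feature');
    INSERT INTO messages_read VALUES (1, 1);   -- user 1 has read message 1
""")

# Anti-join: a message is unread when the LEFT JOIN finds no matching
# messages_read row for this user, leaving r.message_id NULL.
unread = conn.execute("""
    SELECT m.id, m.body
    FROM messages m
    LEFT JOIN messages_read r
           ON r.message_id = m.id AND r.user_id = ?
    WHERE r.message_id IS NULL
    ORDER BY m.id
""", (1,)).fetchall()

print(unread)  # [(2, 'maintenance tonight'), (3, 'new feature')]
```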
Well, you could use ``` SELECT id FROM messages m WHERE m.id NOT IN( SELECT message_id FROM messages_read WHERE user_id = ?) ``` Where ? is passed in by your app.
Fetch unread messages, by user
[ "", "sql", "mysql", "database", "" ]
I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems that these functions are really good at solving.
Generators give you lazy evaluation. You use them by iterating over them, either explicitly with 'for' or implicitly by passing it to any function or construct that iterates. You can think of generators as returning multiple items, as if they return a list, but instead of returning them all at once they return them one-by-one, and the generator function is paused until the next item is requested. Generators are good for calculating large sets of results (in particular calculations involving loops themselves) where you don't know if you are going to need all results, or where you don't want to allocate the memory for all results at the same time. Or for situations where the generator uses *another* generator, or consumes some other resource, and it's more convenient if that happened as late as possible. Another use for generators (that is really the same) is to replace callbacks with iteration. In some situations you want a function to do a lot of work and occasionally report back to the caller. Traditionally you'd use a callback function for this. You pass this callback to the work-function and it would periodically call this callback. The generator approach is that the work-function (now a generator) knows nothing about the callback, and merely yields whenever it wants to report something. The caller, instead of writing a separate callback and passing that to the work-function, does all the reporting work in a little 'for' loop around the generator. For example, say you wrote a 'filesystem search' program. You could perform the search in its entirety, collect the results and then display them one at a time. All of the results would have to be collected before you showed the first, and all of the results would be in memory at the same time. Or you could display the results while you find them, which would be more memory efficient and much friendlier towards the user. 
The latter could be done by passing the result-printing function to the filesystem-search function, or it could be done by just making the search function a generator and iterating over the result. If you want to see an example of the latter two approaches, see os.path.walk() (the old filesystem-walking function with callback) and os.walk() (the new filesystem-walking generator.) Of course, if you really wanted to collect all results in a list, the generator approach is trivial to convert to the big-list approach: ``` big_list = list(the_generator) ```
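The lazy-evaluation point is easy to see with an infinite generator: the caller, not the generator, decides how many results ever get produced.

```python
from itertools import islice

def fib():
    """Infinite Fibonacci generator -- nothing is computed until asked for."""
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

# Only eight values are ever produced, even though fib() never terminates:
first8 = list(islice(fib(), 8))
print(first8)  # [1, 1, 2, 3, 5, 8, 13, 21]
```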
One of the reasons to use generators is to make the solution clearer for some kinds of problems. The other is to treat results one at a time, avoiding building huge lists of results that you would process separately anyway. If you have a fibonacci-up-to-n function like this: ``` # function version def fibon(n): a = b = 1 result = [] for i in xrange(n): result.append(a) a, b = b, a + b return result ``` You can more easily write the function as this: ``` # generator version def fibon(n): a = b = 1 for i in xrange(n): yield a a, b = b, a + b ``` The function is clearer. And if you use the function like this: ``` for x in fibon(1000000): print x, ``` in this example, if using the generator version, the whole 1000000 item list won't be created at all, just one value at a time. That would not be the case when using the list version, where a list would be created first.
What can you use generator functions for?
[ "", "python", "generator", "" ]
I need to allow other Python applications to register callback functions for events in my application. These need to have priorities associated with them (so a callback with a priority of 10 runs before a callback with a priority of 1) and callback functions need to be able to signal errors to the dispatcher. These are all lightweight callbacks running in the same process, so I don't need to send signals across process boundaries. Is there a good Python library to handle this, or do I need to write my own?
Are these other applications running in another address space? If so, you'll need to use an interprocess communication library like [D-BUS](http://www.freedesktop.org/wiki/Software/dbus). If you're just sending signals in the same process, try [PyDispatcher](http://pypi.python.org/pypi/PyDispatcher/2.0.1)
What platform are you running under? GObject is the basis of the GTK GUI that's widely-used under Linux, and it supports event loops with prioritizable events like this.
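If none of the libraries fit and you end up rolling your own, the core of a priority dispatcher is small. This is a hypothetical sketch (class and method names are made up): higher priorities run first, and callbacks signal errors by raising exceptions, which the dispatcher collects instead of aborting the chain.

```python
import heapq
import itertools

class Dispatcher:
    """Tiny in-process event dispatcher: priority 10 runs before priority 1."""

    def __init__(self):
        self._handlers = []
        self._order = itertools.count()   # tie-breaker keeps registration order

    def register(self, callback, priority=0):
        # negate priority so that higher priorities sort first
        heapq.heappush(self._handlers, (-priority, next(self._order), callback))

    def fire(self, *args, **kwargs):
        errors = []
        for _, _, callback in sorted(self._handlers):
            try:
                callback(*args, **kwargs)
            except Exception as exc:      # a callback signals an error by raising
                errors.append(exc)
        return errors


calls = []
d = Dispatcher()
d.register(lambda: calls.append("low"), priority=1)
d.register(lambda: calls.append("high"), priority=10)
d.register(lambda: 1 / 0, priority=5)     # a failing callback
errors = d.fire()
print(calls)         # ['high', 'low']
print(len(errors))   # 1
```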
What's the best dispatcher/callback library in Python?
[ "", "python", "" ]
Why does the following method hang? ``` public void pipe(Reader in, Writer out) { CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE); while( in.read(buf) >= 0 ) { out.append(buf.flip()); } } ```
Answering my own question: you have to call `buf.clear()` between `read`s. Presumably, `read` is hanging because the buffer is full. The correct code is ``` public void pipe(Reader in, Writer out) { CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE); while( in.read(buf) >= 0 ) { out.append(buf.flip()); buf.clear(); } } ```
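A runnable version of the fixed loop, piping through a deliberately tiny buffer to force several `read`/`flip`/`clear` cycles (the class name and buffer size are arbitrary):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;
import java.nio.CharBuffer;

public class PipeDemo {
    public static void pipe(Reader in, Writer out) throws IOException {
        CharBuffer buf = CharBuffer.allocate(8); // tiny on purpose: forces many iterations
        while (in.read(buf) >= 0) {
            buf.flip();          // switch from filling to draining
            out.append(buf);     // write out what was just read
            buf.clear();         // reset for the next read; omitting this hangs the loop
        }
    }

    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        pipe(new StringReader("hello, CharBuffer piping"), out);
        System.out.println(out.toString()); // hello, CharBuffer piping
    }
}
```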
I would assume that it is a deadlock. The in.read(buf) locks the CharBuffer and prevents the out.append(buf) call. That is assuming that CharBuffer uses locks (of some kind)in the implementation. What does the API say about the class CharBuffer? Edit: Sorry, some kind of short circuit in my brain... I confused it with something else.
Why does "piping" a CharBuffer hang?
[ "", "java", "io", "pipe", "" ]
I have a stored procedure in SQL Server 2005. The stored procedure creates temporary tables at the beginning and drops them at the end. I am now debugging the SP in VS 2005. Partway through the SP I would like to view the contents of the temporary table at run time. Can anybody help me with this? Thanks Vinod T
There are several kinds of temporary tables; I think you could use a kind that is not dropped after the SP uses it. Just make sure you don't call the same SP twice or you'll get an error trying to create an existing table. Or just drop the temp table after you see its contents. So instead of using a table variable (`@table`) just use `#table` or `##table` --- From <http://arplis.com/temporary-tables-in-microsoft-sql-server/>: ## Local Temporary Tables * Local temporary tables are prefixed with a single number sign (#) as the first character of their names, like (#table\_name). * Local temporary tables are visible only in the current session OR you can say that they are visible only to the current connection for the user. They are deleted when the user disconnects from instances of Microsoft SQL Server. ## Global temporary tables * Global temporary tables are prefixed with a double number sign (##) as the first character of their names, like (##table\_name). * Global temporary tables are visible to all sessions OR you can say that they are visible to any user after they are created. * They are deleted when all users referencing the table disconnect from Microsoft SQL Server.
Edit the stored procedure to temporarily select \* from the temp tables (possibly into another table or file, or just to the output pane) as it runs..? You can then change it back afterwards. If you can't mess with the original procedure, copy it and edit the copy.
View Temporary Table Created from Stored Procedure
[ "", "sql", "sql-server", "stored-procedures", "" ]
I was wondering if there was a way to use "find\_by\_sql" within a named\_scope. I'd like to treat custom sql as named\_scope so I can chain it to my existing named\_scopes. It would also be good for optimizing a sql snippet I use frequently.
While you can put any SQL you like in the conditions of a named scope, if you then call `find_by_sql` then the 'scopes' get thrown away. Given: ``` class Item # Anything you can put in an SQL WHERE you can put here named_scope :mine, :conditions=>'user_id = 12345 and IS_A_NINJA() = 1' end ``` This works (it just sticks the SQL string in there - if you have more than one they get joined with AND) ``` Item.mine.find :all => SELECT * FROM items WHERE (user_id = 12345 and IS_A_NINJA() = 1) ``` However, this doesn't ``` Item.mine.find_by_sql 'select * from items limit 1' => select * from items limit 1 ``` So the answer is "No". If you think about what has to happen behind the scenes then this makes a lot of sense. In order to build the SQL Rails has to know how it fits together. When you create normal queries, the `select`, `joins`, `conditions`, etc are all broken up into distinct pieces. Rails knows that it can add things to the conditions without affecting everything else (which is how `with_scope` and `named_scope` work). With `find_by_sql` however, you just give Rails a big string. It doesn't know what goes where, so it's not safe for it to go in and add the things it would need to add for the scopes to work.
This doesn't address exactly what you asked about, but you might investigate 'construct\_finder\_sql'. It lets you get the SQL of a named scope. ``` named_scope :mine, :conditions=>'user_id = 12345 and IS_A_NINJA() = 1' named_scope :additional { :conditions => mine.send(:construct_finder_sql,{}) + " additional = 'foo'" } ```
Encapsulating SQL in a named_scope
[ "", "sql", "ruby-on-rails", "named-scope", "" ]
When java was young, people were excited about writing applets. They were cool and popular, for a little while. Now, I never see them anymore. Instead we have flash, javascript, and a plethora of other web app-building technologies. Why don't sites use java applets anymore? I'm also curious: historically, why do you think this occurred? What could have been done differently to keep Java applets alive?
I think Java applets were overshadowed by Flash and ActionScript (pun unintended), being much easier to use for what Java Applets were being used at the time (animations + stateful applications). Flash's success in this respect in turn owes to its much smaller file sizes, as well as benefiting from the Sun vs. Microsoft suit that resulted in Microsoft removing the MSJVM from Internet Explorer, at a time of Netscape's demise and IE's heavy dominance.
1) AWT made for horrid UIs. Swing improved on that but it was too late, because... 2) Microsoft dropped support for Java in the browser (its proprietary MSJVM), and before it did, it would only support the last version it released, which was roughly JDK 1.1 compatible. 3) So today you cannot be sure that an applet will run on the majority of non-developer machines, unlike Flash. The same can be said of ActiveX by the way.
Where did all the java applets go?
[ "", "java", "applet", "" ]
In JavaScript, the "this" operator can refer to different things under different scenarios. Typically in a method within a JavaScript "object", it refers to the current object. But when used as a callback, it becomes a reference to the calling object. I have found that this causes problems in code, because if you use a method within a JavaScript "object" as a callback function you can't tell whether "this" refers to the current "object" or whether "this" refers to the calling object. Can someone clarify usage and best practices regarding how to get around this problem? ``` function TestObject() { TestObject.prototype.firstMethod = function(){ this.callback(); YAHOO.util.Connect.asyncRequest(method, uri, callBack); } TestObject.prototype.callBack = function(o){ // do something with "this" //when method is called directly, "this" resolves to the current object //when invoked by the asyncRequest callback, "this" is not the current object //what design patterns can make this consistent? this.secondMethod(); } TestObject.prototype.secondMethod = function() { alert('test'); } } ```
In JavaScript, `this` always refers to the object invoking the function that is being executed. So if the function is being used as an event handler, `this` will refer to the node that fired the event. But if you have an object and call a function on it like: ``` myObject.myFunction(); ``` Then `this` inside `myFunction` will refer to `myObject`. Does it make sense? To get around it you need to use closures. You can change your code as follows: ``` function TestObject() { TestObject.prototype.firstMethod = function(){ this.callback(); YAHOO.util.Connect.asyncRequest(method, uri, callBack); } var that = this; TestObject.prototype.callBack = function(o){ that.secondMethod(); } TestObject.prototype.secondMethod = function() { alert('test'); } } ```
Quick advice on best practices before I babble on about the magic *this* variable. If you want Object-oriented programming (OOP) in Javascript that closely mirrors more traditional/classical inheritance patterns, pick a framework, learn its quirks, and don't try to get clever. If you want to get clever, learn javascript as a functional language, and avoid thinking about things like classes. Which brings up one of the most important things to keep in mind about Javascript, and to repeat to yourself when it doesn't make sense. Javascript does not have classes. If something looks like a class, it's a clever trick. Javascript has **objects** (no derisive quotes needed) and **functions**. (that's not 100% accurate, functions are just objects, but it can sometimes be helpful to think of them as separate things) The *this* variable is attached to functions. Whenever you invoke a function, *this* is given a certain value, depending on how you invoke the function. This is often called the invocation pattern. There are four ways to invoke functions in javascript. You can invoke the function as a *method*, as a *function*, as a *constructor*, and with *apply*. ## As a Method A method is a function that's attached to an object ``` var foo = {}; foo.someMethod = function(){ alert(this); } ``` When invoked as a method, *this* will be bound to the object the function/method is a part of. In this example, this will be bound to foo. ## As A Function If you have a stand alone function, the *this* variable will be bound to the "global" object, almost always the *window* object in the context of a browser. ``` var foo = function(){ alert(this); } foo(); ``` **This may be what's tripping you up**, but don't feel bad. Many people consider this a bad design decision. Since a callback is invoked as a function and not as a method, that's why you're seeing what appears to be inconsistent behaviour. 
Many people get around the problem by doing something like, um, this ``` var foo = {}; foo.someMethod = function (){ var that=this; function bar(){ alert(that); } } ``` You define a variable *that* which points to *this*. Closure (a topic all its own) keeps `that` around, so if you call bar as a callback, it still has a reference. ## As a Constructor You can also invoke a function as a constructor. Based on the naming convention you're using (`TestObject`) this also **may be what you're doing and is what's tripping you up**. You invoke a function as a Constructor with the `new` keyword. ``` function Foo(){ this.confusing = 'hell yeah'; } var myObject = new Foo(); ``` When invoked as a constructor, a new Object will be created, and *this* will be bound to that object. Again, if you have inner functions and they're used as callbacks, you'll be invoking them as functions, and *this* will be bound to the global object. Use that `var that = this;` trick/pattern. Some people think the constructor/new keyword was a bone thrown to Java/traditional OOP programmers as a way to create something similar to classes. ## With the Apply Method. Finally, every function has a method (yes, functions are objects in Javascript) named `apply`. Apply lets you determine what the value of *this* will be, and also lets you pass in an array of arguments. Here's a useless example. ``` function foo(a,b){ alert(a); alert(b); alert(this); } var args = ['ah','be']; foo.apply('omg',args); ```
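Here is the `var that = this;` trick in action, using `forEach`'s callback, which is invoked "as a function" and therefore does not get the instance as `this`. `Tally` is a made-up example type:

```javascript
function Tally() {
  this.total = 0;
}

Tally.prototype.addAll = function (numbers) {
  var that = this;                 // capture the instance
  numbers.forEach(function (n) {
    // Inside this callback `this` is NOT the Tally instance,
    // so we go through the captured `that` instead.
    that.total += n;
  });
};

var t = new Tally();
t.addAll([1, 2, 3]);
console.log(t.total);  // 6
```

In newer code the same effect is usually achieved with `Function.prototype.bind` (or, in ES6, arrow functions), which fix `this` without a second variable.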
In Javascript, why is the "this" operator inconsistent?
[ "", "javascript", "" ]
Is there some way to do multi-threading in JavaScript?
See <http://caniuse.com/#search=worker> for the most up-to-date support info. The following was the state of support circa 2009. --- The words you want to google for are [JavaScript Worker Threads](http://www.google.com/search?q=JavaScript+worker+threads) Apart from [Gears](http://gears.google.com/) there's nothing available right now, but there's plenty of talk about how to implement this so I guess watch this question as the answer will no doubt change in the future. Here's the relevant documentation for Gears: [WorkerPool API](http://code.google.com/apis/gears/api_workerpool.html) WHATWG has a Draft Recommendation for worker threads: [Web Workers](http://www.whatwg.org/specs/web-workers/current-work/) And there's also Mozilla’s [DOM Worker Threads](https://wiki.mozilla.org/DOMWorkerThreads) --- **Update:** June 2009, current state of browser support for JavaScript threads **Firefox 3.5** has web workers. Some demos of web workers, if you want to see them in action: * [Simulated Annealing](http://blog.mozbox.org/post/2009/04/10/Web-Workers-in-action) ("Try it" link) * [Space Invaders](https://web.archive.org/web/20120406122342/https://developer.mozilla.org/web-tech/2008/12/04/web-workers-part-2) (link at end of post) * [MoonBat JavaScript Benchmark](http://www.yafla.com/dforbes/Web_Workers_and_You__A_Faster_More_Powerful_JavaScript_World) (first link) The Gears plugin can also be installed in Firefox.
**Safari 4**, and the **WebKit nightlies** have worker threads: * [JavaScript Ray Tracer](http://blog.owensperformance.com/2009/02/safari-4-worker-threads-javascript-domination/) **Chrome** has Gears baked in, so it can do threads, although it requires a confirmation prompt from the user (and it uses a different API to web workers, although it will work in any browser with the Gears plugin installed): * [Google Gears WorkerPool Demo](http://code.google.com/apis/gears/samples/hello_world_workerpool/hello_world_workerpool.html) (not a good example as it runs too fast to test in Chrome and Firefox, although IE runs it slow enough to see it blocking interaction) **IE8** and **IE9** can only do threads with the Gears plugin installed
# Different way to do multi-threading and Asynchronous in JavaScript Before HTML5 JavaScript only allowed the execution of one thread per page. There was some hacky way to simulate an asynchronous execution with *Yield*, `setTimeout()`, `setInterval()`, `XMLHttpRequest` or *event handlers* (see the end of this post for an example with *yield* and `setTimeout()`). But with HTML5 we can now use Worker Threads to parallelize the execution of functions. Here is an example of use. --- # Real multi-threading ## Multi-threading: JavaScript Worker Threads *HTML5* introduced Web Worker Threads (see: [browsers compatibilities](http://caniuse.com/#search=worker)) Note: IE9 and earlier versions do not support it. These worker threads are JavaScript threads that run in background without affecting the performance of the page. For more information about **Web Worker** [read the documentation](http://www.w3.org/TR/2009/WD-workers-20091029/) or [this tutorial](http://www.html5rocks.com/en/tutorials/workers/basics/). 
Here is a simple example with 3 Web Worker threads that count to MAX\_VALUE and show the current computed value in our page: ``` //As a worker normally take another JavaScript file to execute we convert the function in an URL: http://stackoverflow.com/a/16799132/2576706 function getScriptPath(foo){ return window.URL.createObjectURL(new Blob([foo.toString().match(/^\s*function\s*\(\s*\)\s*\{(([\s\S](?!\}$))*[\s\S])/)[1]],{type:'text/javascript'})); } var MAX_VALUE = 10000; /* * Here are the workers */ //Worker 1 var worker1 = new Worker(getScriptPath(function(){ self.addEventListener('message', function(e) { var value = 0; while(value <= e.data){ self.postMessage(value); value++; } }, false); })); //We add a listener to the worker to get the response and show it in the page worker1.addEventListener('message', function(e) { document.getElementById("result1").innerHTML = e.data; }, false); //Worker 2 var worker2 = new Worker(getScriptPath(function(){ self.addEventListener('message', function(e) { var value = 0; while(value <= e.data){ self.postMessage(value); value++; } }, false); })); worker2.addEventListener('message', function(e) { document.getElementById("result2").innerHTML = e.data; }, false); //Worker 3 var worker3 = new Worker(getScriptPath(function(){ self.addEventListener('message', function(e) { var value = 0; while(value <= e.data){ self.postMessage(value); value++; } }, false); })); worker3.addEventListener('message', function(e) { document.getElementById("result3").innerHTML = e.data; }, false); // Start and send data to our worker. worker1.postMessage(MAX_VALUE); worker2.postMessage(MAX_VALUE); worker3.postMessage(MAX_VALUE); ``` ``` <div id="result1"></div> <div id="result2"></div> <div id="result3"></div> ``` We can see that the three threads are executed in concurrency and print their current value in the page. They don't freeze the page because they are executed in the background with separated threads. 
--- ## Multi-threading: with multiple iframes Another way to achieve this is to use multiple *iframes*, each one will execute a thread. We can give the *iframe* some parameters by the URL and the *iframe* can communicate with his parent in order to get the result and print it back (the *iframe* must be in the same domain). **This example doesn't work in all browsers!** *iframes* usually run in the same thread/process as the main page (but Firefox and Chromium seem to handle it differently). Since the code snippet does not support multiple HTML files, I will just provide the different codes here: **index.html:** ``` //The 3 iframes containing the code (take the thread id in param) <iframe id="threadFrame1" src="thread.html?id=1"></iframe> <iframe id="threadFrame2" src="thread.html?id=2"></iframe> <iframe id="threadFrame3" src="thread.html?id=3"></iframe> //Divs that shows the result <div id="result1"></div> <div id="result2"></div> <div id="result3"></div> <script> //This function is called by each iframe function threadResult(threadId, result) { document.getElementById("result" + threadId).innerHTML = result; } </script> ``` **thread.html:** ``` //Get the parameters in the URL: http://stackoverflow.com/a/1099670/2576706 function getQueryParams(paramName) { var qs = document.location.search.split('+').join(' '); var params = {}, tokens, re = /[?&]?([^=]+)=([^&]*)/g; while (tokens = re.exec(qs)) { params[decodeURIComponent(tokens[1])] = decodeURIComponent(tokens[2]); } return params[paramName]; } //The thread code (get the id from the URL, we can pass other parameters as needed) var MAX_VALUE = 100000; (function thread() { var threadId = getQueryParams('id'); for(var i=0; i<MAX_VALUE; i++){ parent.threadResult(threadId, i); } })(); ``` --- # Simulate multi-threading ## Single-thread: emulate JavaScript concurrency with setTimeout() The 'naive' way would be to execute the function `setTimeout()` one after the other like this: ``` setTimeout(function(){ /* Some tasks 
*/ }, 0); setTimeout(function(){ /* Some tasks */ }, 0); [...] ``` But this method **does not work** because each task will be executed one after the other. We can simulate asynchronous execution by calling the function recursively like this: ``` var MAX_VALUE = 10000; function thread1(value, maxValue){ var me = this; document.getElementById("result1").innerHTML = value; value++; //Continue execution if(value<=maxValue) setTimeout(function () { me.thread1(value, maxValue); }, 0); } function thread2(value, maxValue){ var me = this; document.getElementById("result2").innerHTML = value; value++; if(value<=maxValue) setTimeout(function () { me.thread2(value, maxValue); }, 0); } function thread3(value, maxValue){ var me = this; document.getElementById("result3").innerHTML = value; value++; if(value<=maxValue) setTimeout(function () { me.thread3(value, maxValue); }, 0); } thread1(0, MAX_VALUE); thread2(0, MAX_VALUE); thread3(0, MAX_VALUE); ``` ``` <div id="result1"></div> <div id="result2"></div> <div id="result3"></div> ``` As you can see this second method is very slow and freezes the browser because it uses the main thread to execute the functions. --- ## Single-thread: emulate JavaScript concurrency with yield *[Yield](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/yield)* is a new feature in **ECMAScript 6**; it only works in the latest versions of Firefox and Chrome (in Chrome you need to enable **Experimental JavaScript** appearing in *chrome://flags/#enable-javascript-harmony*). > The yield keyword causes generator function execution to pause and the value of the expression following the yield keyword is returned to the generator's caller. It can be thought of as a generator-based version of the return keyword. A generator allows you to suspend execution of a function and resume it later.
A generator can be used to schedule your functions with a technique called [trampolining](http://www.neilmix.com/2007/02/07/threading-in-javascript-17/). Here is the example: ``` var MAX_VALUE = 10000; Scheduler = { _tasks: [], add: function(func){ this._tasks.push(func); }, start: function(){ var tasks = this._tasks; var length = tasks.length; while(length>0){ for(var i=0; i<length; i++){ var res = tasks[i].next(); if(res.done){ tasks.splice(i, 1); length--; i--; } } } } } function* updateUI(threadID, maxValue) { var value = 0; while(value<=maxValue){ yield document.getElementById("result" + threadID).innerHTML = value; value++; } } Scheduler.add(updateUI(1, MAX_VALUE)); Scheduler.add(updateUI(2, MAX_VALUE)); Scheduler.add(updateUI(3, MAX_VALUE)); Scheduler.start() ``` ``` <div id="result1"></div> <div id="result2"></div> <div id="result3"></div> ```
JavaScript and Threads
[ "", "javascript", "multithreading", "" ]
I've run into a problem where I make changes to a few JavaScript files that are referenced in an HTML file, but the browser doesn't see the changes. It holds onto the copy cached in the browser, even though the web server has a newer version. Not until I force the browser to clear the cache do I see the changes. Is this a web-server configuration? Do I need to set my JavaScript files to never cache? I've seen some interesting techniques in the [Google Web Toolkit](http://code.google.com/docreader/#p=google-web-toolkit-doc-1-5&s=google-web-toolkit-doc-1-5&t=FAQ_GWTApplicationFiles) where they actually create a **new** JavaScript file name any time an update is made. I believe this is to prevent proxies and browsers from keeping old versions of the JavaScript files with the same names. Is there a list of best practices somewhere?
We append a product build number to the end of all JavaScript (and CSS etc.) like so:

```
<script src="MyScript.js?4.0.8243">
```

The web server ignores everything after the question mark when serving the file, but to the browser each upgrade produces a brand-new URL, which forces a cache reload. This has the additional benefit that you can set far-future HTTP caching headers ("cache forever"), since a new build number busts the cache anyway.
> It holds onto the copy cached in the browser, even though the web server has a newer version. This is probably because the HTTP Expires / Cache-Control headers are set. <http://developer.yahoo.com/performance/rules.html#expires> I wrote about this here: <http://www.codinghorror.com/blog/archives/000932.html> > This isn't bad advice, per se, but it can cause huge problems if you get it wrong. In Microsoft's IIS, for example, the Expires header is always turned off by default, probably for that very reason. By setting an Expires header on HTTP resources, you're telling the client to *never check for new versions of that resource* -- at least not until the expiration date on the Expires header. **When I say never, I mean it -- the browser won't even *ask* for a new version; it'll just assume its cached version is good to go until the client clears the cache, or the cache reaches the expiration date.** Yahoo notes that they change the filename of these resources when they need them refreshed. > > All you're really saving here is the cost of the client pinging the server for a new version and getting a 304 not modified header back in the common case that the resource hasn't changed. That's not much overhead.. unless you're Yahoo. Sure, if you have a set of images or scripts that almost never change, definitely exploit client caching and turn on the Cache-Control header. Caching is critical to browser performance; every web developer should have a deep understanding of how HTTP caching works. But only use it in a surgical, limited way for those specific folders or files that can benefit. For anything else, the risk outweighs the benefit. It's certainly not something you want turned on as a blanket default for your entire website.. unless you like changing filenames every time the content changes.
Aggressive JavaScript caching
[ "", "javascript", "caching", "" ]
I'd sort of like to use SQLite from within C#.Net, but I can't seem to find an appropriate library. Is there one? An official one? Are there other ways to use SQLite than with a wrapper?
From [https://system.data.sqlite.org](https://system.data.sqlite.org/): > System.Data.SQLite is an ADO.NET adapter for SQLite. > > System.Data.SQLite was started by Robert Simpson. Robert still has commit privileges on this repository but is no longer an active contributor. Development and maintenance work is now mostly performed by the SQLite Development Team. The SQLite team is committed to supporting System.Data.SQLite long-term. "System.Data.SQLite is the original SQLite database engine and a complete ADO.NET 2.0 provider all rolled into a single mixed mode assembly. It is a complete drop-in replacement for the original sqlite3.dll (you can even rename it to sqlite3.dll). Unlike normal mixed assemblies, it has no linker dependency on the .NET runtime so it can be distributed independently of .NET." It even supports Mono.
Here are the ones I can find: * [managed-sqlite](http://code.google.com/p/managed-sqlite/) * SQLite.NET wrapper * [System.Data.SQLite](http://system.data.sqlite.org/) Sources: * [sqlite.org](http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers) * other posters
Is there a .NET/C# wrapper for SQLite?
[ "", "c#", ".net", "database", "sqlite", "" ]
I have an XML file that's the output from a database. I'm using the Java SAX parser to parse the XML and output it in a different format. The XML contains some invalid characters and the parser is throwing errors like 'Invalid Unicode character (0x5)' Is there a good way to strip all these characters out besides pre-processing the file line-by-line and replacing them? So far I've run into 3 different invalid characters (0x5, 0x6 and 0x7). It's a ~4gb database dump and we're going to be processing it a bunch of times, so having to wait an extra 30 minutes each time we get a new dump to run a pre-processor on it is going to be a pain, and this isn't the first time I've run into this issue.
I haven't used this personally but Atlassian made a command line XML cleaner that may suit your needs (it was made mainly for JIRA but XML is XML): > Download [atlassian-xml-cleaner-0.1.jar](http://repository.atlassian.com/atlassian-xml-cleaner/jars/atlassian-xml-cleaner-0.1.jar) > > Open a DOS console or shell, and locate the XML or ZIP backup file on your computer, here assumed to be called data.xml > > Run: > java -jar atlassian-xml-cleaner-0.1.jar data.xml > data-clean.xml > > This will write a copy of data.xml to data-clean.xml, with invalid characters removed.
I used Xalan `org.apache.xml.utils.XMLChar` class: ``` public static String stripInvalidXmlCharacters(String input) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < input.length(); i++) { char c = input.charAt(i); if (XMLChar.isValid(c)) { sb.append(c); } } return sb.toString(); } ```
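If you'd rather avoid the Xalan dependency, the same check can be done against the XML 1.0 valid character ranges directly. A minimal sketch (the `XmlCleaner` name is mine, and supplementary characters above U+FFFF are not handled here):

```java
public class XmlCleaner {

    // XML 1.0 valid character ranges (0x5, 0x6, 0x7 etc. fall outside them)
    static boolean isValidXmlChar(char c) {
        return c == 0x9 || c == 0xA || c == 0xD
            || (c >= 0x20 && c <= 0xD7FF)
            || (c >= 0xE000 && c <= 0xFFFD);
    }

    public static String strip(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (isValidXmlChar(c)) {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 0x5 and 0x7 are two of the invalid characters from the question
        System.out.println(XmlCleaner.strip("ab\u0005cd\u0007e")); // prints "abcde"
    }
}
```

For a multi-gigabyte dump you would apply the same check while streaming through the file rather than building one big string.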
Stripping Invalid XML characters in Java
[ "", "java", "xml", "" ]
I'm writing a small tool in C# which will need to send and receive data to/from a website using POST and json formatting. I've never done anything like this before in C# (or any language really) so I'm struggling to find some useful information to get me started. I've found some information on the WebRequest class in C# (specifically from [here](http://msdn.microsoft.com/en-us/library/debx8sh9.aspx)) but before I start diving into it, I wondered if this was the right tool for the job. I've found plenty of tools to convert data into the json format but not much else, so any information would be really helpful here in case I end up down a dead end.
WebRequest and more specifically the HttpWebRequest class is a good starting point for what you want to achieve. To create the request you will use the WebRequest.Create and cast the created request to an HttpWebRequest to actually use it. You will then create your post data and send it to the stream like: ``` HttpWebRequest req = (HttpWebRequest) WebRequest.Create("http://mysite.com/index.php"); req.Method = "POST"; req.ContentType = "application/x-www-form-urlencoded"; string postData = "var=value1&var2=value2"; req.ContentLength = postData.Length; StreamWriter stOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII); stOut.Write(postData); stOut.Close(); ``` Similarly you can read the response back by using the GetResponse method which will allow you to read the resultant response stream and do whatever else you need to do. You can find more info on the class at: <http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx>
[WebClient](http://msdn.microsoft.com/en-us/library/system.net.webclient(VS.80).aspx) is sometimes easier to use than [WebRequest](http://msdn.microsoft.com/en-us/library/system.net.webrequest.aspx). You may want to take a look at it. For JSON deserialization you are going to want to look at the [JavaScriptSerializer](http://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer_members.aspx) class. WebClient example: ``` using (WebClient client = new WebClient ()) { //manipulate request headers (optional) client.Headers.Add (HttpRequestHeader.UserAgent, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)"); //execute request and read response as string to console using (StreamReader reader = new StreamReader(client.OpenRead(targetUri))) { string s = reader.ReadToEnd (); Console.WriteLine (s); } } ``` **Marked as wiki in case someone wants to update the code**
Is WebRequest The Right C# Tool For Interacting With Websites?
[ "", "c#", ".net", "webrequest", "" ]
Do you know of any easy or simple way to make a map object (from the STL) persistent (i.e., write it to a file) so that you can recover its state when the program is run later? Thanks for your help.
I believe the [Boost Serialization](http://www.boost.org/doc/libs/1_36_0/libs/serialization/doc/index.html) library is capable of serializing std::map, but the standard library itself provides no means. Serialization is a great library with a lot of features and is easy to use and to extend to your own types.
If you want to do it manually, the same way you'd persist any other container structure, write out the individual parts to disk:

```
outputFile.Write(thisMap.size());
for (map<...>::const_iterator i = thisMap.begin(); i != thisMap.end(); ++i)
{
    outputFile.Write(i->first);
    outputFile.Write(i->second);
}
```

and then read them back in:

```
size_t mapSize = inputFile.Read();
for (size_t i = 0; i < mapSize; ++i)
{
    keyType key = inputFile.Read();
    valueType value = inputFile.Read();
    thisMap[key] = value;
}
```

Obviously, you'll need to make things work based on your map type and file I/O library (the `Write`/`Read` calls here are pseudocode). Otherwise try [boost serialization](http://www.boost.org/doc/libs/1_36_0/libs/serialization/doc/index.html), or [google's new serialization library](http://code.google.com/apis/protocolbuffers/).
Persistence of std::map in C++
[ "", "c++", "dictionary", "persistence", "" ]
In C#, I can do this:

```
class Program
{
    static void Main(string[] args)
    {
        List<Animal> animals = new List<Animal>();
        animals.Add(new Dog());
        animals.Add(new Cat());

        foreach (Animal a in animals)
        {
            Console.WriteLine(a.MakeNoise());
            a.Sleep();
        }
    }
}

public class Animal
{
    public virtual string MakeNoise() { return String.Empty; }
    public void Sleep()
    {
        Console.WriteLine(this.GetType().ToString() + " is sleeping.");
    }
}

public class Dog : Animal
{
    public override string MakeNoise() { return "Woof!"; }
}

public class Cat : Animal
{
    public override string MakeNoise() { return "Meow!"; }
}
```

Obviously, the output is (slightly paraphrased):

* Woof
* Dog is Sleeping
* Meow
* Cat is Sleeping

Since C# is often mocked for its verbose type syntax, how do you handle polymorphism/virtual methods in a duck-typed language such as Ruby?
edit: added more code for your updated question disclaimer: I haven't used Ruby in a year or so, and don't have it installed on this machine, so the syntax might be entirely wrong. But the concepts are correct. --- The exact same way, with classes and overridden methods: ``` class Animal def MakeNoise return "" end def Sleep print self.class.name + " is sleeping.\n" end end class Dog < Animal def MakeNoise return "Woof!" end end class Cat < Animal def MakeNoise return "Meow!" end end animals = [Dog.new, Cat.new] animals.each {|a| print a.MakeNoise + "\n" a.Sleep } ```
All the answers so far look pretty good to me. I thought I'd just mention that the whole inheritance thing is not entirely necessary. Excluding the "sleep" behaviour for a moment, we can achieve the whole desired outcome using duck-typing and omitting the need to create an Animal base class at all. Googling for "duck-typing" should yield any number of explanations, so here let's just say "if it walks like a duck and quacks like a duck..."

The "sleep" behaviour could be provided by using a mixin module, just as Array, Hash and other Ruby built-in classes include Enumerable. I'm not suggesting it's necessarily better, just a different and perhaps more idiomatically Ruby way of doing it.

```
module Animal
  def sleep
    puts self.class.name + " sleeps"
  end
end

class Dog
  include Animal
  def make_noise
    puts "Woof"
  end
end

class Cat
  include Animal
  def make_noise
    puts "Meow"
  end
end
```

You know the rest...
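Filling in "the rest", here is the complete runnable sketch of the mixin version, with a driver loop mirroring the C# example from the question:

```ruby
module Animal
  def sleep
    puts self.class.name + " sleeps"
  end
end

class Dog
  include Animal
  def make_noise
    puts "Woof"
  end
end

class Cat
  include Animal
  def make_noise
    puts "Meow"
  end
end

# Duck typing: anything that responds to make_noise and sleep works here,
# no shared base class required.
[Dog.new, Cat.new].each do |animal|
  animal.make_noise
  animal.sleep
end
```

Running it prints `Woof`, `Dog sleeps`, `Meow`, `Cat sleeps` in order.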
How do you do polymorphism in Ruby?
[ "", "c#", "ruby", "polymorphism", "" ]
What is an efficient way to implement a singleton design pattern in Java?
Use an enum: ``` public enum Foo { INSTANCE; } ``` Joshua Bloch explained this approach in his [Effective Java Reloaded](http://sites.google.com/site/io/effective-java-reloaded) talk at Google I/O 2008: [link to video](http://www.youtube.com/watch?v=pi_I7oD_uGI#t=28m50s). Also see slides 30-32 of his presentation ([effective\_java\_reloaded.pdf](https://14b1424d-a-62cb3a1a-s-sites.googlegroups.com/site/io/effective-java-reloaded/effective_java_reloaded.pdf?attachauth=ANoY7crKCOet2NEUGW7RV1XfM-Jn4z8YJhs0qJM11OhLRnFW_JbExkJtvJ3UJvTE40dhAciyWcRIeGJ-n3FLGnMOapHShHINh8IY05YViOJoZWzaohMtM-s4HCi5kjREagi8awWtcYD0_6G7GhKr2BndToeqLk5sBhZcQfcYIyAE5A4lGNosDCjODcBAkJn8EuO6572t2wU1LMSEUgjvqcf4I-Fp6VDhDvih_XUEmL9nuVJQynd2DRpxyuNH1SpJspEIdbLw-WWZ&attredirects=0)): > ### The Right Way to Implement a Serializable Singleton > > ``` > public enum Elvis { > INSTANCE; > private final String[] favoriteSongs = > { "Hound Dog", "Heartbreak Hotel" }; > public void printFavorites() { > System.out.println(Arrays.toString(favoriteSongs)); > } > } > ``` **Edit:** An [online portion of "Effective Java"](http://www.ddj.com/java/208403883?pgno=3) says: > "This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. While this approach has yet to be widely adopted, **a single-element enum type is the best way to implement a singleton**."
Depending on the usage, there are several "correct" answers. Since Java 5, the best way to do it is to use an enum:

```
public enum Foo {
    INSTANCE;
}
```

Pre-Java 5, the simplest case is:

```
public final class Foo {

    private static final Foo INSTANCE = new Foo();

    private Foo() {
        if (INSTANCE != null) {
            throw new IllegalStateException("Already instantiated");
        }
    }

    public static Foo getInstance() {
        return INSTANCE;
    }

    public Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException("Cannot clone instance of this class");
    }
}
```

Let's go over the code. First, you want the class to be final. In this case, I've used the `final` keyword to let the users know it is final. Then you need to make the constructor private to prevent users from creating their own Foo. Throwing an exception from the constructor prevents users from using reflection to create a second Foo. Then you create a `private static final Foo` field to hold the only instance, and a `public static Foo getInstance()` method to return it. The Java specification makes sure that the constructor is only called when the class is first used.

When you have a very large object or heavy construction code *and* also have other accessible static methods or fields that might be used before an instance is needed, then and only then you need to use lazy initialization.

You can use a `private static class` to load the instance. The code would then look like:

```
public final class Foo {

    private static class FooLoader {
        private static final Foo INSTANCE = new Foo();
    }

    private Foo() {
        if (FooLoader.INSTANCE != null) {
            throw new IllegalStateException("Already instantiated");
        }
    }

    public static Foo getInstance() {
        return FooLoader.INSTANCE;
    }
}
```

Since the line `private static final Foo INSTANCE = new Foo();` is only executed when the class FooLoader is actually used, this takes care of the lazy instantiation, and it is guaranteed to be thread-safe. 
When you also want to be able to serialize your object you need to make sure that deserialization won't create a copy. ``` public final class Foo implements Serializable { private static final long serialVersionUID = 1L; private static class FooLoader { private static final Foo INSTANCE = new Foo(); } private Foo() { if (FooLoader.INSTANCE != null) { throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { return FooLoader.INSTANCE; } @SuppressWarnings("unused") private Foo readResolve() { return FooLoader.INSTANCE; } } ``` The method `readResolve()` will make sure the only instance will be returned, even when the object was serialized in a previous run of your program.
What is an efficient way to implement a singleton pattern in Java?
[ "", "java", "singleton", "design-patterns", "" ]
I've been looking into [OSGi](http://www.osgi.org/Main/HomePage) recently and think it looks like a really good idea for modular Java apps. However, I was wondering how OSGi would work in a web application, where you don't just have code to worry about - also HTML, images, CSS, that sort of thing. At work we're building an application which has multiple 'tabs', each tab being one part of the app. I think this could really benefit from taking an OSGi approach - however I'm really not sure what would be the best way to handle all the usual web app resources. I'm not sure whether it makes any difference, but we're using JSF and [IceFaces](http://www.icefaces.org) (which adds another layer of problems because you have navigation rules and you have to specify all faces config files in your web.xml... doh!) Edit: according to [this thread](http://forums.sun.com/thread.jspa?messageID=3339002), faces-config.xml files can be loaded up from JAR files - so it is actually possible to have multiple faces-config.xml files included without modifying web.xml, provided you split up into JAR files. Any suggestions would be greatly appreciated :-)
You are very right in thinking there are synergies here, we have a modular web app where the app itself is assembled automatically from independent components (OSGi bundles) where each bundle contributes its own pages, resources, css and optionally javascript. We don't use JSF (Spring MVC here) so I can't comment on the added complexity of that framework in an OSGi context. Most frameworks or approaches out there still adhere to the "old" way of thinking: one WAR file representing your webapp and then many OSGi bundles and services but almost none concern themselves with the modularisation of the GUI itself. ## Prerequisites for a Design With OSGi the first question to solve is: what is your deployment scenario and who is the primary container? What I mean is that you can deploy your application on an OSGi runtime and use its infrastructure for everything. Alternatively, you can embed an OSGi runtime in a traditional app server and then you will need to re-use some infrastructure, specifically you want to use the AppServer's servlet engine. Our design is currently based on OSGi as the container and we use the HTTPService offered by OSGi as our servlet container. We are looking into providing some sort of transparent bridge between an external servlet container and the OSGi HTTPService but that work is ongoing. ## Architectural Sketch of a Spring MVC + OSGi modular webapp So the goal is not to just serve a web application over OSGi but to also apply OSGi's component model to the web UI itself, to make it composable, re-usable, dynamic. These are the components in the system: * 1 central bundle that takes care of bridging Spring MVC with OSGi, specifically it uses [code by Bernd Kolb](http://thegoodthebadtheugly.wordpress.com/2007/05/20/springosgi/) to allow you to register the Spring DispatcherServlet with OSGi as a servlet. 
* 1 custom URL Mapper that is injected into the DispatcherServlet and that provides the mapping of incoming HTTP requests to the correct controller.
* 1 central Sitemesh-based decorator JSP that defines the global layout of the site, as well as the central CSS and JavaScript libraries that we want to offer as defaults.
* Each bundle that wants to contribute pages to our web UI has to publish 1 or more Controllers as OSGi Services and make sure to **register its own servlet and its own resources (CSS, JSP, images, etc.)** with the OSGi HTTPService. The registering is done with the HTTPService and the key methods are `httpService.registerResources()` and `httpService.registerServlet()`.

When a web-UI-contributing bundle activates and publishes its controllers, they are automatically picked up by our central web UI bundle, and the aforementioned custom URL Mapper gathers these Controller services and keeps an up-to-date map of URLs to Controller instances.

Then when an HTTP request comes in for a certain URL, it finds the associated controller and dispatches the request there. The Controller does its business and then returns any data that should be rendered **and** the name of the view (a JSP in our case). This JSP is located in the Controller's bundle and can be accessed and rendered by the central web UI bundle exactly because we went and registered the resource location with the HTTPService.

Our central view resolver then merges this JSP with our central Sitemesh decorator and spits out the resulting HTML to the client.

I know this is rather high level, but without providing the complete implementation it's hard to fully explain.

Our key learning point for this was to look at [what Bernd Kolb did](http://thegoodthebadtheugly.wordpress.com/2007/05/20/springosgi/) with his example JPetstore conversion to OSGi and to use that information to design our own architecture. 
IMHO there is currently way too much hype and focus on getting OSGi somehow embedded in traditional Java EE based apps and very little thought being put into actually making use of OSGi idioms and its excellent component model to really allow the design of componentized web applications.
Check out [SpringSource dm Server](http://www.springsource.org/dmserver) - an application server built entirely in terms of OSGi and supporting modular web applications. It is available in free, open source, and commercial versions. You can start by deploying a standard WAR file and then gradually break your application into OSGi modules, or 'bundles' in OSGi-speak. As you might expect of SpringSource, the server has excellent support for the Spring framework and related Spring portfolio products. Disclaimer: I work on this product.
Modular web apps
[ "", "java", "web-applications", "jsf", "modularity", "" ]
``` $a = '{ "tag": "<b></b>" }'; echo json_encode( json_decode($a) ); ``` This outputs: ``` {"tag":"<b><\/b>"} ``` when you would think it would output exactly the input. For some reason json\_encode adds an extra slash.
Because it's part of the JSON standard <http://json.org/> > char > > ``` > any-Unicode-character- > except-"-or-\-or- > control-character > \" > \\ > \/ <---- see here? > \b > \f > \n > \r > \t > \u four-hex-digits > ```
use this: ``` echo json_encode($a,JSON_HEX_TAG) ``` Result will be: ``` ["\u003C\u003E"] ``` You can read this article to improve your knowledge about JSON\_ENCODE <http://php.net/manual/en/function.json-encode.php>
Can anyone explain this PHP code using json_encode and json_decode?
[ "", "php", "json", "" ]
Is there a more concise/standard idiom (e.g., a JDK method) for "piping" an input to an output in Java than the following? ``` public void pipe(Reader in, Writer out) { CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE); while (in.read(buf) >= 0 ) { out.append(buf.flip()); buf.clear(); } } ``` [EDIT] Please note the `Reader` and `Writer` are *given*. The correct answer will demonstrate how to take `in` and `out` and form a pipe (preferably with no more than 1 or 2 method calls). I will accept answers where `in` and `out` are an `InputStream` and an `OutputStream` (preferably with a conversion from/to `Reader`/`Writer`). I will not accept answers where either `in` or `out` is a *subclass* of `Reader`/`InputStream` or `Writer`/`OutputStrem`.
[IOUtils](http://commons.apache.org/io/) from the Apache Commons project has a number of [utility methods](http://commons.apache.org/io/api-1.4/org/apache/commons/io/IOUtils.html) that do exactly what you need.

`IOUtils.copy(in, out)` will perform a buffered copy of all input to the output. If there is more than one spot in your codebase that requires `Stream` or `Reader`/`Writer` handling, using IOUtils could be a good idea.
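For readers on a newer JDK: since Java 10 the platform itself provides `Reader.transferTo(Writer)`, so the one-call pipe needs no external library there. A quick sketch:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        Reader in = new StringReader("hello pipe");
        Writer out = new StringWriter();

        // JDK 10+: built-in equivalent of IOUtils.copy(in, out)
        in.transferTo(out);

        System.out.println(out.toString()); // prints "hello pipe"
    }
}
```

The same pattern works for byte streams via `InputStream.transferTo(OutputStream)` (available since Java 9).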
Take a look at [java.io.PipedInputStream](http://java.sun.com/javase/6/docs/api/java/io/PipedInputStream.html) and [PipedOutputStream](http://java.sun.com/javase/6/docs/api/java/io/PipedOutputStream.html), or [PipedReader](http://java.sun.com/javase/6/docs/api/java/io/PipedReader.html)/[PipedWriter](http://java.sun.com/javase/6/docs/api/java/io/PipedWriter.html) from the same package. From the Documentation of PipedInputStream: *A piped input stream should be connected to a piped output stream; the piped input stream then provides whatever data bytes are written to the piped output stream. Typically, data is read from a PipedInputStream object by one thread and data is written to the corresponding PipedOutputStream by some other thread. Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread. The piped input stream contains a buffer, decoupling read operations from write operations, within limits. A pipe is said to be broken if a thread that was providing data bytes to the connected piped output stream is no longer alive.*
Java idiom for "piping"
[ "", "java", "io", "pipe", "" ]
I need to enumerate all the user defined types created in a `SQL Server` database with `CREATE TYPE`, and/or find out whether they have already been defined. With tables or stored procedures I'd do something like this: ``` if exists (select * from dbo.sysobjects where name='foobar' and xtype='U') drop table foobar ``` However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in `sysobjects`. Can anyone enlighten me?
Types and UDTs don't appear in sys.objects. You should be able to get what you're looking for with the following: ``` select * from sys.types where is_user_defined = 1 ```
Although the post is old, I found it useful to use a query similar to this. You may not find some of the formatting useful, but I wanted the fully qualified type name and I wanted to see the columns listed in order. You can just remove all of the SUBSTRING stuff to just get the column name by itself. ``` SELECT USER_NAME(TYPE.schema_id) + '.' + TYPE.name AS "Type Name", COL.column_id, SUBSTRING(CAST(COL.column_id + 100 AS char(3)), 2, 2) + ': ' + COL.name AS "Column", ST.name AS "Data Type", CASE COL.Is_Nullable WHEN 1 THEN '' ELSE 'NOT NULL' END AS "Nullable", COL.max_length AS "Length", COL.[precision] AS "Precision", COL.scale AS "Scale", ST.collation AS "Collation" FROM sys.table_types TYPE JOIN sys.columns COL ON TYPE.type_table_object_id = COL.object_id JOIN sys.systypes AS ST ON ST.xtype = COL.system_type_id where TYPE.is_user_defined = 1 ORDER BY "Type Name", COL.column_id ```
How do I list user defined types in a SQL Server database?
[ "", "sql", "sql-server", "t-sql", "" ]
How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: ``` 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 ``` Edit: I forgot to mention - I'm still on Java 1.4 - sorry!
If you use [DecimalFormat](http://java.sun.com/j2se/1.4.2/docs/api/java/text/DecimalFormat.html) and specify # in the pattern it only displays the value if it is not zero. See my question [How do I format a number in java?](https://stackoverflow.com/questions/50532/how-do-i-format-a-number-in-java) Sample Code ``` DecimalFormat format = new DecimalFormat("###.##"); double[] doubles = {123.45, 99.0, 23.2, 45.0}; for(int i=0;i<doubles.length;i++){ System.out.println(format.format(doubles[i])); } ```
Check out the [DecimalFormat](http://java.sun.com/j2se/1.5.0/docs/api/java/text/DecimalFormat.html) class, e.g. new DecimalFormat("0.##").format(99.0) will return "99".
Conditional Number Formatting In Java
[ "", "java", "java1.4", "" ]
We would like to give access to some of our EJBs from Excel. The goal is to give an API usable from VBA. Our EJBs are mostly Stateless Session Beans that do simple CRUD operations with POJOs. Some possible solutions: * Exposing the EJBs as WebServices and create a VB/C# dll wrapping them, * Using Corba to access the EJBs from C#, * Creating a COM Library that uses Java to access the EJBs, Pointers to frameworks for these solution or other ideas are welcome.
You could take a look at [IIOP.NET](http://www.codeproject.com/KB/cs/iiop_net_and_ejb.aspx), which addresses this issue.
If you have a fairly recent EJB container, the cheapest and easiest option would be to expose your beans as web services and call them from VB/C#. This doesn't require any extra tools or libraries.
Best solution for using EJBs from Excel
[ "", "java", "excel", "ejb", "corba", "vba", "" ]
I'm in the process of designing a PHP-based content management system for personal use and eventually to be distributed. I know there are a lot of CMS's already out there, but I really haven't found one that meets my all of my needs and I also would like to have the learning experience. Security is a large focus, as are extensibility and ease of use. For those of you out there who have built your own CMS, what advice can you offer? What features are essential for a core? What are must have add-ons? What did you wish you knew before starting? What's the biggest potential roadblock/problem? Any and all advice is welcome. Edit: Any advice on marketing do's and don't's would also be appreciated.
Well, building your own CMS actually implies that it is not an enterprise-level product. What this means is that you will not be able to actually implement all features that make CMS users happy. Not even most features. I want to clarify that by CMS I actually mean a platform for creating web applications or web sites, not a blogging platform or a scaled-down version. From personal experience I can tell you the things I want most in a CMS. 1. Extensible - provide a clean and robust API so that a programmer can do most things through code, instead of using the UI 2. Easy page creation and editing - use templates, have several URLs for a single page, provide options for URL rewriting 3. Make it component-based. Allow users to add custom functionality. Make it easy for someone to add his code to do something 4. Make it SEO-friendly. This includes metadata, again URL rewriting, good sitemap, etc. Now there are these enterprise features that I also like, but i doubt you'll have the desire to dive into their implementation from the beginning. They include workflow (an approval process for content-creation, customizable), Built-in modules for common functionality (blogs, e-commerce, news), ability to write own modules, permissions for different users, built-in syndication, etc. After all I speak from a developer's point of view and my opinion might not be mainstream, so you have to decide on your own in the end. Just as ahockley said - you have to know why you need to build your own CMS.
In building a few iterations of CMSs, some of the key things turned out to be: * Having a good rich text editor - end-users really don't want to do HTML. Consensus seems to be that FCKEditor is the best - there have been a couple of questions on this here recently * Allowing people to add new pages and easily create a menu/tab structure or cross-link between pages * Determining how to fit content into a template and/or allowing users to develop the templates themselves * Figuring out how (and whether) to let people paste content from Microsoft Word - converting magic quotes, emdashes and the weirdish Wordish HTML * Including a spellchecking feature (though Firefox has something built-in and iespell may do the job for IE) Some less critical but useful capabilities are: - Ability to dynamically create readable and SEO-friendly URLs (the StackOverflow way is not bad) - Ability to show earlier versions of content after it's modified - Ability to have a sandbox for content to let it be proofread or checked before release - Handling of multiple languages and non-English/non-ASCII characters
Advice on building a distributed CMS?
[ "", "php", "content-management-system", "" ]
What's the fastest way to count the number of keys/properties of an object? Is it possible to do this without iterating over the object? I.e., without doing: ``` var count = 0; for (k in myobj) if (myobj.hasOwnProperty(k)) ++count; ``` (Firefox did provide a magic `__count__` property, but this was removed somewhere around version 4.)
To do this in any *[ES5](https://en.wikipedia.org/wiki/ECMAScript#5th_Edition)-compatible environment*, such as [Node.js](http://nodejs.org), Chrome, [Internet Explorer 9+](https://en.wikipedia.org/wiki/Internet_Explorer_9), Firefox 4+, or Safari 5+: ``` Object.keys(obj).length ``` * [Browser compatibility](https://kangax.github.io/compat-table/es5/) * [Object.keys documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) (includes a method you can add to non-ES5 browsers)
You could use this code: ``` if (!Object.keys) { Object.keys = function (obj) { var keys = [], k; for (k in obj) { if (Object.prototype.hasOwnProperty.call(obj, k)) { keys.push(k); } } return keys; }; } ``` Then you can use this in older browsers as well: ``` var len = Object.keys(obj).length; ```
How to efficiently count the number of keys/properties of an object in JavaScript
[ "", "javascript", "performance", "properties", "count", "key", "" ]
Have you managed to get Aptana Studio debugging to work? I tried following this, but I don't see `Windows -> Preferences -> Aptana -> Editors -> PHP -> PHP Interpreters` in my menu (I have the `PHP plugin` installed), and any attempt to set up the servers menu gives me a "socket error" when I try to debug. `Xdebug` is installed, confirmed through `phpinfo()`.
I've been using ZendDebugger with Eclipse (on OS X) for a while now and it works great! Here's the recipe that's worked well for me. 1. install Eclipse PDT via the "All in one" package at: <http://www.zend.com/en/community/pdt> 2. install ZendDebugger.so (<http://www.zend.com/en/community/pdt>) 3. configure your php.ini w/ the ZendDebugger extension (info below) Configuring ZendDebugger: 1. edit php.ini 2. add the following: [Zend] zend\_extension=/full/path/to/ZendDebugger.so zend\_debugger.allow\_hosts=127.0.0.1 zend\_debugger.expose\_remotely=always zend\_debugger.connector\_port=10013 Now run "php -m" in the command line to output all the installed modules. If you see the following then it's installed just fine ``` [Zend Modules] Zend Debugger ``` Now restart Apache so that it reloads PHP w/ the ZendDebugger. Create a dummy page with `<?php phpinfo(); ?>` in it and examine the output to make sure the PHP Apache module picked up ZendDebugger as well. If it's set up right you will see something like the following text somewhere in phpinfo()'s output. > with Zend Debugger v5.2.14, Copyright (c) 1999-2008, by Zend Technologies OK - but you wanted Aptana Studio... at this point I install the Aptana Studio Plugin into the PDT build of Eclipse. The instructions for that are at: <http://www.aptana.com/docs/index.php/Plugging_Aptana_into_an_existing_Eclipse_configuration> That setup has served me well for a while - hopefully it helps you too -Arin
This is not related to Aptana Studio, but if you are looking for a PHP XDebug debugger client on OS X, you can try [MacGDBp](http://www.bluestatic.org/software/macgdbp/) (Free/GPL).
PHP debugging with Aptana Studio and Xdebug or Zend Debugger on OS X
[ "", "php", "debugging", "macos", "aptana", "xdebug", "" ]
There are a lot of new features that came with the .Net Framework 3.5. Most of the posts and info on the subject list stuff about new 3.5 features and C# 3 changes at the same time. But C# 3 can be used without .Net 3.5. Does anyone know of a good post describing the changes to the language? (Besides the boring, explicit official specs at [MSDN](http://msdn.microsoft.com/en-us/library/bb308966.aspx) that is.)
Update: I can certainly understand. Eric Lippert has some more in-depth posts; [check them out](http://blogs.msdn.com/ericlippert/archive/tags/C_2300_/Lambda+Expressions/default.aspx). --- I liked the series of posts by [scottgu](http://weblogs.asp.net/scottgu/archive/2007/03/08/new-c-orcas-language-features-automatic-properties-object-initializers-and-collection-initializers.aspx) on the new language features. Some more info here as well: <http://www.danielmoth.com/Blog/2007/11/top-10-things-to-know-about-visual.html>, especially the section on language features.
There's a ["quick and dirty" list on my C# in Depth site](http://csharpindepth.com/Articles/General/BluffersGuide3.aspx) (which is also slightly tongue in cheek). To respond somewhat to Charles Graham's post, I have an [article](http://csharpindepth.com/Articles/Chapter1/Versions.aspx) about how applicable the language features are when targeting .NET 2.0. Just as a blatant plug, if you're interested in the language rather than the framework, [C# in Depth](http://www.manning.com/affiliate/idevaffiliate.php?id=876_230) is about as close to a "language only" book as I've seen. It's also divided into two parts (after the introduction): new features in C# 2, and new features in C# 3.
C# 3 new feature posts (and not about .Net 3.5 features)
[ "", "c#", ".net", ".net-3.5", "c#-3.0", "" ]
**Problem**. I need a way to find the Starteam server time through the Starteam Java SDK 8.0. The server version is 8.0.172, so the method `Server.getCurrentTime()` is not available, since it was added only in server version 9.0. **Motivation**. My application needs to use views at specific dates. So if there's some difference in system time between the client (where the app is running) and the server, then the obtained views are not accurate. In the worst case the client's requested date is in the future for the server, so the operation results in an exception.
After some investigation I haven't found any cleaner solution than using a temporary item. My app requests the item's time of creation and compares it with local time. Here's the method I use to get server time: ``` public Date getCurrentServerTime() { Folder rootFolder = project.getDefaultView().getRootFolder(); Topic newItem = (Topic) Item.createItem(project.getTypeNames().TOPIC, rootFolder); newItem.update(); newItem.remove(); newItem.update(); return newItem.getCreatedTime().createDate(); } ```
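Building on the temporary-item trick, the measured server time reduces to simple date arithmetic on the client; a minimal sketch (the class and method names are illustrative, not part of the StarTeam SDK):

```java
import java.util.Date;

// Illustrative helper: sample the server clock once (for example by
// comparing the temporary item's created time with new Date()), then
// shift every client-side date into the server's timeline.
class ServerClock {

    // A positive result means the server's clock is ahead of the client's.
    public static long offsetMillis(Date serverTime, Date localTime) {
        return serverTime.getTime() - localTime.getTime();
    }

    // Apply the stored offset to a locally chosen date before
    // asking the server for a view at that date.
    public static Date toServerTime(Date localDate, long offsetMillis) {
        return new Date(localDate.getTime() + offsetMillis);
    }
}
```

You would compute `offsetMillis` once per session and run every user-supplied date through `toServerTime` before requesting a view, which avoids the "date in the future" exception described in the question.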
If your StarTeam server is on a Windows box and your code will be executing on a Windows box, you could shell out and execute the **NET time** command to fetch the time on that machine and then compare it to the local time. ``` net time \\my_starteam_server_machine_name ``` which should return: ``` "Current time at \\my_starteam_server_machine_name is 10/28/2008 2:19 PM" "The command completed successfully." ```
Calculate client-server time difference in Borland Starteam server 8
[ "", "java", "time", "starteam", "" ]
Is it possible to execute Linux commands with Java? I am trying to create a web servlet to allow FTP users to change their passwords without SSH login access. I would like to execute the following commands: ``` # adduser -s /sbin/nologin clientA -d /home/mainclient/clientA # passwd clientA # cd /home/mainclient; chgrp -R mainclient clientA # cd /home/mainclient/clientA; chmod 770 . ```
Check out [this](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/ProcessBuilder.html). However, doing what you are talking about is way outside spec, and I wouldn't recommend it. To get it to work you are going to either run your app server as root, or use some other mechanism to give the user the app server runs as permission to execute these privileged commands. One small screw-up somewhere and you are "owned".
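As a concrete illustration of the `ProcessBuilder` approach linked above, here is a minimal sketch (the helper class is made up, and it deliberately runs a harmless command; wiring it to `adduser`/`passwd` would still require the privileges discussed):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.List;

// Illustrative helper, not production code: runs a command without a
// shell, prints its merged output, and returns the exit code.
class CommandRunner {

    public static int run(List<String> command) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true); // fold stderr into stdout
            Process p = pb.start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);
                }
            }
            return p.waitFor();
        } catch (Exception e) {
            // Could not start or was interrupted; -1 signals failure here.
            return -1;
        }
    }

    public static void main(String[] args) {
        // "id -un" prints the user the JVM runs as; useful to confirm
        // which account would actually execute adduser/passwd.
        CommandRunner.run(Arrays.asList("id", "-un"));
    }
}
```

Note that `passwd` normally reads the new password interactively; in practice you would feed it through `p.getOutputStream()` or use a non-interactive tool such as `chpasswd`, and the security caveats above apply unchanged.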
Use: ``` Runtime.getRuntime().exec("Command"); ``` where `Command` is the command string you want to execute.
Linux commands from Java
[ "", "java", "linux", "" ]
Where can I find the specifications for the various versions of the C# language? *(EDIT: it appears people voted down because you could 'google' this; however, my original intent was to put an answer with information not found on Google. I've accepted the answer with the best Google results, as they are relevant to people who haven't paid for VS)*
[Microsoft's version](http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx) (probably what you want) [The formal standardised versions](http://www.ecma-international.org/publications/standards/Ecma-334.htm) (via ECMA, created just so they could say it was "standardised" by some external body. Even though ECMA "standards" are effectively "Insert cash, vend standard"). [Further ECMA standards](http://www.ecma-international.org/publications/standards/Stnindex.htm)
If you have Visual Studio 2005 or 2008, they are already on your machine! For 2005 (English): ``` .\Microsoft Visual Studio 8\VC#\Specifications\1033 ``` For 2008 (English): ``` .\Microsoft Visual Studio 9.0\VC#\Specifications\1033 ``` For 2010 (English): ``` .\Microsoft Visual Studio 10.0\VC#\Specifications\1033 ``` For 2012 (English): ``` .\Microsoft Visual Studio 11.0\VC#\Specifications\1033 ```
Where can you find the C# Language Specifications?
[ "", "c#", "visual-studio", "" ]
I want to set up a mechanism for tracking DB schema changes, such as the one described in [this answer](https://stackoverflow.com/questions/1607/mechanisms-for-tracking-db-schema-changes#1666): > For every change you make to the > database, you write a new migration. > Migrations typically have two methods: > an "up" method in which the changes > are applied and a "down" method in > which the changes are undone. A single > command brings the database up to > date, and can also be used to bring > the database to a specific version of > the schema. My question is the following: Is every DDL command in an "up" method reversible? In other words, can we always provide a "down" method? Can you imagine any DDL command that cannot be "down"ed? Please do not consider the typical data migration problem where during the "up" method we have loss of data: e.g. changing a field type from `datetime` (`DateOfBirth`) to `int` (`YearOfBirth`) means we are losing data that cannot be restored.
In SQL Server, every DDL command that I know of forms an up/down pair.
Other than loss of data, every migration I've ever done is reversible. That said, Rails offers a way to mark a migration as "destructive": > Some transformations are destructive > in a manner that cannot be reversed. > Migrations of that kind should raise > an ActiveRecord::IrreversibleMigration > exception in their down method. See the API documentation [here](http://api.rubyonrails.org/classes/ActiveRecord/Migration.html).
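The up/down pairing itself can be sketched in code. The following is an illustrative Java model (not Rails), with an in-memory list of column names standing in for a real schema:

```java
import java.util.List;

// Each migration knows how to apply itself and how to undo itself.
interface Migration {
    void up(List<String> schema);
    void down(List<String> schema);
}

// A reversible change: the down method is the exact inverse of up.
class AddEmailColumn implements Migration {
    public void up(List<String> schema) { schema.add("email"); }
    public void down(List<String> schema) { schema.remove("email"); }
}

// A destructive change cannot be undone; all its down method can do is
// signal that fact (compare ActiveRecord::IrreversibleMigration).
class DropLegacyTable implements Migration {
    public void up(List<String> schema) { schema.remove("legacy"); }
    public void down(List<String> schema) {
        throw new UnsupportedOperationException("irreversible migration");
    }
}
```

A migration runner would walk a list of such objects calling `up` in order, or `down` in reverse order to roll back, stopping with an error when it reaches an irreversible one.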
Is every DDL SQL command reversible? [database version control]
[ "", "sql", "version-control", "migration", "ddl", "" ]
Assume the following class: ``` public class MyEnum: IEnumerator { private List<SomeObject> _myList = new List<SomeObject>(); ... } ``` It is necessary to implement the IEnumerator methods in MyEnum. But is it possible to 'delegate' or redirect the implementation for IEnumerator directly to \_myList without needing to implement the IEnumerator methods?
**Method 1:** Continue to use encapsulation and forward calls to the List implementation. ``` class SomeObject { } class MyEnum : IEnumerable<SomeObject> { private List<SomeObject> _myList = new List<SomeObject>(); public void Add(SomeObject o) { _myList.Add(o); } public IEnumerator<SomeObject> GetEnumerator() { return _myList.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } class Program { static void Main(string[] args) { MyEnum a = new MyEnum(); a.Add(new SomeObject()); foreach (SomeObject o in a) { Console.WriteLine(o.GetType().ToString()); } Console.ReadLine(); } } ``` **Method 2:** Inherit from List implementation you get that behavior for free. ``` class SomeObject { } class MyEnum : List<SomeObject> { } class Program { static void Main(string[] args) { MyEnum a = new MyEnum(); a.Add(new SomeObject()); foreach (SomeObject o in a) { Console.WriteLine(o.GetType().ToString()); } Console.ReadLine(); } } ``` **Method 1** allows for better sandboxing as there is no method that will be called in List without MyEnum knowledge. For least effort **Method 2** is preferred.
You can do this: ``` public class MyEnum : IEnumerable { private List<SomeObject> _myList = new List<SomeObject>(); public IEnumerator GetEnumerator() { return this._myList.GetEnumerator(); } } ``` The reason is simple: your class can contain several fields which are collections, so the compiler/environment can't know which field should be used for implementing "IEnumerator". **EDIT:** I agree with @pb - you should implement the `IEnumerator<SomeObject>` interface.
How to delegate interface implementation to another class in C#
[ "", "c#", "interface", "" ]
What are the consequences of running a Java class file compiled in JDK 1.4.2 on JRE 1.6 or 1.5?
The [Java SE 6 Compatibility](http://java.sun.com/javase/6/webnotes/compatibility.html) page lists the compatibility of Java SE 6 to Java SE 5.0. Furthermore, there is a link to [Incompatibilities in J2SE 5.0 (since 1.4.2)](http://java.sun.com/j2se/1.5.0/compatibility.html) as well. By looking at the two documents, it should be possible to find out whether there are any incompatibilities between programs written under JDK 1.4.2 and Java SE 6. In terms of the binary compatibility of the Java class files, the Java SE 6 Compatibility page has the following to say: > Java SE 6 is upwards binary-compatible > with J2SE 5.0 except for the > [incompatibilities](http://java.sun.com/javase/6/webnotes/compatibility.html#incompatibilities) listed below. Except > for the noted incompatibilities, class > files built with version 5.0 compilers > will run correctly in JDK 6. So, in general, as [workmad3](https://stackoverflow.com/questions/114457/consequences-of-running-a-java-class-file-on-different-jres#114473) noted, Java class files compiled on an older JDK will still be compatible with the newest version. Furthermore, as noted by [Desty](https://stackoverflow.com/questions/114457/consequences-of-running-a-java-class-file-on-different-jres#114480), any changes to the API are generally deprecated rather than removed. From the [Source Compatibilities](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1810.html) section: > Deprecated APIs are interfaces that > are supported only for backwards > compatibility. The javac compiler > generates a warning message whenever > one of these is used, unless the > -nowarn command-line option is used. It is recommended that programs be > modified to eliminate the use of > deprecated APIs, although there are no > current plans to remove such APIs > entirely from the system with the > exception of JVMDI and JVMPI. 
There is a long listing of performance improvements in the [Java SE 6 Performance White Paper](http://java.sun.com/performance/reference/whitepapers/6_performance.html).
Java classes are **forward** compatible, e.g. classes generated using a 1.5 compiler will be loaded and executed successfully **without any problems** on JRE 1.6. Generally, classes generated by today's Java compilers will be compatible with future JREs (for example, Java 7). The inverse does not hold: you cannot run classes generated by 1.6 on older JREs (1.3, 1.4, etc.).
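The reason the inverse fails is that every `.class` file records a class-file version in its header, and a JVM refuses files newer than it supports (major version 48 = JDK 1.4, 49 = J2SE 5.0, 50 = Java SE 6). A small illustrative parser for that header (the class name is made up):

```java
// Reads the big-endian major version from the start of a .class file:
// bytes 0-3 are the 0xCAFEBABE magic, bytes 4-5 the minor version,
// bytes 6-7 the major version.
class ClassFileVersion {

    public static int majorVersion(byte[] header) {
        if (header.length < 8
                || (header[0] & 0xFF) != 0xCA || (header[1] & 0xFF) != 0xFE
                || (header[2] & 0xFF) != 0xBA || (header[3] & 0xFF) != 0xBE) {
            throw new IllegalArgumentException("not a class file");
        }
        return ((header[6] & 0xFF) << 8) | (header[7] & 0xFF);
    }
}
```

Feeding it the first 8 bytes of a compiled class tells you the oldest JRE generation that can load that class.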
Consequences of running a Java Class file on different JREs?
[ "", "java", "java1.4", "" ]
C++ is all about memory ownership - aka **ownership semantics**. It is the responsibility of the owner of a chunk of dynamically allocated memory to release that memory. So the question really becomes who owns the memory. In C++, ownership is documented by the type a *raw* pointer is wrapped inside; thus, in a good (IMO) C++ program it is very rare (*rare*, not *never*) to see raw pointers passed around (as raw pointers have no inferred ownership, so we can not tell who owns the memory, and thus without careful reading of the documentation you can't tell who is responsible for ownership). Conversely, it is rare to see raw pointers stored in a class; each raw pointer is stored within its own smart pointer wrapper. (**N.B.:** If you don't own an object you should not be storing it because you can not know when it will go out of scope and be destroyed.) So the question: * What types of ownership semantics have people come across? * What standard classes are used to implement those semantics? * In what situations do you find them useful? Let's keep one type of ownership semantic per answer so they can be voted up and down individually. ## Summary: Conceptually, smart pointers are simple and a naive implementation is easy. I have seen many attempted implementations, but invariably they are broken in some way that is not obvious to casual use and examples. Thus I recommend always using well-tested smart pointers from a library rather than rolling your own. `std::auto_ptr` or one of the Boost smart pointers seem to cover all my needs. ### `std::auto_ptr<T>`: A single person owns the object. Transfer of ownership is allowed. Usage: This allows you to define interfaces that show the explicit transfer of ownership. ### `boost::scoped_ptr<T>` A single person owns the object. Transfer of ownership is NOT allowed. Usage: Used to show explicit ownership. The object will be destroyed by the destructor or when explicitly reset. ### `boost::shared_ptr<T>` (`std::tr1::shared_ptr<T>`) Multiple ownership. 
This is a simple reference-counted pointer. When the reference count reaches zero, the object is destroyed. Usage: When an object can have multiple owners with a lifetime that cannot be determined at compile time. ### `boost::weak_ptr<T>`: Used with `shared_ptr<T>` in situations where a cycle of pointers may happen. Usage: Used to stop cycles from retaining objects when only the cycle is maintaining a shared refcount.
For me, these 3 kinds cover most of my needs: `shared_ptr` - reference-counted, deallocation when the counter reaches zero `weak_ptr` - same as above, but it's a 'slave' for a `shared_ptr`, can't deallocate `auto_ptr` - when the creation and deallocation happen inside the same function, or when the object has to be considered single-owner only, ever. When you assign one pointer to another, the second 'steals' the object from the first. I have my own implementation for these, but they are also available in `Boost`. I still pass objects by reference (`const` whenever possible); in this case the called method must assume the object is alive only during the time of the call. There's another kind of pointer that I use that I call ***hub\_ptr***. It's for when you have an object that must be accessible from objects nested in it (usually as a virtual base class). This could be solved by passing a `weak_ptr` to them, but it doesn't have a `shared_ptr` to itself. As it knows these objects won't live longer than it does, it passes a hub\_ptr to them (it's just a template wrapper to a regular pointer).
## Simple C++ Model In most modules I saw, by default, it was assumed that receiving pointers was **not** receiving ownership. In fact, functions/methods abandoning ownership of a pointer were both very rare and explicitly expressed that fact in their documentation. **This model assumes that the user is owner only of what he/she explicitly allocates**. Everything else is automatically disposed of (at scope exit, or through RAII). This is a C-like model, extended by the fact that most pointers are owned by objects that will deallocate them automatically or when needed (at said objects' destruction, mostly), and that the life duration of objects is predictable (RAII is your friend, again). In this model, raw pointers circulate freely and are mostly not dangerous (but if the developer is smart enough, he/she will use references instead whenever possible). * raw pointers * std::auto\_ptr * boost::scoped\_ptr ## Smart Pointed C++ Model In code full of smart pointers, the user can hope to ignore the lifetime of objects. The owner is never the user code: it is the smart pointer itself (RAII, again). **The problem is that circular references mixed with reference-counted smart pointers can be deadly**, so you have to deal with both shared pointers and weak pointers. So you still have ownership to consider (the weak pointer could well point to nothing, even if its advantage over a raw pointer is that it can tell you so). * boost::shared\_ptr * boost::weak\_ptr ## Conclusion No matter which model I describe, **barring exceptions, receiving a pointer is *not* receiving its ownership** and **it is still very important to know who owns whom**. Even for C++ code heavily using references and/or smart pointers.
Smart pointers: who owns the object?
[ "", "c++", "memory-management", "smart-pointers", "ownership-semantics", "" ]