| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm using Xcode with C++. I'm trying to debug my project, but at random intervals it seems to ignore my breakpoints. There are three things that it does:
1) Sometimes, when I run, it automatically switches to "de-activate break points" mode. (the relevant button goes light and says "Activate")
2) Sometimes when I run, ALL of my breakpoints go into "can't find" mode, with the yellow insides. I need to click each one twice to get it back to dark blue (once to light blue = inactive, once to dark blue = active)
3) Sometimes, when I run, my breakpoint is dark blue, the button says "deactivate" and it still just ignores my breakpoint, running straight past it.
This makes it very difficult to debug my program...
I should add that I'm using Xcode 3.1 beta on OS X 10.5.6, in case that matters.
thanks. | Try unchecking **Load symbols lazily** in the **Debugging** panel of **Preferences**. | Had the same problem, but only restarting the iPhone Simulator seemed to fix it. | XCode automatically deactivating breakpoints | [
"",
"c++",
"xcode",
"macos",
""
] |
Coming from a C background I'm used to defining the size of the buffer in the following way:
```
#define BUFFER_SIZE 1024
uint8_t buffer[BUFFER_SIZE];
```
How would you accomplish the same thing in C#?
Also, does the all-caps K&R style fit in with normal C# Pascal/camel casing? | ```
const int BUFFER_SIZE = 1024;
```
Do not use "static readonly" because it creates a variable. "const" are replaced at build time and do not create variables. | Personally, I prefer constants:
```
private const int BUFFER_SIZE = 1024;
```
Though, if it's public and you're a framework, you may want it to be a readonly to [avoid client recompiles](http://techblogs.wordpress.com/2008/07/01/constansts-in-cs-const-vs-readonly/). | In C# how do you accomplish the same thing as a #define | [
"",
"c#",
"constants",
""
] |
I have an unordered list of items, something like this, shortened for brevity:
```
<div id="elementsContainer">
<ul>
<li><a>One</a></li>
<li><a>Two</a></li>
</ul>
</div>
```
I have the list styled up, but these 3 styles deal with background images for the list items:
```
#elementsContainer ul li {
list-style:none;
}
#elementsContainer a {
background: transparent url(/images/icons/bullet_delete.png) no-repeat 5px 50%;
}
#elementsContainer a:hover,
#elementsContainer a:focus,
#elementsContainer a:active {
background:#fff url(/images/icons/delete.png) no-repeat 5px 50%;
}
```
The list looks great - it puts a little delete icon to the left of the text for each list item. However, I am looking to use jQuery (1.3) to handle the click events for each item, and I would like separate functionality between the background image of the list item and the text of the list item. If I click the image, I want to delete the item. If I click the text, I want to edit the item.
I started using something like this:
```
$("a").live("click", function(event){
alert( $(this).text() );
});
```
But I do not see anything in $(this) or "event" from which I can determine whether I clicked the text or the image.
Yes, I know I could just have a separate "img" tag and handle the click on that separately. I'll go that route if that is the only option. I just want to know if there is some way to make it work on the background-image.
Thanks in advance! | Go with the IMG tag. The best you could do is detect a click on the LI element itself, which would end up being messy. An IMG tag (and even an A tag around it for semantic goodness and nicely-degrading pages) would work best.
You shouldn't have many issues styling it to look the same using an IMG within the LI; I do something similar all the time within lists where I need delete/edit icons. | You can't differentiate a click on the background image, since as far as the DOM is concerned, it's not really there. All you have is the `a` element itself (which happens to be presented with your background image), and its onclick handler will fire as long as you click anywhere inside the tag, text or not.
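(One partial workaround, since the icon's CSS position is known — 5px from the left — is to compare the click's x-offset within the element against that region. A pure-logic sketch; the 16px icon width is an invented assumption to adjust for your real image:)

```javascript
// Returns true when a click at offsetX px from the element's left edge
// falls inside the background icon's region (5px offset + an assumed
// 16px icon width -- adjust both to match the real stylesheet).
function clickedIcon(offsetX, iconLeft = 5, iconWidth = 16) {
  return offsetX >= iconLeft && offsetX < iconLeft + iconWidth;
}

console.log(clickedIcon(10)); // true  -- inside the icon
console.log(clickedIcon(40)); // false -- over the text
```

This is fragile (padding or font changes break it), which is part of why the separate `img` tag below is the safer route.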
It probably is best to use an `img` tag (or some other separate tag) and handle the click on that separately, as you concluded in your write-up. | Is it possible to trigger a jQuery click event on a background-image in a list? | [
"",
"javascript",
"jquery",
"list",
"html-lists",
""
] |
I'm trying to filter an XML document into a subset of itself using XPath.
I have used XPath to get an XmlNodeList, but I need to transform this into an XML document.
Is there a way to either transform an XmlNodeList into an XmlDocument, or to produce an XmlDocument by filtering another XmlDocument directly? | With XmlDocument, you will need to import those nodes into a second document:
```
XmlDocument doc = new XmlDocument();
XmlElement root = (XmlElement)doc.AppendChild(doc.CreateElement("root"));
XmlNodeList list = // your query
foreach (XmlElement child in list)
{
root.AppendChild(doc.ImportNode(child, true));
}
``` | This is a pretty typical reason to use XSLT, which is an efficient and powerful tool for transforming one XML document into another (or into HTML, or text).
Here's a minimal program to perform an XSLT transform and send the results to the console:
```
using System;
using System.Xml;
using System.Xml.Xsl;
namespace XsltTest
{
class Program
{
static void Main(string[] args)
{
XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load("test.xslt");
XmlWriter xw = XmlWriter.Create(Console.Out);
xslt.Transform("input.xml", xw);
xw.Flush();
xw.Close();
Console.ReadKey();
}
}
}
```
Here's the actual XSLT, which is saved in `test.xslt` in the program directory. It's pretty simple: given an input document whose top-level element is named `input`, it creates an `output` element and copies over every child element whose `value` attribute is set to `true`.
```
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes"/>
<xsl:template match="/input">
<output>
<xsl:apply-templates select="*[@value='true']"/>
</output>
</xsl:template>
<xsl:template match="@* | node()">
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
```
And here's `input.xml`:
```
<?xml version="1.0" encoding="utf-8" ?>
<input>
<element value="true">
<p>This will get copied to the output.</p>
<p>Note that the use of the identity transform means that all of this content
gets copied to the output simply because templates were applied to the
<em>element</em> element.
</p>
</element>
<element value="false">
<p>This, on the other hand, won't get copied to the output.</p>
</element>
</input>
``` | Can you filter an xml document to a subset of nodes using XPath in C#? | [
"",
"c#",
"xml",
""
] |
Is it possible to do something like the following in the `app.config` or `web.config` files?
```
<appSettings>
<add key="MyBaseDir" value="C:\MyBase" />
<add key="Dir1" value="[MyBaseDir]\Dir1"/>
<add key="Dir2" value="[MyBaseDir]\Dir2"/>
</appSettings>
```
I then want to access Dir2 in my code by simply saying:
```
ConfigurationManager.AppSettings["Dir2"]
```
This will help me when I install my application on different servers and locations, where I will only have to change ONE entry in my entire `app.config`.
(I know I can manage all the concatenation in code, but I prefer it this way). | Good question.
I don't think there is. I believe it would be quite well known if there were an easy way, and I see that Microsoft is creating a mechanism in Visual Studio 2010 for deploying different configuration files for deployment and test.
With that said, however, I have found that the `connectionStrings` section has a kind of placeholder called "|DataDirectory|". Maybe you could have a look at what's at work there...
Here's a piece from `machine.config` showing it:
```
<connectionStrings>
<add
name="LocalSqlServer"
connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
providerName="System.Data.SqlClient"
/>
</connectionStrings>
``` | A slightly more complicated, but far more flexible, alternative is to create a class that represents a configuration section. In your `app.config` / `web.config` file, you can have this:
```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<!-- This section must be the first section within the <configuration> node -->
<configSections>
<section name="DirectoryInfo" type="MyProjectNamespace.DirectoryInfoConfigSection, MyProjectAssemblyName" />
</configSections>
<DirectoryInfo>
<Directory MyBaseDir="C:\MyBase" Dir1="Dir1" Dir2="Dir2" />
</DirectoryInfo>
</configuration>
```
Then, in your .NET code (I'll use C# in my example), you can create two classes like this:
```
using System;
using System.Configuration;
namespace MyProjectNamespace {
public class DirectoryInfoConfigSection : ConfigurationSection {
[ConfigurationProperty("Directory")]
public DirectoryConfigElement Directory {
get {
return (DirectoryConfigElement)base["Directory"];
}
}
public class DirectoryConfigElement : ConfigurationElement {
[ConfigurationProperty("MyBaseDir")]
public String BaseDirectory {
get {
return (String)base["MyBaseDir"];
}
}
[ConfigurationProperty("Dir1")]
public String Directory1 {
get {
return (String)base["Dir1"];
}
}
[ConfigurationProperty("Dir2")]
public String Directory2 {
get {
return (String)base["Dir2"];
}
}
// You can make custom properties to combine your directory names.
public String Directory1Resolved {
get {
return System.IO.Path.Combine(BaseDirectory, Directory1);
}
}
}
}
```
Finally, in your program code, you can access your `app.config` variables, using your new classes, in this manner:
```
DirectoryInfoConfigSection config =
(DirectoryInfoConfigSection)ConfigurationManager.GetSection("DirectoryInfo");
String dir1Path = config.Directory.Directory1Resolved; // This value will equal "C:\MyBase\Dir1"
``` | Variables within app.config/web.config | [
"",
"c#",
"variables",
"web-config",
"app-config",
""
] |
Is there a library that will recursively dump/print an object's properties? I'm looking for something similar to the [console.dir()](http://getfirebug.com/console.html) function in Firebug.
I'm aware of the commons-lang [ReflectionToStringBuilder](https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/builder/ReflectionToStringBuilder.html) but it does not recurse into an object. I.e., if I run the following:
```
public class ToString {
public static void main(String [] args) {
System.out.println(ReflectionToStringBuilder.toString(new Outer(), ToStringStyle.MULTI_LINE_STYLE));
}
private static class Outer {
private int intValue = 5;
private Inner innerValue = new Inner();
}
private static class Inner {
private String stringValue = "foo";
}
}
```
I receive:
> ToString$Outer@1b67f74[
> intValue=5
> innerValue=ToString$Inner@530daa
> ]
I realize that in my example I could have overridden the toString() method for Inner, but in the real world I'm dealing with external objects that I can't modify. | You could try [XStream](http://x-stream.github.io/).
```
XStream xstream = new XStream(new Sun14ReflectionProvider(
new FieldDictionary(new ImmutableFieldKeySorter())),
new DomDriver("utf-8"));
System.out.println(xstream.toXML(new Outer()));
```
prints out:
```
<foo.ToString_-Outer>
<intValue>5</intValue>
<innerValue>
<stringValue>foo</stringValue>
</innerValue>
</foo.ToString_-Outer>
```
You could also output in [JSON](http://x-stream.github.io/json-tutorial.html)
And be careful of circular references ;) | I tried using XStream as originally suggested, but it turns out the object graph I wanted to dump included a reference back to the XStream marshaller itself, which it didn't take too kindly to (why it must throw an exception rather than ignoring it or logging a nice warning, I'm not sure.)
I then tried out the code from user519500 above but found I needed a few tweaks. Here's a class you can roll into a project that offers the following extra features:
* Can control max recursion depth
* Can limit array elements output
* Can ignore any list of classes, fields, or class+field combinations - just pass an array with any combination of class names, classname+fieldname pairs separated with a colon, or fieldnames with a colon prefix, i.e. `[<classname>][:<fieldname>]`
* Will not output the same object twice (the output indicates when an object was previously visited and provides the hashcode for correlation) - this avoids circular references causing problems
You can call this using one of the two methods below:
```
String dump = Dumper.dump(myObject);
String dump = Dumper.dump(myObject, maxDepth, maxArrayElements, ignoreList);
```
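(For orientation, the heart of the approach is plain recursive reflective field access. Here is a stripped-down, purely illustrative sketch — class and field names are invented, and it has none of the cycle detection, depth limits, or ignore lists of the full class below, so it will loop forever on circular references:)

```java
import java.lang.reflect.Field;

public class MiniDump {
    // Leaf types print directly; everything else is recursed into.
    static String dump(Object o, int depth) {
        if (o == null) return "<null>";
        if (o instanceof String || o instanceof Number || o instanceof Boolean) {
            return o.toString();
        }
        StringBuilder sb = new StringBuilder(o.getClass().getSimpleName()).append(" {");
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            Object value;
            try {
                value = f.get(o);
            } catch (IllegalAccessException e) {
                value = "<inaccessible>";
            }
            sb.append("\n").append("  ".repeat(depth + 1))
              .append(f.getName()).append("=").append(dump(value, depth + 1));
        }
        return sb.append("\n").append("  ".repeat(depth)).append("}").toString();
    }

    static class Inner { String stringValue = "foo"; }
    static class Outer { int intValue = 5; Inner innerValue = new Inner(); }

    public static void main(String[] args) {
        System.out.println(dump(new Outer(), 0));
    }
}
```

The full class below adds exactly what this sketch lacks: visited-object tracking, max depth, array handling, and superclass traversal.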
As mentioned above, you need to be careful of stack-overflows with this, so use the max recursion depth facility to minimise the risk.
Hopefully somebody will find this useful!
```
package com.mycompany.myproject;
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.util.HashMap;
public class Dumper {
private static Dumper instance = new Dumper();
protected static Dumper getInstance() {
return instance;
}
class DumpContext {
int maxDepth = 0;
int maxArrayElements = 0;
int callCount = 0;
HashMap<String, String> ignoreList = new HashMap<String, String>();
HashMap<Object, Integer> visited = new HashMap<Object, Integer>();
}
public static String dump(Object o) {
return dump(o, 0, 0, null);
}
public static String dump(Object o, int maxDepth, int maxArrayElements, String[] ignoreList) {
DumpContext ctx = Dumper.getInstance().new DumpContext();
ctx.maxDepth = maxDepth;
ctx.maxArrayElements = maxArrayElements;
if (ignoreList != null) {
for (int i = 0; i < Array.getLength(ignoreList); i++) {
int colonIdx = ignoreList[i].indexOf(':');
if (colonIdx == -1)
ignoreList[i] = ignoreList[i] + ":";
ctx.ignoreList.put(ignoreList[i], ignoreList[i]);
}
}
return dump(o, ctx);
}
protected static String dump(Object o, DumpContext ctx) {
if (o == null) {
return "<null>";
}
ctx.callCount++;
StringBuffer tabs = new StringBuffer();
for (int k = 0; k < ctx.callCount; k++) {
tabs.append("\t");
}
StringBuffer buffer = new StringBuffer();
Class oClass = o.getClass();
String oSimpleName = getSimpleNameWithoutArrayQualifier(oClass);
if (ctx.ignoreList.get(oSimpleName + ":") != null)
return "<Ignored>";
if (oClass.isArray()) {
buffer.append("\n");
buffer.append(tabs.toString().substring(1));
buffer.append("[\n");
int rowCount = ctx.maxArrayElements == 0 ? Array.getLength(o) : Math.min(ctx.maxArrayElements, Array.getLength(o));
for (int i = 0; i < rowCount; i++) {
buffer.append(tabs.toString());
try {
Object value = Array.get(o, i);
buffer.append(dumpValue(value, ctx));
} catch (Exception e) {
buffer.append(e.getMessage());
}
if (i < Array.getLength(o) - 1)
buffer.append(",");
buffer.append("\n");
}
if (rowCount < Array.getLength(o)) {
buffer.append(tabs.toString());
buffer.append(Array.getLength(o) - rowCount + " more array elements...");
buffer.append("\n");
}
buffer.append(tabs.toString().substring(1));
buffer.append("]");
} else {
buffer.append("\n");
buffer.append(tabs.toString().substring(1));
buffer.append("{\n");
buffer.append(tabs.toString());
buffer.append("hashCode: " + o.hashCode());
buffer.append("\n");
while (oClass != null && oClass != Object.class) {
Field[] fields = oClass.getDeclaredFields();
if (ctx.ignoreList.get(oClass.getSimpleName()) == null) {
if (oClass != o.getClass()) {
buffer.append(tabs.toString().substring(1));
buffer.append(" Inherited from superclass " + oSimpleName + ":\n");
}
for (int i = 0; i < fields.length; i++) {
String fSimpleName = getSimpleNameWithoutArrayQualifier(fields[i].getType());
String fName = fields[i].getName();
fields[i].setAccessible(true);
buffer.append(tabs.toString());
buffer.append(fName + "(" + fSimpleName + ")");
buffer.append("=");
if (ctx.ignoreList.get(":" + fName) == null &&
ctx.ignoreList.get(fSimpleName + ":" + fName) == null &&
ctx.ignoreList.get(fSimpleName + ":") == null) {
try {
Object value = fields[i].get(o);
buffer.append(dumpValue(value, ctx));
} catch (Exception e) {
buffer.append(e.getMessage());
}
buffer.append("\n");
}
else {
buffer.append("<Ignored>");
buffer.append("\n");
}
}
oClass = oClass.getSuperclass();
oSimpleName = oClass.getSimpleName();
}
else {
oClass = null;
oSimpleName = "";
}
}
buffer.append(tabs.toString().substring(1));
buffer.append("}");
}
ctx.callCount--;
return buffer.toString();
}
protected static String dumpValue(Object value, DumpContext ctx) {
if (value == null) {
return "<null>";
}
if (value.getClass().isPrimitive() ||
value.getClass() == java.lang.Short.class ||
value.getClass() == java.lang.Long.class ||
value.getClass() == java.lang.String.class ||
value.getClass() == java.lang.Integer.class ||
value.getClass() == java.lang.Float.class ||
value.getClass() == java.lang.Byte.class ||
value.getClass() == java.lang.Character.class ||
value.getClass() == java.lang.Double.class ||
value.getClass() == java.lang.Boolean.class ||
value.getClass() == java.util.Date.class ||
value.getClass().isEnum()) {
return value.toString();
} else {
Integer visitedIndex = ctx.visited.get(value);
if (visitedIndex == null) {
ctx.visited.put(value, ctx.callCount);
if (ctx.maxDepth == 0 || ctx.callCount < ctx.maxDepth) {
return dump(value, ctx);
}
else {
return "<Reached max recursion depth>";
}
}
else {
return "<Previously visited - see hashCode " + value.hashCode() + ">";
}
}
}
private static String getSimpleNameWithoutArrayQualifier(Class clazz) {
String simpleName = clazz.getSimpleName();
int indexOfBracket = simpleName.indexOf('[');
if (indexOfBracket != -1)
return simpleName.substring(0, indexOfBracket);
return simpleName;
}
}
``` | Dumping a java object's properties | [
"",
"java",
"reflection",
"properties",
"dump",
"tostring",
""
] |
I have made a chat script using PHP, MySQL and jQuery. It uses JSON to get data from the server: it makes fixed-interval requests with the last fetched message id to get new messages. But when multiple users are chatting, a huge number of requests will be made to the server within an hour, and the hosting people will block it for sure.
Gmail chat uses sockets, I think, because it certainly does not send fixed-interval requests. Could any one of you please give me some sample code or some direction to solve this issue?
Please I need help desperately.
Many thanks in advance. My respect and regards for all. | If the host you are using would "block it for sure" if it's making that many requests, then you may want to consider getting a different host or upgrading your hosting package before worrying about your code. Check out how [Facebook implements their chat:](http://www.facebook.com/note.php?note_id=14218138919)
> The method we chose to get text from
> one user to another involves loading
> an iframe on each Facebook page, and
> having that iframe's Javascript make
> an HTTP GET request over a persistent
> connection that doesn't return until
> the server has data for the client.
> The request gets reestablished if it's
> interrupted or times out. This isn't
> by any means a new technique: it's a
> variation of Comet, specifically XHR
> long polling, and/or BOSH. | You may find it useful to see an example of 'comet' technology in action using Prototype's comet daemon and a [jetty webserver](http://www.mortbay.org/jetty/). The example code within the jetty download has an example application for chat.
I recently installed jetty myself so you might find a log of my installation commands useful:
Getting started trying to run a comet service
Download Maven from <http://maven.apache.org/>
Install Maven using <http://maven.apache.org/download.html#Installation>
I did the following commands
Extracted to /home/sdwyer/apache-maven-2.0.9
```
> sdwyer@pluto:~/apache-maven-2.0.9$ export M2_HOME=/home/sdwyer/apache-maven-2.0.9
> sdwyer@pluto:~/apache-maven-2.0.9$ export M2=$M2_HOME/bin
> sdwyer@pluto:~/apache-maven-2.0.9$ export PATH=$M2:$PATH
> sdwyer@pluto:~/apache-maven-2.0.9$ mvn --version
-bash: /home/sdwyer/apache-maven-2.0.9/bin/mvn: Permission denied
> sdwyer@pluto:~/apache-maven-2.0.9$ cd bin
> sdwyer@pluto:~/apache-maven-2.0.9/bin$ ls
m2 m2.bat m2.conf mvn mvn.bat mvnDebug mvnDebug.bat
> sdwyer@pluto:~/apache-maven-2.0.9/bin$ chmod +x mvn
> sdwyer@pluto:~/apache-maven-2.0.9/bin$ mvn --version
Maven version: 2.0.9
Java version: 1.5.0_08
OS name: "linux" version: "2.6.18-4-686" arch: "i386" Family: "unix"
sdwyer@pluto:~/apache-maven-2.0.9/bin$
```
Download the jetty server from <http://www.mortbay.org/jetty/>
Extract to /home/sdwyer/jetty-6.1.3
```
> sdwyer@pluto:~$ cd jetty-6.1.3//examples/cometd-demo
> mvn jetty:run
```
A whole stack of downloads run
Once it’s completed open a browser and point it to:
`http://localhost:8080` and test the demos.
The code for the example demos can be found in the directory:
```
jetty-6.1.3/examples/cometd-demo/src/main/webapp/examples
``` | How to implement chat using jQuery, PHP, and MySQL? | [
"",
"php",
"sockets",
""
] |
I'm working on a fairly simple survey system right now. The database schema is going to be simple: a `Survey` table in a one-to-many relation with a `Question` table, which is in a one-to-many relation with the `Answer` table and the `PossibleAnswers` table.
Recently the customer realised she wants the ability to show certain questions only to people who gave one particular answer to some previous question (eg. *Do you buy cigarettes?* would be followed by *What's your favourite cigarette brand?*, there's no point of asking the second question to a non-smoker).
Now I started to wonder what would be the best way to implement these *conditional* questions in terms of my database schema. Say `question A` has two possible answers, A and B, and `question B` should only appear to a user **if** the answer was `A` - how do I model that?
Edit: What I'm looking for is a way to store those information about requirements in a database. The handling of the data will be probably done on application side, as my SQL skills suck ;) | > # Survey Database Design
Last Update: 5/3/2015
Diagram and SQL files now available at <https://github.com/durrantm/survey>

**If you use this (top) answer or any element, please add feedback on improvements !!!**
This is a real classic, done by thousands. They always seem 'fairly simple' to start with, but to be good they're actually pretty complex. To do this in Rails I would use the model shown in the attached diagram. I'm sure it seems way over-complicated to some, but once you've built a few of these over the years, you realize that most of the design decisions are very classic patterns, best addressed by a dynamic, flexible data structure at the outset.
More details below:
> # Table details for key tables
## answers
The **answers** table is critical as it captures the actual responses by users.
You'll notice that answers links to **question\_options**, not **questions**. This is intentional.
## input\_types
**input\_types** are the types of questions. Each question can only be of 1 type, e.g. all radio dials, all text field(s), etc. Use additional questions for when there are (say) 5 radio-dials and 1 check box for an "include?" option or some such combination. Label the two questions in the user's view as one, but internally have two questions, one for the radio-dials, one for the check box. The checkbox will have a group of 1 in this case.
## option\_groups
**option\_groups** and **option\_choices** let you build 'common' groups.
One example, in a real estate application there might be the question 'How old is the property?'.
The answers might be desired in the ranges:
* 1-5
* 6-10
* 10-25
* 25-100
* 100+
Then, for example, if there is a question about the adjoining property's age, the survey will want to 'reuse' the above ranges, so the same option\_group and options get used.
## units\_of\_measure
**units\_of\_measure** is as it sounds. Whether it's inches, cups, pixels, bricks or whatever, you can define it once here.
FYI: Although generic in nature, one can create an application on top of this, and this schema is well-suited to the **Ruby On Rails** framework with conventions such as "id" for the primary key for each table. Also the relationships are all simple one\_to\_many's with no many\_to\_many or has\_many throughs needed. I would probably add has\_many :throughs and/or :delegates though, to get things like survey\_name from an individual answer easily without multiple chaining. | You could also think about complex rules, and have a string-based condition field in your Questions table, accepting/parsing any of these:
* A(1)=3
* ( (A(1)=3) and (A(2)=4) )
* A(3)>2
* (A(3)=1) and (A(17)!=2) and C(1)
Where A(x)=y means "Answer of question x is y" and C(x) means the condition of question x (default is true)...
The questions have an order field, and you would go through them one by one, skipping questions where the condition is FALSE.
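As a minimal sketch of the application-side evaluation (JavaScript purely for illustration — the function name is invented, and only the simple `A(x)=y` / `A(x)!=y` forms are handled; the parenthesised and/or grammar above would need a real parser):

```javascript
// answers: map of question id -> the answer the user gave.
// An empty/missing condition means "always show this question".
function conditionMet(condition, answers) {
  if (!condition) return true;
  const m = condition.match(/^A\((\d+)\)(!?=)(\w+)$/);
  if (!m) throw new Error("unsupported condition: " + condition);
  const [, qid, op, value] = m;
  const given = String(answers[qid]);
  return op === "=" ? given === value : given !== value;
}

console.log(conditionMet("A(1)=3", { 1: 3 }));  // true  -- show the question
console.log(conditionMet("A(2)!=4", { 2: 4 })); // false -- skip it
```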
This should allow surveys of any complexity you want, your GUI could automatically create these in "Simple mode" and allow for and "Advanced mode" where a user can enter the equations directly. | What mysql database tables and relationships would support a Q&A survey with conditional questions? | [
"",
"sql",
"database-design",
"database-schema",
"erd",
"data-modeling",
""
] |
When viewing iGoogle, each section can be dragged and dropped anywhere else on the page, and the state of the page is then saved. I am curious how this is done, as I would like to provide this functionality as part of a proof of concept.
**UPDATE**
How do you make it so that the changed layout is saved for the next load? I am guessing this is done with some sort of cookie? | Any up-to-date client-side framework will give you that kind of functionality.
* [jQuery](http://jquery.com/)
* [YUI](http://developer.yahoo.com/yui/)
* [GWT](http://code.google.com/webtoolkit/)
* [Prototype](http://www.prototypejs.org/)
Just to name a few...
Regarding the "saving" (persistency, if you will) of the data, this depends on the back-end of your site, but this is usually done via an asynchronous call to the server which saves the state to a DB (usually). | It's amazingly simple with [jQuery](https://jquery.com/). Check out [this blog entry](https://web.archive.org/web/20210210233758/http://geekswithblogs.net/AzamSharp/archive/2008/02/21/119882.aspx) on the subject.
**Edit:** I missed the "state of the page is saved" portion of the question when I answered. That portion will vary wildly based on how you structure your application. You need to store the state of the page somehow, and that will be user dependent. If you don't mind forcing the user to restore their preferences every time they clear their cookie cache, you could store state using a cookie.
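As a sketch of what "storing state in a cookie" amounts to here (the panel ids and cookie name are invented for illustration): the drop handler collects the current order of panel ids, serializes it, and writes it to a cookie; on load you read it back and re-order the panels. The helpers are pure, so only the cookie write itself is browser-specific:

```javascript
// Persist the order of panel ids as a single comma-joined value.
function serializeLayout(panelIds) {
  return panelIds.join(",");
}

function deserializeLayout(value) {
  return value ? value.split(",") : [];
}

const order = ["news", "weather", "mail"]; // hypothetical panel ids
const saved = serializeLayout(order);
// In the browser, the actual cookie write would be:
// document.cookie = "layout=" + encodeURIComponent(saved) + "; max-age=31536000";
console.log(deserializeLayout(saved)); // [ 'news', 'weather', 'mail' ]
```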
I don't know how your application is structured so I can't make any further suggestions, but storing a cookie in jQuery is also amazingly simple. The first part of [this blog entry](https://web.archive.org/web/20170430092251/http://www.shopdev.co.uk:80/blog/cookies-with-jquery-designing-collapsible-layouts/) tells you almost everything you need to know. | How to use draggable sections like on iGoogle? | [
"",
"javascript",
"igoogle",
""
] |
I am looking at this sub-expression (this is in JavaScript):
```
(?:^|.....)
```
I know that **?** means "zero or one times" when it follows a character, but not sure what it means in this context. | You're probably seeing it in this context
```
(?:...)
```
It means that the group won't be captured or used for back-references.
**EDIT:** To reflect your modified question:
```
(?:^|....)
```
means "match the beginning of the line or match ..." but don't capture the group or use it for back-references. | When working with groups, you often have several options that modify the behavior of the group:
```
(foo) // default behavior, matches "foo" and stores a back-reference
(?:foo) // non-capturing group: matches "foo", but doesn't store a back-ref
(?i:foo) // matches "foo" case-insensitively
(?=foo) // matches "foo", but does not advance the current position
// ("positive zero-width look-ahead assertion")
(?!foo) // succeeds only if "foo" does not match here; does not advance the position
// ("negative zero-width look-ahead assertion")
```
to name a few.
They all begin with "?", which is the way to indicate a group modifier. The question mark has nothing to do with optionality in this case.
It simply says:
```
(?:^foo) // match "foo" at the start of the line, but do not store a back-ref
```
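A quick way to see the capturing/non-capturing difference in JavaScript (the question's language); the `-` alternative in the last two lines is invented purely to give the `(?:^|...)` pattern a concrete shape:

```javascript
const withCapture = /(foo)bar/.exec("foobar");
console.log(withCapture[1]);    // "foo" -- the group was stored

const withoutCapture = /(?:foo)bar/.exec("foobar");
console.log(withoutCapture[1]); // undefined -- nothing was stored

// The anchored alternation: match "x" at the start of the string OR after "-"
console.log(/(?:^|-)x/.test("xyz")); // true
console.log(/(?:^|-)x/.test("a-x")); // true
```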
Sometimes it's just overkill to store a back-reference to some part of the match that you are not going to use anyway. When the group is there only to make a complex expression atomic (e.g. it should either match or fail as a whole), storing a back-reference is an unnecessary waste of resources that can even slow down the regex a bit. And sometimes, you just want group 1 to be the *first group relevant to you*, instead of the *first group in the regex*. | What does the "?:^" regular expression mean? | [
"",
"javascript",
"regex",
""
] |
I have a main form; at present it has a tab control and 3 data grids (DevExpress XtraGrids), along with the normal buttons, combo boxes, and so on. I would say two-thirds of the methods in the main form are related to customizing the grids or to the event handlers that handle data input. This is making the main form's code larger and larger.
What is an acceptable code length for a main form?
How should I move the code around if necessary? I am currently thinking about creating a user control for each grid and dumping its methods in there. | I build a fair number of apps at my shop and try, as a general rule, to avoid clogging up main forms with a bunch of control-specific code. Rather, I'll encapsulate behaviors and state setup into some commonly reusable user controls and stick that stuff in the user controls' files instead.
I don't have a magic number I shoot for in the main form, instead I'll use the 'Why would I put this here?' test. If I can't come up with a good reason as to why I'm thinking of putting the code in the main form, I'll avoid it. Otherwise, as you've mentioned, the main form starts growing and it becomes a real pain to manage everything.
I like to put my glue code (event handler stuff, etc.) separate from the main form itself.
At a minimum, I'll utilize some regions to separate the code out into logically grouped chunks. Granted, many folks hate the #region/#endregion constructs, but I've got the keystrokes pretty much all memorized so it isn't an issue for me. I like to use them simply because it organizes things nicely and collapses down well in VS.
In a nutshell, I don't put anything in the main form unless I convince myself it belongs there. There are a bunch of good patterns out there that, when employed, help to avoid the big heaping pile that otherwise tends to develop. I looked back at one file I had early on in my career and the darn thing was 10K lines long... absolutely ridiculous!
Anyway, that is my two cents.
Have a good one! | As with any class, having more than about 150 lines is a sign that something has gone horribly wrong. The same OO principles apply to classes relating to UI as everywhere else in your application.
The class should have a single responsibility. | Keeping your main form class short best practice | [
"",
"c#",
""
] |
Due to the implementation of Java generics, you can't have code like this:
```
public class GenSet<E> {
private E a[];
public GenSet() {
a = new E[INITIAL_ARRAY_LENGTH]; // Error: generic array creation
}
}
```
How can I implement this while maintaining type safety?
I saw a solution on the Java forums that goes like this:
```
import java.lang.reflect.Array;
class Stack<T> {
public Stack(Class<T> clazz, int capacity) {
array = (T[])Array.newInstance(clazz, capacity);
}
private final T[] array;
}
```
What's going on? | I have to ask a question in return: is your `GenSet` "checked" or "unchecked"?
What does that mean?
* **Checked**: *strong typing*. `GenSet` knows explicitly what type of objects it contains (i.e. its constructor was explicitly called with a `Class<E>` argument, and methods will throw an exception when they are passed arguments that are not of type `E`). See [`Collections.checkedCollection`](http://docs.oracle.com/javase/7/docs/api/java/util/Collections.html#checkedCollection%28java.util.Collection,%20java.lang.Class%29).
-> in that case, you should write:
```
public class GenSet<E> {
private E[] a;
public GenSet(Class<E> c, int s) {
// Use Array native method to create array
// of a type only known at run time
@SuppressWarnings("unchecked")
final E[] a = (E[]) Array.newInstance(c, s);
this.a = a;
}
E get(int i) {
return a[i];
}
}
```
* **Unchecked**: *weak typing*. No type checking is actually done on any of the objects passed as argument.
-> in that case, you should write
```
public class GenSet<E> {
private Object[] a;
public GenSet(int s) {
a = new Object[s];
}
E get(int i) {
@SuppressWarnings("unchecked")
final E e = (E) a[i];
return e;
}
}
```
Note that the component type of the array should be the [*erasure*](http://docs.oracle.com/javase/tutorial/java/generics/erasure.html) of the type parameter:
```
public class GenSet<E extends Foo> { // E has an upper bound of Foo
private Foo[] a; // E erases to Foo, so use Foo[]
public GenSet(int s) {
a = new Foo[s];
}
...
}
```
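As a self-contained illustration of the reflective creation used in the checked variant (the class and method names here are made up for the demo):

```java
import java.lang.reflect.Array;

public class ReflectiveArrayDemo {
    // Creates a T[] whose runtime component type really is c, not Object.
    @SuppressWarnings("unchecked")
    static <T> T[] newArray(Class<T> c, int size) {
        return (T[]) Array.newInstance(c, size);
    }

    public static void main(String[] args) {
        String[] a = newArray(String.class, 3);
        a[0] = "hello";
        // The runtime type is a genuine String[], unlike a cast Object[].
        System.out.println(a.getClass().getComponentType().getSimpleName());
        System.out.println(a.length + " " + a[0]);
    }
}
```

Because `Array.newInstance(c, size)` allocates an array whose component type really is `c`, the unchecked cast here is safe, which is exactly what makes the *checked* variant work.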
All of this results from a known, and deliberate, weakness of generics in Java: it was implemented using erasure, so "generic" classes don't know what type argument they were created with at run time, and therefore can not provide type-safety unless some explicit mechanism (type-checking) is implemented. | You can do this:
```
E[] arr = (E[])new Object[INITIAL_ARRAY_LENGTH];
```
This is one of the suggested ways of implementing a generic collection in *Effective Java; Item 26*. No type errors, no need to cast the array repeatedly. *However* this triggers a warning because it is potentially dangerous, and should be used with caution. As detailed in the comments, this `Object[]` is now masquerading as our `E[]` type, and can cause unexpected errors or `ClassCastException`s if used unsafely.
As a rule of thumb, this behavior is safe as long as the cast array is used internally (e.g. to back a data structure), and not returned or exposed to client code. Should you need to return an array of a generic type to other code, the reflection `Array` class you mention is the right way to go.
---
Worth mentioning that wherever possible, you'll have a much happier time working with `List`s rather than arrays if you're using generics. Certainly sometimes you don't have a choice, but using the collections framework is far more robust. | How can I create a generic array in Java? | [
"",
"java",
"arrays",
"generics",
"reflection",
"instantiation",
""
] |
Is there a way to disassemble Pro\*C/C++ executable files? | In general there should be disassemblers available for executables, regardless of how they have been created (gcc, Pro*C, handwritten, etc.), but decompiling an optimized binary most probably leads to unreadable source code.
Also, Pro*C/C++ is not directly a compiler but a precompiler that outputs C/C++ code, which in turn is compiled by a platform-native compiler (gcc, xlc, vc++, etc.).
Furthermore the generated code is often not directly compilable again without lots of manual corrections.
If you still want to try your luck, have a look at this list of [x86 disassemblers](http://en.wikibooks.org/wiki/X86_Disassembly/Disassemblers_and_Decompilers) for a start. | Try [PE Explorer Disassembler](http://www.heaventools.com/PE_Explorer_disassembler.htm), a very decent disassembler for 32-bit executable files. | How does one disassemble Pro*C/C++ programs? | [
"",
"c++",
"c",
"reverse-engineering",
"decompiling",
""
] |
I have a `<div>...</div>` section in my HTML that is basically like a toolbar.
Is there a way I could force that section to the bottom of the web page (the document, not the viewport) and center it? | I think what you're looking for is this: <http://ryanfait.com/sticky-footer/>
It's an elegant, **CSS only** solution!
I use it and it works perfect with all kinds of layouts in all browsers! As far as I'm concerned it is the only elegant solution which works with all browsers and layouts.
@Josh: No it isn't and that's what Blankman wants, he wants a footer that sticks to the bottom of the document, not of the viewport (browser window). So if the content is shorter than the browser window, the footer sticks to the lower end of the window, if the content is longer, the footer goes down and is not visible until you scroll down.
### Twitter Bootstrap implementation
I've seen a lot of people asking how this can be combined with Twitter Bootstrap. While it's easy to figure out, here are some snippets that should help.
```
// _sticky-footer.scss SASS partial for a Ryan Fait style sticky footer
html, body {
height: 100%;
}
.wrapper {
min-height: 100%;
height: auto !important;
height: 100%;
margin: 0 auto -1*($footerHeight + 2); /* + 2 for the two 1px borders */
}
.push {
height: $footerHeight;
}
.wrapper > .container {
padding-top: $navbarHeight + $gridGutterWidth;
}
@media (max-width: 480px) {
.push {
height: $topFooterHeight !important;
}
.wrapper {
margin: 0 auto -1*($topFooterHeight + 2) !important;
}
}
```
And the rough markup body:
```
<body>
<div class="navbar navbar-fixed-top">
// navbar content
</div>
<div class="wrapper">
<div class="container">
// main content with your grids, etc.
</div>
<div class="push"><!--//--></div>
</div>
<footer class="footer">
// footer content
</footer>
</body>
``` | If I understand you correctly, you want the **toolbar** to *always* be visible, regardless of the vertical scroll position. If that is correct, I would recommend the following CSS...
```
body {
margin:0;
padding:0;
z-index:0;
}
#toolbar {
background:#ddd;
border-top:solid 1px #666;
bottom:0;
height:15px;
padding:5px;
position:fixed;
width:100%;
z-index:1000;
}
``` | Force <div></div> to the bottom of the web page centered | [
"",
"javascript",
"html",
"css",
"ajax",
""
] |
If I remember correctly, in .NET one can register "global" handlers for unhandled exceptions. I am wondering if there is something similar for Java. | Yes, there's the [`defaultUncaughtExceptionHandler`](http://java.sun.com/javase/6/docs/api/java/lang/Thread.html#getDefaultUncaughtExceptionHandler%28%29), but it only triggers if the `Thread` doesn't have an [`uncaughtExceptionHandler`](http://java.sun.com/javase/6/docs/api/java/lang/Thread.html#getUncaughtExceptionHandler%28%29) set. | Yes
<http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.UncaughtExceptionHandler.html> | Is there an unhandled exception handler in Java? | [
"",
"java",
"exception",
""
] |
```
/* user-defined exception class derived from a standard class for exceptions*/
class MyProblem : public std::exception {
public:
...
MyProblem(...) { //special constructor
}
virtual const char* what() const throw() {
//what() function
...
}
};
...
void f() {
...
//create an exception object and throw it
throw MyProblem(...);
...
}
```
My question is: why is there a "const throw()" after what()?
Normally, if there is a throw(...) specification, it implies that the function
can throw those exceptions. However, why is there a throw() here? | Empty braces in `"throw()"` mean the function does not throw.
This indicates that this is a const method. Thus a call to this method will not change the state of the object.
The **throw()** means the method will not throw any exceptions.
To the **USER** of this method, the method will only return through normal means and you do not need to worry about the call generating exceptions.
To the **IMPLEMENTER** of the method there is more to worry about.
Unlike Java, this is not a compile-time constraint but a runtime constraint. If the implementer writes the function so that it accidentally throws an exception out of the method, then the runtime will stop the application dead (no unwinding of the stack, no destructors, etc.).
But the convention is that the implementer will take the extra precautions to catch all internal exceptions.
**PS**
You may want to derive from std::runtime\_error
(From Comment@onebyone.livejournal.com): Not quite.
The no throw specifier is actively used. It is an indication of exception safety: it demonstrates that the method provides the [no throw guarantee](https://stackoverflow.com/questions/88573/should-i-use-an-exception-specifier-in-c/89790#89790).
On the other hand the other exception specifiers are not used because they are too dangerous. If you get them wrong it causes an application termination via (std::unexpected). The default action is application termination without unwinding the stack and without cleaning up using object destructors. In MHOP this is hardly ever desirable. | A question related to deriving standard exception classes | [
"",
"c++",
"exception",
""
] |
Whenever I try to open a .csv file with the python command
`fread = open('input.csv', 'r')`
it always opens the file with spaces between every single character. I'm guessing something is wrong with the text file, because I can open other text files with the same command and they load correctly. Does anyone know why a text file would load like this in Python?
Thanks.
**Update**
Ok, I got it with the help of Jarret Hardie's post
this is the code that I used to convert the file to ascii
```
fread = open('input.csv', 'rb').read()
mytext = fread.decode('utf-16')
mytext = mytext.encode('ascii', 'ignore')
fwrite = open('input-ascii.csv', 'wb')
fwrite.write(mytext)
```
Thanks! | The post by recursive is probably right... the contents of the file are likely encoded with a multi-byte charset. If this is, in fact, the case you can likely read the file in python itself without having to convert it first outside of python.
Try something like:
```
fread = open('input.csv', 'rb').read()
mytext = fread.decode('utf-16')
```
The 'b' flag ensures the file is read as binary data. You'll need to know (or guess) the original encoding... in this example, I've used utf-16, but YMMV. This will convert the file to unicode. If you truly have a file with multi-byte chars, I don't recommend converting it to ascii as you may end up losing a lot of the characters in the process.
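For instance, a self-contained round trip of the same idea (the sample text and file name are made up):

```python
import os
import tempfile

# Write a small UTF-16 file, then read it back as described above.
data = u"ID|Name\n1|Alice\n"
path = os.path.join(tempfile.mkdtemp(), "sample.csv")
with open(path, "wb") as f:
    f.write(data.encode("utf-16"))

with open(path, "rb") as f:
    raw = f.read()

text = raw.decode("utf-16")  # the BOM at the front is consumed by the codec
print(text == data)          # True
```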
EDIT: Thanks for uploading the file. There are two bytes at the front of the file which indicates that it does, indeed, use a wide charset. If you're curious, open the file in a hex editor as some have suggested... you'll see something in the text version like 'I.D.|.' (etc). The dot is the extra byte for each char.
The code snippet above seems to work on my machine with that file. | The file is encoded in some unicode encoding, but you are reading it as ascii. Try to convert the file to ascii before using it in python. | python opens text file with a space between every character | [
"",
"python",
"csv",
"text-files",
""
] |
Notice the following code. The offending line has been commented out.
```
interface I<R> { }
class C : I<int> { }
class Program
{
private static void function<T, R>(T t) where T : class, I<R>
{
}
static void Main(string[] args)
{
// function(new C()); // wont compile
function<C, int>(new C());
}
}
```
I believe type inference should figure out the type because the argument `T` provides the first type, while `I<R>` provides the second type.
Is there a way to redesign the function so that the callers may not have to specify the types? | ```
class D : I<int>, I<string> { }
//
function<D, int>(new D());
function<D, string>(new D());
//is R int or string?
function(new D());
``` | Not if you want to keep all the constraints. However, this should serve equally well, unless you have a specific reason to forbid value types:
```
private static void function<R>(I<R> t)
``` | C# type inference : fails where it shouldn't? | [
"",
"c#",
".net",
"generics",
"type-inference",
""
] |
I have an operation contract (below) that I want to allow GET and POST requests against. How can I tell WCF to accept both types of requests for a single OperationContract?
```
[OperationContract,
WebInvoke(Method="POST",
BodyStyle = WebMessageBodyStyle.Bare,
RequestFormat = WebMessageFormat.Xml,
ResponseFormat = WebMessageFormat.Xml,
UriTemplate = "query")]
XElement Query(string qry);
[OperationContract,
WebInvoke(Method="GET",
BodyStyle = WebMessageBodyStyle.Bare,
RequestFormat = WebMessageFormat.Xml,
ResponseFormat = WebMessageFormat.Xml,
UriTemplate = "query?query={qry}")]
XElement Query(string qry);
``` | This post over on the [MSDN Forums by Carlos Figueira](http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/ad5bb2f0-058c-47ae-bcf3-8f5c4727a70e/) has a solution. I'll go with this for now but if anyone else has any cleaner solutions let me know.
```
[OperationContract,
WebInvoke(Method="POST",
BodyStyle = WebMessageBodyStyle.Bare,
RequestFormat = WebMessageFormat.Xml,
ResponseFormat = WebMessageFormat.Xml,
UriTemplate = "query")]
XElement Query_Post(string qry);
[OperationContract,
WebInvoke(Method="GET",
BodyStyle = WebMessageBodyStyle.Bare,
RequestFormat = WebMessageFormat.Xml,
ResponseFormat = WebMessageFormat.Xml,
UriTemplate = "query?query={qry}")]
XElement Query_Get(string qry);
``` | In case anyone is looking for a different solution,
```
[OperationContract]
[WebInvoke(Method="*")]
public <> DoWork()
{
var method = WebOperationContext.Current.IncomingRequest.Method;
if (method == "POST") return DoPost();
else if (method == "GET") return DoGet();
throw new ArgumentException("Method is not supported.");
}
``` | Enable multiple HTTP Methods on a single operation? | [
"",
"c#",
".net",
"wcf",
"web-services",
""
] |
When using jQuery to hookup an event handler, is there any difference between using the click method
```
$().click(fn)
```
versus using the bind method
```
$().bind('click',fn);
```
Other than bind's optional data parameter. | For what it's worth, from the [jQuery source](http://code.google.com/p/jqueryjs/source/browse/trunk/jquery/src/event.js#661):
```
jQuery.each( ("blur,focus,load,resize,scroll,unload,click,dblclick," +
"mousedown,mouseup,mousemove,mouseover,mouseout,mouseenter,mouseleave," +
"change,select,submit,keydown,keypress,keyup,error").split(","), function(i, name){
// Handle event binding
jQuery.fn[name] = function(fn){
return fn ? this.bind(name, fn) : this.trigger(name);
};
});
```
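The aliasing pattern itself is easy to reproduce in plain JavaScript — a stripped-down sketch, not the real jQuery implementation:

```javascript
// Minimal event emitter whose shortcut methods just delegate to bind().
function Emitter() { this.handlers = {}; }
Emitter.prototype.bind = function (name, fn) {
  (this.handlers[name] = this.handlers[name] || []).push(fn);
  return this;
};
Emitter.prototype.trigger = function (name) {
  (this.handlers[name] || []).forEach(function (fn) { fn(); });
  return this;
};

// Generate shortcuts like .click(fn) that just call .bind('click', fn),
// mirroring the jQuery.each loop quoted above.
['click', 'focus', 'blur'].forEach(function (name) {
  Emitter.prototype[name] = function (fn) {
    return fn ? this.bind(name, fn) : this.trigger(name);
  };
});

var e = new Emitter();
var hits = 0;
e.click(function () { hits++; }); // binds
e.click();                        // no fn, so it triggers
console.log(hits);                // 1
```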
So no, there's no difference -
```
$().click(fn)
```
calls
```
$().bind('click',fn)
``` | +1 for Matthew's answer, but I thought I should mention that you can also bind more than one event handler in one go using `bind`
```
$('#myDiv').bind('mouseover focus', function() {
$(this).addClass('focus')
});
```
which is the much cleaner equivalent to:
```
var myFunc = function() {
$(this).addClass('focus');
};
$('#myDiv')
.mouseover(myFunc)
.focus(myFunc)
;
``` | jQuery: $().click(fn) vs. $().bind('click',fn); | [
"",
"javascript",
"jquery",
"event-handling",
""
] |
Is variable assignment expensive compared to a null check? For example, is it worth checking that foo is not null before assigning it null?
```
if (foo != null) {
foo = null;
}
```
Or is this worrying about nothing? | This is a micro-micro-optimization (and possibly something handled by the compiler anyways). Don't worry about it. You'll get a far greater return by focusing on your programs actual algorithm.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald Knuth | This is actually (very, very slightly) *less* efficient. Variable assignments are roughly equivalent to null checks, plus there's an extra branch possible. Not that it makes much difference.
> Or is this worrying about nothing?
You got it. | Check if variable null before assign to null? | [
"",
"java",
"compiler-construction",
"performance",
"clarity",
""
] |
I know how to append a new row to a table using JQuery:
```
var newRow = $("<tr>...</tr>");
$("#mytable tbody").append(newRow);
```
The question is how do I create a new row that precedes some existing row. | ```
where_you_want_it.before(newRow)
```
or
```
newRow.insertBefore(where_you_want_it)
```
-- MarkusQ | ```
var newRow = $("<tr>...</tr>");
$("#idOfRowToInsertAfter").after(newRow);
```
The key is knowing the id of the row you want to insert the new row after, or at least coming up with some selector syntax that will get you that row.
[jQuery docs on `after()`](http://docs.jquery.com/Manipulation/after) | How do I insert a new TR into the MIDDLE of a HTML table using JQuery? | [
"",
"javascript",
"jquery",
"html",
""
] |
I wrote a little "lazy vector" class (or, delayed vector) which is supposed to look like a `std::vector` and usable wherever a `std::vector` is used, but it loads its elements "lazily", i.e. it will load element `n` (and possibly a few more) from disk whenever someone accesses element `n`. (The reason is that in my app, not all elements fit into memory.)
Here is this `LazyVector` class, but there is a problem with `const` member functions that use such a vector, see below.
```
template<class T>
class LazyVector {
std::vector<T> elems_;
void fetchElem(unsigned n){
// load the n-th elem from disk into elems_ etc
}
public:
const T& operator[](unsigned n) const {
fetchElem(n); // ERROR: ... discards qualifiers
return elems_[n];
}
T& operator[](unsigned n) {
fetchElem(n);
return elems_[n];
}
// and provide some other std::vector functions
};
```
As I said, there is a problem when a `const` member function asks for an element of the `LazyVector`. By nature of the `LazyVector`, accessing an element is *not* `const`, i.e. it will change the vector `vec` below, which is forbidden in this context. The `foo` member function must be `const` and cannot be changed. How can I solve this?
```
class Foo {
LazyVector<const std::string*> vec;
void fct(int n) const { // fct must be const
const std::string* str = vec[n];
// do something with str
}
};
``` | You can either use mutable member data or const\_cast in the implementation of your LazyVector class. Thus you can create the illusion of constness needed by your consuming class without actually being const. | Use the [mutable](http://www.parashift.com/c++-faq-lite/const-correctness.html#faq-18.13) keyword on the elems\_ data member. | Designing a lazy vector: problem with const | [
"",
"c++",
"vector",
"constants",
"lazy-loading",
""
] |
I am writing this in C# using .NET 3.5. I have a System.Data.DataSet object with a single DataTable that uses the following schema:
```
Id : uint
AddressA: string
AddressB: string
Bytes : uint
```
When I run my application, let's say the DataTable gets filled with the following:
```
1 192.168.0.1 192.168.0.10 300
2 192.168.0.1 192.168.0.20 400
3 192.168.0.1 192.168.0.30 300
4 10.152.0.13 167.10.2.187 80
```
I'd like to be able to query this DataTable where AddressA is unique and the Bytes column is summed together (I'm not sure I'm saying that correctly). In essence, I'd like to get the following result:
```
1 192.168.0.1 1000
2 10.152.0.13 80
```
I ultimately want this result in a DataTable that can be bound to a DataGrid, and I need to update/regenerate this result every 5 seconds or so.
How do I do this? DataTable.Select() method? If so, what does the query look like? Is there an alternate/better way to achieve my goal?
EDIT: I do not have a database. I'm simply using an in-memory DataSet to store the data, so a pure SQL solution won't work here. I'm trying to figure out how to do it within the DataSet itself. | For readability (and because I love it) I would try to use LINQ:
```
var aggregatedAddresses = from DataRow row in dt.Rows
group row by row["AddressA"] into g
select new {
Address = g.Key,
Byte = g.Sum(row => (uint)row["Bytes"])
};
int i = 1;
foreach(var row in aggregatedAddresses)
{
result.Rows.Add(i++, row.Address, row.Byte);
}
```
If a performace issue is discovered with the LINQ solution I would go with a manual solution summing up the rows in a loop over the original table and inserting them into the result table.
You can also bind the aggregatedAddresses directly to the grid instead of putting it into a DataTable. | most efficient solution would be to do the sum in SQL directly
select AddressA, SUM(bytes) from ... group by AddressA | Join multiple DataRows into a single DataRow | [
"",
"c#",
".net",
"datagridview",
"dataset",
""
] |
A variable of the type Int32 won't be treated as Int32 if we cast it to "Object" before passing it to the overloaded methods below:
```
public static void MethodName(int a)
{
Console.WriteLine("int");
}
public static void MethodName(object a)
{
Console.ReadLine();
}
```
Handling it as an Int32 even if it is cast to "Object" can be achieved through reflection:
```
public static void MethodName(object a)
{
if(a.GetType() == typeof(int))
{
Console.WriteLine("int");
}
else
{
Console.ReadLine();
}
}
```
Is there another way to do that? Maybe using Generics? | ```
public static void MethodName(object a)
{
if(a is int)
{
Console.WriteLine("int");
}
else
{
Console.WriteLine("object");
}
}
``` | Runtime overload resolution will not be available until C# 4.0, which has `dynamic`:
```
public class Bar
{
public void Foo(int x)
{
Console.WriteLine("int");
}
public void Foo(string x)
{
Console.WriteLine("string");
}
public void Foo(object x)
{
Console.WriteLine("dunno");
}
public void DynamicFoo(object x)
{
((dynamic)this).Foo(x);
}
}
object a = 5;
object b = "hi";
object c = 2.1;
Bar bar = new Bar();
bar.DynamicFoo(a);
bar.DynamicFoo(b);
bar.DynamicFoo(c);
```
Casting `this` to `dynamic` enables the dynamic overloading support, so the `DynamicFoo` wrapper method is able to call the best fitting `Foo` overload based on the runtime type of the argument. | Overloading methods in C# .NET | [
"",
"c#",
".net",
"generics",
"reflection",
"overloading",
""
] |
I'm testing something in Oracle and populated a table with some sample data, but in the process I accidentally loaded duplicate records, so now I can't create a primary key using some of the columns.
How can I delete all duplicate rows and leave only one of them? | Use the `rowid` pseudocolumn.
```
DELETE FROM your_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM your_table
GROUP BY column1, column2, column3);
```
Where `column1`, `column2`, and `column3` make up the identifying key for each record. You might list all your columns. | From [Ask Tom](http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:15258974323143)
```
delete from t
where rowid IN ( select rid
from (select rowid rid,
row_number() over (partition by
companyid, agentid, class , status, terminationdate
order by rowid) rn
from t)
where rn <> 1);
```
(fixed the missing parenthesis) | Removing duplicate rows from table in Oracle | [
"",
"sql",
"oracle",
"duplicates",
"delete-row",
""
] |
I have a generic method which has two generic parameters. I tried to compile the code below but it doesn't work. Is it a .NET limitation? Is it possible to have multiple constraints for different parameter?
```
public TResponse Call<TResponse, TRequest>(TRequest request)
where TRequest : MyClass, TResponse : MyOtherClass
``` | It is possible to do this, you've just got the syntax slightly wrong. You need a [`where`](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/where-generic-type-constraint "Microsoft Docs | where constraint") for each constraint rather than separating them with a comma:
```
public TResponse Call<TResponse, TRequest>(TRequest request)
where TRequest : MyClass
where TResponse : MyOtherClass
``` | In addition to the main answer by @LukeH with another usage, we can use multiple interfaces instead of class. (One class and n count interfaces) like this
```
public TResponse Call<TResponse, TRequest>(TRequest request)
where TRequest : MyClass, IMyOtherClass, IMyAnotherClass
```
or
```
public TResponse Call<TResponse, TRequest>(TRequest request)
where TRequest : IMyClass,IMyOtherClass
``` | Generic method with multiple constraints | [
"",
"c#",
"generics",
".net-3.5",
""
] |
I've been looking at WPF, but I've never really worked in it (except for 15 minutes, which prompted this question). I looked at this [post](https://stackoverflow.com/questions/193005/are-wpf-more-flashy-like-than-winforms) but its really about the "Flash" of a WPF. So what is the difference between a Windows Forms application and a WPF application? | WPF is a vector graphics based UI presentation layer where WinForms is not. Why is that important/interesting? By being vector based, it allows the presentation layer to smoothly scale UI elements to any size without distortion.
WPF is also a composable presentation system, which means that pretty much any UI element can be made up of any other UI element. This allows you to easily build up complex UI elements from simpler ones.
WPF is also fully databinding aware, which means that you can bind any property of a UI element to a .NET object (or a property/method of an object), a property of another UI element, or data. Yes, WinForms supports databinding but in a much more limited way.
Finally, WPF is "skinable" or "themeable", which means that as a developer you can use a list box because those are the behaviors you need but someone can "skin" it to look like something completely different.
Think of a list box of images. You want the content to actually be the image but you still want list box behaviors. This is completely trivial to do in WPF by simply using a listbox and changing the content presentation to contain an image rather than text. | A good way of looking at this might start with asking what exactly Winforms is.
Both Winforms and WPF are frameworks designed to make the UI layer of an application easier to code. The really old folks around here might be able to speak about how writing the windows version of "Hello, World" could take 4 pages or so of code. Also, rocks were a good deal softer then and we had to fight off giant lizards while we coded. The Winforms library and designer take a lot of the common tasks and make them easier to write.
WPF does the same thing, but with the awareness that those common tasks might now include much more visually interesting things, in addition to including a lot of things that Winforms did not necessarily consider to be part of the UI layer. The way WPF supports commanding, triggers, and databinding are all great parts of the framework, but the core reason for it is the same core reason Winforms had for existing in the first place.
WPFs improvement here is that, instead of giving you the option of either writing a completely custom control from scratch or forcing you to use a single set of controls with limited customization capabilities, you may now separate the function of a control from its appearance. The ability to describe how our controls look in XAML and keep that separate from how the controls work in code is very similar to the HTML/Code model that web programmers are used to working with.
A good WPF application follows the same model that a good Winforms application would; keeping as much stuff as possible out of the UI layer. The core logic of the application and the data layer should be the same, but there are now easier ways of making the visuals more impressive, which is likely why most of the information you've seen on it involves the flashier visual stuff. If you're looking to learn WPF, you can actually start by using it almost exactly as you would Winforms and then refactoring the other features in as you grasp them. For an excellent example of this, I highly recommend Scott Hanselman's series of blog posts on the development of BabySmash, [which start here](http://www.hanselman.com/blog/IntroducingBabySmashAWPFExperiment.aspx). It's a great walkthrough of the process, both in code and in thought. | What is WPF and how does it compare to WinForms? | [
"",
"c#",
"wpf",
"windows",
""
] |
These are my questions:
1. I'm getting a couple of errors on the line "public static boolean validNumCheck(String num){" - "illegal start of expression", "';' expected", and "')' expected".
2. How can I give the user 3 tries in total for each number? I believe right now the programme asks the user for 3 numbers and gives them 3 tries in total to get the numbers correct (My explanations suck... read the code to get a better idea of what I mean).
This is my code:
```
import javax.swing.JOptionPane;
public class Assignment3 {
public static void main (String[] args){
final int MAX_TRIES = 3;
int[] attempts= new int[2];
String[] getNumber= new String [2];
//Ask the user for 3 integers.
while(attempts[0]<MAX_TRIES){
getNumber[0]= JOptionPane.showInputDialog(null,"Please enter an integer between 0-200.");
//Pass the value to validNumChek
validNumCheck (getNumber);
//If it is not a valid number give the user 3 more tries.
if (getNumber== false){
while(attempts[1]<MAX_TRIES){
getNumber[1]= JOptionPane.showInputDialog(null,"Please enter an integer between 0-200.");
attempts[1]++;}
}
attempts[0]++;
}
//Parse the string to an integer and check if it is a valid number.
public static boolean validNumCheck(String num){
int number;
try {
number = Integer.parseInt(num);
return number >= 0 && number <= 200;
}catch(NumberFormatException e){
//If it is not a valid number return false.
return false;
}
}
}
``` | I think it is important to create a pseudo code or algorithm of the problem **first** and then if it works deal with the programming **later**. Otherwise you'll be solving two things at the same time 1. Problem logic and 2. Implementation details.
This is how I would do it.
```
//The three numbers should be entered by a user in the main method.
MAIN PROGRAM starts
declare a , b , c as numbers
//The numbers should be positive and less than 200.
// see validNumCheck below.
//part 1.If not, the program asks the user to renter the number.
//part 2.The user will have three chances to enter a valid number for each number.
//part 3. If the number is still invalid after the three trials, the program displays an error message to the user and ends.
// ok then read a number and validate it.
attempts = 0;
maxAttempts = 3;
//part 2. three chances... .
loop_while ( attempts < maxAttempts ) do // or 3 directly.
number = readUserInput(); // part 1. reenter the number...
if( numcheck( number ) == false ) then
attempts = attempts + 1;
// one failure.. try again.
else
break the loop.
end
end
// part 3:. out of the loop.
// either because the attempts where exhausted
// or because the user input was correct.
if( attempts == maxAttempts ) then
displayError("The input is invalid due to ... ")
die();
else
a = number
end
// Now I have to repeat this for the other two numbers, b and c.
// see the notes below...
MAIN PROGRAM ENDS
```
And this would be the function to "validNumCheck"
```
// You are encouraged to write a separate method for this part of program – for example: validNumCheck
bool validNumCheck( num ) begin
if( num < 0 or num > 200 ) then
// invalid number
return false;
else
return true;
end
end
```
So, we have got to a point where a number "a" could be validated but we need to do the same for "b" and "c"
Instead of "copy/paste" your code, and complicate your life trying to tweak the code to fit the needs you can create a function and delegate that work to the new function.
So the new pseudo code will be like this:
```
MAIN PROGRAM STARTS
declare a , b , c as numbers
a = giveMeValidUserInput();
b = giveMeValidUserInput();
c = giveMeValidUserInput();
print( a, b , c )
MAIN PROGRAM ENDS
```
And move the logic to the new function ( or method )
The function giveMeValidUserInput would be like this ( notice it is almost identical to the first pseudo code )
```
function giveMeValidUserInput() starts
maxAttempts = 3;
attempts = 0;
loop_while ( attempts < maxAttempts ) do // or 3 directly.
number = readUserInput();
if( validNumCheck( number ) == false ) then
attempts = attempts + 1;
// one failure.. try again.
else
return number
end
end
// out of the loop.
// if we reach this line, it is because the attempts were exhausted.
displayError("The input is invalid due to ... ")
function ends
```
The validNumCheck doesn't change.
Passing from that to code will be somewhat straightforward, because you have already understood what you want to do ( analysis ) and how you want to do it ( design ).
Of course, this would be easier with experience.
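For reference, a literal Java translation of that pseudocode might look like this — a sketch only: names follow the pseudocode, and `Scanner` stands in for `readUserInput()` (the numbers are assumed to be `int`s):

```java
import java.util.Scanner;

class InputValidator {

    static final int MAX_ATTEMPTS = 3;

    // validNumCheck from the pseudocode: positive and less than 200.
    static boolean validNumCheck(int num) {
        return num > 0 && num < 200;
    }

    // giveMeValidUserInput from the pseudocode: up to three tries,
    // returns null when the tries are exhausted.
    static Integer giveMeValidUserInput(Scanner in) {
        for (int attempts = 0; attempts < MAX_ATTEMPTS; attempts++) {
            int number = in.nextInt();
            if (validNumCheck(number)) {
                return number;
            }
        }
        System.out.println("The input is invalid after " + MAX_ATTEMPTS + " tries.");
        return null;
    }

    public static void main(String[] args) {
        // Canned input for the demo; use new Scanner(System.in) for a real console.
        Scanner in = new Scanner("250 -5 42");
        System.out.println(giveMeValidUserInput(in)); // prints 42
    }
}
```

If all three tries fail, the caller receives `null` and can print the error and end the program, which mirrors the `die()` step above.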
Summary
The [steps to pass from problem to code are](https://stackoverflow.com/questions/137375/process-to-pass-from-problem-to-code-how-did-you-learn):
1. Read the problem and understand it ( of course ) .
2. Identify possible "functions" and variables.
3. Write how would I do it step by step ( algorithm )
4. Translate it into code, if there is something you cannot do, create a function that does it for you and keep moving. | In the [method signature](http://en.wikipedia.org/wiki/Method_signature) (that would be "`public static int validNumCheck(num1,num2,num3)`"), you have to declare the types of the [formal parameters](http://en.wikipedia.org/wiki/Parameter_(computer_science)#Parameters_and_arguments). "`public static int validNumCheck(int num1, int num2, int num3)`" should do the trick.
However, a better design would be to make `validNumCheck` take only one parameter, and then you would call it with each of the three numbers.
---
My next suggestion (having seen your updated code) is that you get a decent IDE. I just loaded it up in NetBeans and found a number of errors. In particular, the "illegal start of expression" is because you forgot a `}` in the while loop, which an IDE would have flagged immediately. After you get past "Hello world", Notepad just doesn't cut it anymore.
I'm not going to list the corrections for every error, but keep in mind that `int[]`, `int`, `String[]`, and `String` are all different. You seem to be using them interchangeably (probably due to the amount of changes you've done to your code). Again, an IDE would flag all of these problems.
---
Responding to your newest code (revision 12): You're getting closer. You seem to have used `MAX_TRIES` for two distinct purposes: the three numbers to enter, and the three chances for each number. While these two numbers are the same, it's better not to use the same constant for both. `NUM_INPUTS` and `MAX_TRIES` are what I would call them.
And you still haven't added the missing `}` for the while loop.
The next thing to do after fixing those would be to look at `if (getNumber == false)`.
`getNumber` is a `String[]`, so this comparison is illegal. You should be getting the return value of `validNumCheck` into a variable, like:
```
boolean isValidNum = validNumCheck(getNumber[0]);
```
And also, there's no reason for `getNumber` to be an array. You only need one `String` at a time, right? | Why does this code cause an "illegal start of expression" exception? | [
"",
"java",
"boolean",
"return-value",
"static-methods",
""
] |
I want to convert the following query into LINQ syntax. I am having a great deal of trouble managing to get it to work. I actually tried starting from LINQ, but found that I might have better luck if I wrote it the other way around.
```
SELECT
pmt.guid,
pmt.sku,
pmt.name,
opt.color,
opt.size,
SUM(opt.qty) AS qtySold,
SUM(opt.qty * opt.itemprice) AS totalSales,
COUNT(omt.guid) AS betweenOrders
FROM
products_mainTable pmt
LEFT OUTER JOIN
orders_productsTable opt ON opt.products_mainTableGUID = pmt.guid
LEFT OUTER JOIN orders_mainTable omt ON omt.guid = opt.orders_mainTableGUID AND
(omt.flags & 1) = 1
GROUP BY
pmt.sku, opt.color, opt.size, pmt.guid, pmt.name
ORDER BY
pmt.sku
```
The end result is a table that shows me information about a product as you can see above.
How do I write this query, in LINQ form, using comprehension syntax?
Additionally, I may want to add additional filters (to the orders\_mainTable, for instance).
Here is one example that I tried to make work, and was fairly close but am not sure if it's the "correct" way, and was not able to group it by size and color from the orders\_productsTable.
```
from pmt in products_mainTable
let Purchases =
from opt in pmt.orders_productsTable
where ((opt.orders_mainTable.flags & 1) == 1)
where ((opt.orders_mainTable.date_completedon > Convert.ToDateTime("01/01/2009 00:00:00")))
select opt
orderby pmt.sku
select new {
pmt.guid,
pmt.sku,
pmt.name,
pmt.price,
AvgPerOrder = Purchases.Average(p => p.qty).GetValueOrDefault(0),
QtySold = Purchases.Sum(p => p.qty).GetValueOrDefault(),
SoldFor = Purchases.Sum(p => p.itemprice * p.qty).GetValueOrDefault()
}
```
\*Edit:
To be a little more explicit so you can understand what I am trying to do, here is some more explanation.
Products are stored in products\_mainTable
Orders are stored in orders\_mainTable
Products That Have Been Ordered are stored in orders\_productsTable
I want to create several reports based on products, orders, etc. drilling into the data and finding meaningful bits to display to the end user.
In this instance, I am trying to show which products have been purchased over a period of time, and are the most popular. How many sold, for what price, and what is the breakout per order. Maybe not the best order, but I'm just experimenting and picked this one.
All of the tables have relationships to other tables. So from the product table, I can get to what orders ordered that product, etc.
The largest problem I am having, is understanding how LINQ works, especially with grouping, aggregate data, extensions, subqueries, etc. It's been fun, but it's starting to get frustrating because I am having difficulty finding detailed explanations on how to do this. | Bruno, thank you so much for your assistance! The FirstOrDefault() was probably the largest help. Following some of what you did, and another resource I came up with the following that seems to work beautifully! This LINQ query below gave me nearly an exact replication of the SQL I posted above.
Here's the other resource I found on doing a LEFT OUTER JOIN in LINQ: [Blog Post](http://solidcoding.blogspot.com/2007/12/left-outer-join-in-linq.html)
**Final Answer:**
```
from pmt in products_mainTable
join opt in orders_productsTable on pmt.guid equals opt.products_mainTableGUID into tempProducts
from orderedProducts in tempProducts.DefaultIfEmpty()
join omt in orders_mainTable on orderedProducts.orders_mainTableGUID equals omt.guid into tempOrders
from ordersMain in tempOrders.DefaultIfEmpty()
group pmt by new { pmt.sku, orderedProducts.color, orderedProducts.size } into g
orderby g.FirstOrDefault().sku
select new {
g.FirstOrDefault().guid,
g.Key.sku,
g.Key.size,
QTY = g.FirstOrDefault().orders_productsTable.Sum(c => c.qty),
SUM = g.FirstOrDefault().orders_productsTable.Sum(c => c.itemprice * c.qty),
AVG = g.FirstOrDefault().orders_productsTable.Average(c => c.itemprice * c.qty),
Some = g.FirstOrDefault().orders_productsTable.Average(p => p.qty).GetValueOrDefault(0),
}
``` | I'm also a beginner in LINQ. I don't know if this is the right way of grouping by several fields but I think you have to transform these grouping fields into a representing key. So, assuming that all your grouping fields are strings or ints you can make a key as follows:
```
var qry = from pmt in products_mainTable
join opt in orders_productsTable on pmt.guid equals opt.products_mainTableGUID
join omt in orders_mainTable on opt.orders_mainTableGUID equals omt.guid
where (opt.orders_mainTable.flags & 1) == 1
group omt by pmt.sku + opt.price + opt.size + pmt.guid + pmt.name into g
orderby g.sku
select new
{
g.FirstOrDefault().guid,
g.FirstOrDefault().sku,
g.FirstOrDefault().name,
g.FirstOrDefault().color,
g.FirstOrDefault().price,
AvgPerOrder = g.Average(p => p.qty).GetValueOrDefault(0),
QtySold = g.Sum(p => p.qty).GetValueOrDefault(),
SoldFor = g.Sum(p => p.itemprice * p.qty).GetValueOrDefault()
};
```
I didn't test this so please see if this helps you in any way. | LINQ to SQL: Complicated query with aggregate data for a report from multiple tables for an ordering system | [
"",
"c#",
"linq",
"linq-to-sql",
"grouping",
"aggregate-functions",
""
] |
I know merely checking for whether the type is not a value type is not sufficient. How can I account for those Nullable types?
**DUPLICATE**
[How to check if an object is nullable?](https://stackoverflow.com/questions/374651/how-to-check-if-an-object-is-nullable) | You can use Nullable.GetUnderlyingType that will return null if the type is not nullable. | have you tried the keyword [default(YourType)](http://msdn.microsoft.com/en-us/library/xwth0h0d(VS.80).aspx) ?
> Whether T will be a reference type or a value type.
>
> If T is a value type, whether it will be a numeric value or a struct. | How to determine programmatically in c# that a type accepts null value or not? | [
"",
"c#",
".net",
"nullable",
""
] |
Is there any easy LINQ expression to concatenate my entire `List<string>` collection items to a single `string` with a delimiter character?
What if the collection is of custom objects instead of `string`? Imagine I need to concatenate on `object.Name`. | ## Warning - Serious Performance Issues
Though this answer does produce the desired result, it suffers from poor performance compared to other answers here. Be very careful about deciding to use it.
---
By using LINQ, this should work;
```
string delimiter = ",";
List<string> items = new List<string>() { "foo", "boo", "john", "doe" };
Console.WriteLine(items.Aggregate((i, j) => i + delimiter + j));
```
class description:
```
public class Foo
{
public string Boo { get; set; }
}
```
Usage:
```
class Program
{
static void Main(string[] args)
{
string delimiter = ",";
List<Foo> items = new List<Foo>() { new Foo { Boo = "ABC" }, new Foo { Boo = "DEF" },
new Foo { Boo = "GHI" }, new Foo { Boo = "JKL" } };
Console.WriteLine(items.Aggregate((i, j) => new Foo{Boo = (i.Boo + delimiter + j.Boo)}).Boo);
Console.ReadKey();
}
}
```
And here is my best :)
```
items.Select(i => i.Boo).Aggregate((i, j) => i + delimiter + j)
``` | ```
string result = String.Join(delimiter, list);
```
is sufficient. | Concat all strings inside a List<string> using LINQ | [
"",
"c#",
"linq",
".net-3.5",
""
] |
Stylecop is telling me to use /// instead of // when I´m commenting. What is the semantic difference for those two and when should I use which one? | // denotes your own personal comments, where /// denote comments that can contain special data, e.g. markup for interpretation by Visual Studio - like:
```
/// <summary>
/// This class does something.
/// </summary>
class Bob {
...
}
```
This will tell Visual Studio the summary of the Bob class, and that will then show up in your code completion box. | Found it myself while going through further Stylecop rules:
Use // when commenting
Use /// when documenting for XML documentation headers | What is the difference between // and /// in Visual Studio? | [
"",
"c#",
"visual-studio",
"documentation",
""
] |
```
#define SAFE_DELETE(a) if( (a) != NULL ) delete (a); (a) = NULL;
```
OR
```
template<typename T> void safe_delete(T*& a) {
delete a;
a = NULL;
}
```
or any other better way | Clearly the function, for a simple reason. The macro evaluates its argument multiple times. This can have evil side effects. Also the function can be scoped. Nothing better than that :) | I would say neither, as both will give you a false sense of security. For example, suppose you have a function:
```
void Func( SomePtr * p ) {
// stuff
SafeDelete( p );
}
```
You set p to NULL, but the copies of p outside the function are unaffected.
However, if you must do this, go with the template - macros will always have the potential for tromping on other names. | Which version of safe_delete is better? | [
"",
"c++",
"pointers",
"c-preprocessor",
""
] |
I apologize for the newbie question, but I am struggling with this problem. I have the following TextBlock defined:
```
<TextBlock Text="{Binding Source={x:Static local:DeviceManager.Instance},
Path=Player.CurrentArtist}"></TextBlock>
```
The DeviceManager is a singleton that functions as a facade for other classes. For example, `Player` is a property of type `IPlayer` which represents a music-playing application. I would like the TextBlock to display the artist that is currently playing, which is periodically updated in the `Player.CurrentArtist` property.
Unfortunately, I cannot get the TextBlock to update when the `CurrentArtist` property updates. Both the `DeviceManager` and the `IPlayer` implement `INotifyPropertyChanged`, but when I step through the application, the `DeviceManager` does not have an event handler attached to it.
Does anyone have a suggestion for how to update the text block while preserving the singleton-facade?
Here is the code for the INotifyPropertyChanged members in both the DeviceManager and the IPlayer subclass:
```
public sealed class DeviceManager : INotifyPropertyChanged
{
// Singleton members omitted
public IPlayer Player
{
get { return player; }
set
{
this.player = value;
player.PropertyChanged += new PropertyChangedEventHandler(device_PropertyChanged);
}
}
#region INotifyPropertyChanged Members
public event PropertyChangedEventHandler PropertyChanged;
private void device_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null)
{
handler(sender, e);
}
}
#endregion
}
class MediaPlayer : IPlayer
{
private string artist;
private string title;
public event PropertyChangedEventHandler PropertyChanged;
public void Play(string artist, string title)
{
this.artist = artist;
this.title = title;
OnPropertyChanged("Player:Song");
}
private void OnPropertyChanged(string p)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null)
{
handler(this, new PropertyChangedEventArgs(p));
}
}
public string CurrentTitle
{
get { return title; }
}
public string CurrentArtist
{
get { return artist; }
}
}
``` | The problem is that WPF is never notified of the value of the `CurrentArtist` property changing. You can either implement a private setter for the `CurrentArtist` property, which will trigger the `PropertyChanged` event, or trigger a `PropertyChanged` event for the `CurrentArtist` property in `MediaPlayer.Play()`. | How does the UI know when you change the Player property? From that code it does not look like it raises PropertyChanged to me. Can you post a complete working sample of the problem? Otherwise we're forced to just guess. | TextBlock data-bound to Singleton not updating in WPF | [
"",
"c#",
"wpf",
""
] |
I want to find the text between a pair of <a> tags that link to a given site
Here's the re string that I'm using to find the content:
```
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?%s([^'"]*)("|')([^<>]*)>([^<]*))</a>''' % our_url
```
The result will be something like this:
```
r'''(<a([^<>]*)href=("|')(http://)?(www\.)?stackoverflow.com([^'"]*)("|')([^<>]*)>([^<]*))</a>'''
```
This is great for most links but it errors with a link with tags within it. I tried changing the final part of the regex from:
```
([^<]*))</a>'''
```
to:
```
(.*))</a>'''
```
But that just got everything on the page after the link, which I don't want. Are there any suggestions on what I can do to solve this? | Instead of:
```
[^<>]*
```
Try:
```
((?!</a).)*
```
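A quick check of the tempered pattern in Python (the `href` part is simplified here for readability — only the `((?!</a).)*` piece is the point):

```python
import re

# ((?:(?!</a).)*) consumes any character as long as it does not
# begin the closing </a tag, so nested tags like <b>...</b> survive.
pattern = re.compile(r"""<a[^<>]*href=["'][^"']*["'][^<>]*>((?:(?!</a).)*)</a>""")

link = '<a href="http://stackoverflow.com/">some <b>bold</b> text</a>'
print(pattern.search(link).group(1))  # some <b>bold</b> text
```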
In other words, match any character that isn't the start of a `</a` sequence. | ```
>>> import re
>>> pattern = re.compile(r'<a.+href=[\'|\"](.+)[\'|\"].*?>(.+)</a>', re.IGNORECASE)
>>> link = '<a href="http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there">Finding anchor text when there are tags there</a>'
>>> re.match(pattern, link).group(1)
'http://stackoverflow.com/questions/603199/finding-anchor-text-when-there-are-tags-there'
>>> re.match(pattern, link).group(2)
'Finding anchor text when there are tags there'
``` | Finding anchor text when there are tags there | [
"",
"python",
"regex",
""
] |
I was thinking of implementing real time chat using a PHP backend, but I ran across this comment on a site discussing comet:
> My understanding is that PHP is a
> terrible language for Comet, because
> Comet requires you to keep a
> persistent connection open to each
> browser client. Using mod\_php this
> means tying up an Apache child
> full-time for each client which
> doesn’t scale at all. The people I
> know doing Comet stuff are mostly
> using Twisted Python which is designed
> to handle hundreds or thousands of
> simultaneous connections.
Is this true? Or is it something that can be configured around? | Agreeing/expanding what has already been said, I don't think FastCGI will solve the problem.
## Apache
Each request into Apache will use one worker thread until the request completes, which may be a long time for COMET requests.
[This article on Ajaxian](http://ajaxian.com/archives/comet-with-apache-and-jetty) mentions using COMET on Apache, and that it is difficult. The problem isn't specific to PHP, and applies to any back-end CGI module you may want to use on Apache.
The suggested solution was to use the ['event' MPM module](http://httpd.apache.org/docs/2.2/mod/event.html) which changes the way requests are dispatched to worker threads.
> This MPM tries to fix
> the 'keep alive problem' in HTTP.
> After a client completes the first
> request, the client can keep the
> connection open, and send further
> requests using the same socket. This
> can save signifigant overhead in
> creating TCP connections. However,
> Apache traditionally keeps an entire
> child process/thread waiting for data
> from the client, which brings its own
> disadvantages. To solve this problem,
> this MPM uses a dedicated thread to
> handle both the Listening sockets, and
> all sockets that are in a Keep Alive
> state.
Unfortunately, that doesn't work either, because it will only 'snooze' *after* a request is complete, waiting for a new request from the client.
## PHP
Now, considering the other side of the problem, even if you resolve the issue with holding up one thread per comet request, you will still need one PHP thread per request - this is why FastCGI won't help.
You need something like [Continuations](http://en.wikipedia.org/wiki/Continuation) which allow the comet requests to be resumed when the event they are triggered by is observed. AFAIK, this isn't something that's possible in PHP. I've only seen it in Java - see the Apache [Tomcat server](http://tomcat.apache.org/tomcat-6.0-doc/aio.html).
**Edit:**
There's an [article here](http://iamseanmurphy.com/2009/03/02/high-performance-comet-on-a-shoestring/) about using a load balancer ([HAProxy](http://haproxy.1wt.eu/)) to allow you to run both an apache server and a comet-enabled server (e.g. jetty, tomcat for Java) on port 80 of the same server. | You could use Nginx and JavaScript to implement a Comet based chat system that is very scalable with little memory or CPU utilization.
I have a very simple example here that can get you started. It covers compiling Nginx with the NHPM module and includes code for simple publisher/subscriber roles in jQuery, PHP, and Bash.
<http://blog.jamieisaacs.com/2010/08/27/comet-with-nginx-and-jquery/> | Using comet with PHP? | [
"",
"php",
"comet",
""
] |
I'm trying to use IF/ELSE IF logic inside an inline table-valued function in SQL, returning a CONTAINSTABLE result based on that logic, but I'm having syntax problems with the IF/ELSE IF block. Thanks for the help. Since I can't parameterize the columns in the CONTAINSTABLE, I have to resort to using IF/ELSE statements. Here's the code. Thanks.
I'm getting
Msg 156, Level 15, State 1, Procedure FullTextSearch, Line 17
Incorrect syntax near the keyword 'IF'.
```
ALTER FUNCTION [dbo].[FullTextSearch]
(
@Columns nvarchar(100), @SearchPhrase nvarchar(100)
)
RETURNS TABLE
AS
RETURN
IF (@Columns='Title')
BEGIN
SELECT *
from projects as P inner join
containstable(PROJECTS, Title, @SearchPhrase) as K
on P.project_id = K.[KEY]
END
ELSE IF (@Columns='Project_Details')
BEGIN
SELECT *
from projects as P inner join
containstable(PROJECTS, Project_Details, @SearchPhrase) as K
on P.project_id = K.[KEY]
END
ELSE IF (@Columns='Contact_Info')
BEGIN
SELECT *
from projects as P inner join
containstable(PROJECTS, Contact_Info, @SearchPhrase) as K
on P.project_id = K.[KEY]
END
ELSE IF (@Columns='Project_Description')
BEGIN
SELECT *
from projects as P inner join
containstable(PROJECTS, Project_Description, @SearchPhrase) as K
on P.project_id = K.[KEY]
END
ELSE -- (@Columns='All')
BEGIN
SELECT *
from projects as P inner join
containstable(PROJECTS, (Title, Project_Details, Contact_Info, Project_Description), @SearchPhrase) as K
on P.project_id = K.[KEY]
END
``` | You're getting this error because you're trying to use multiple statements in an inline table-valued function.
An inline table-valued function must contain a single `SELECT` statement, something along the lines of:
```
CREATE FUNCTION dbo.MyInlineFunction (@MyParam INT)
RETURNS TABLE
AS
RETURN
(
SELECT first_column, second_column FROM ...
)
```
Your function needs to be declared as a multi-statement function, syntax similar to this:
```
CREATE FUNCTION dbo.MyMultiStatementFunction (@MyParam INT)
RETURNS @MyTable TABLE
(
first_column INT, second_column VARCHAR(100) -- etc
)
AS
BEGIN
IF (@MyParam = 1)
INSERT INTO @MyTable (...)
SELECT ... FROM ...
ELSE
INSERT INTO @MyTable (...)
SELECT ... FROM ...
RETURN
END
``` | First of all, you have == on the first IF instead of =
Oh yeah and I noticed that within the CONTAINSTABLE parameters in your ELSE you have ProjectDetails listed twice
If you are searching a specific field could you just use this syntax:
```
WHERE Title LIKE '%'+ @Search +'%'
OR ProjectDetails LIKE '%' + @Search + '%'
OR ContactInfo LIKE '%' + @Search + '%')
```
(Warning) This syntax ignores any indexing.
You can check this [blog](http://sqlblogcasts.com/blogs/simons/archive/2008/12/18/LINQ-to-SQL---Enabling-Fulltext-searching.aspx) for a solution that works with indexing. It seems to have a good solution. | Help! - SQL - IF Else IF logic for returning Containstable selection | [
"",
"sql",
"linq-to-sql",
"full-text-search",
"user-defined-functions",
""
] |
I was looking at the Java code for FileInputStream and am interested in what the open and close methods do. These functions, or parts of them, are implemented natively. Do you know where I can find this code? Is this code part of the open source license for the JDK? | ```
http://java.sun.com/j2se/1.5.0/source_license.html
http://download.java.net/jdk6/source/
http://openjdk.java.net/
http://www.gnu.org/software/classpath/
```
Here is a link to the FileInputStream code... there is also code specific to Windows and presumably other platforms in the appropriate directories.
<http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/0a5b87833562/src/share/native/java/io/FileInputStream.c> | [OpenJDK](http://openjdk.java.net/) should have everything you search for, e.g. the [Java 7 mercurial (hg) repository](http://hg.openjdk.java.net/jdk7/jdk7). A [zip file](http://www.java.net/download/openjdk/jdk7/promoted/b48/openjdk-7-ea-src-b48-19_feb_2009.zip) with everything from Feb 21th is also available.
If you don't want the brand new developer stuff from Java 7, there is also everything for the well known Java 6. | Where can you find the native code in the JDK? | [
"",
"java",
""
] |
```
DECLARE @TestVal int
SET @TestVal = 5
SELECT
CASE
WHEN @TestVal <=3 THEN 'Top 3'
ELSE 'Other'
END
```
I saw this sample code online but I couldn't find an example where there was no expression and it had more than one WHEN, so I am wondering if this type of thing is OK:
```
DECLARE @TestVal int
SET @TestVal = 5
SELECT
CASE
WHEN @TestVal <=3 THEN 'Top 3'
WHEN (select ...) = 1 THEN 'Other Value'
WHEN (select ...) = 2 THEN 'Other Value 2'
ELSE 'Other'
END
```
Or do I need to say CASE WHEN for each line? | [Case](http://msdn.microsoft.com/en-us/library/ms181765.aspx) takes the following form
```
CASE WHEN Condition THEN Result
WHEN Condition2 THEN Result2
ELSE Default
END
```
# Edit
This assumes your using Microsoft SQL Server other DBMS might be different | Yes, that's fine, but I would line up the "WHEN"s vertically and explain it more like this:
```
SELECT
CASE
WHEN @TestVal <=3 THEN 'Top 3'
WHEN @TestVal <=10 THEN 'Top 10'
WHEN @TestVAl <=25 THEN 'Top 25'
ELSE 'Other'
END
```
The formatting might just be a markdown glitch, but the `(select...)` in your example complicated what should be a simpler snippet. | Multiple WHEN inside no-expression CASE in SQL? | [
"",
"sql",
"case",
""
] |
I've been reading up on STL containers in my book on C++, specifically the section on the STL and its containers. Now I do understand each and every one of them has its own specific properties, and I'm close to memorizing all of them... But what I do not yet grasp is in which scenario each of them is used.
What is the explanation? Example code is much prefered. | [This cheat sheet](https://web.archive.org/web/20180824133558/homepages.e3.net.nz/%7Edjm/cppcontainers.html) provides a pretty good summary of the different containers.
See the flowchart at the bottom as a guide on which to use in different usage scenarios:

Created by [David Moore](http://linuxsoftware.co.nz/) and [licensed CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/nz/) | Here is a flowchart inspired by David Moore's version (see above) that I created, which is up-to-date (mostly) with the new standard (C++11). This is only my personal take on it, it's not indisputable, but I figured it could be valuable to this discussion:
 | In which scenario do I use a particular STL container? | [
"",
"c++",
"stl",
"container-data-type",
""
] |
I am working on a platform with a GCC compiler; however, Boost cannot compile on it.
I am wondering what is the proper way to include the shared\_ptr in std:tr1 on gcc? the file i looked in said not to include it directly, from what i can tell no other file includes it either :| | In **G++ 4.3**,
```
#include <tr1/memory>
```
should do the trick. You'll find `shared_ptr` at `std::tr1::shared_ptr`. | [Boost itself has the answer](http://www.boost.org/doc/libs/1_37_0/doc/html/boost_tr1/usage.html#boost_tr1.usage.include_style). | shared_ptr in std::tr1 | [
"",
"c++",
"gcc",
"boost",
"shared-ptr",
"tr1",
""
] |
Customer X has asked for ways to improve the startup time of a Java process he uses. The problem is, it is not run through a jar file, but rather 'jnlp' (which I am assuming indicates it is a java webstart application)
```
StartUserWorx.jnlp
```
Is there a way to convert this to a JAR file and then have the user invoke the application locally, and include a startup flag to the JRE to allocate more RAM to the process?
The user is getting a bit frustrated, because other workers in the same office use the same application on almost identical machines, and yet his process seems to always take a much longer time to load (a few minutes rather than a few seconds). | Check the network settings of the user. It is my experience that minute long delays often is caused by DNS misconfiguration, bad routing tables (DNS must time out when no positive response is received), or simply an incorrect lmhosts file. | Check the proxy settings on the slow machine. That user might not have correct proxy settings, and a lot of extra network traffic/timeouts could be occurring?
We've had that issue with webstart before, downloading a jar from a local server was round tripping through the proxy to the other coast and back. | Diagnosing and improving performance of a java jnlp compared to jar file | [
"",
"java",
"performance",
"memory",
"jar",
"java-web-start",
""
] |
Sometimes when a user is copying and pasting data into an input form we get characters like the following:
> didn’t,“ for beginning quotes and †for end quote, etc ...
I use this routine to sanitize most input on web forms (I wrote it a while ago but am also looking for improvements):
```
function fnSanitizePost($data) //escapes,strips and trims all members of the post array
{
if(is_array($data))
{
$areturn = array();
foreach($data as $skey=>$svalue)
{
$areturn[$skey] = fnSanitizePost($svalue);
}
return $areturn;
}
else
{
if(!is_numeric($data))
{
//with magic quotes on, the input gets escaped twice, which means that we have to strip those slashes. leaving data in your database with slashes in them, is a bad idea
if(get_magic_quotes_gpc()) //gets current configuration setting of magic quotes
{
$data = stripslashes($data);
}
$data = pg_escape_string($data); //escapes a string for insertion into the database
$data = strip_tags($data); //strips HTML and PHP tags from a string
}
$data = trim($data); //trims whitespace from beginning and end of a string
return $data;
}
}
```
I really want to avoid characters like I mention above from ever getting stored in the database, do I need to add some regex replacements in my sanitizing routine?
Thanks,
`-` Nicholas | I *finally* came up with a routine for replacing these characters. It took parsing some of the problematic strings one character at a time and returning the octal value of each character. In doing so I learned that smart quote characters come back as sets of 3 octal values. Here is routine I used to parse the string:
```
$str = "string_with_smart_quote_chars";
$ilen = strlen($str);
$sords = NULL;
echo "$str\n\n";
for($i=0; $i<$ilen; $i++)
{
$sords .= ord(substr($str, $i, 1))." ";
}
echo "$sords\n\n";
```
Here are the str\_replace() calls to "fix" the string:
```
$str = str_replace(chr(226).chr(128).chr(156), '"', $str); // start quote
$str = str_replace(chr(226).chr(128).chr(157), '"', $str); // end quote
$str = str_replace(chr(226).chr(128).chr(153), "'", $str); // for single quote
```
I am going to continue building up an array of these search/replacements which I am sure will continue to grow with the increasing use of these types of characters.
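Incidentally, those three-value sequences are just the UTF-8 encodings of the Unicode quote characters (226 128 156 = `0xE2 0x80 0x9C` = U+201C, and likewise 157 for U+201D and 153 for U+2019). The growing replacement table can also be written as a single array passed to `strtr()` — an untested sketch of the equivalent:

```php
$replacements = array(
    "\xE2\x80\x9C" => '"',  // U+201C left double quotation mark
    "\xE2\x80\x9D" => '"',  // U+201D right double quotation mark
    "\xE2\x80\x99" => "'",  // U+2019 right single quotation mark
);
$str = strtr($str, $replacements);
```

New byte sequences can be appended to the array as more of these characters show up.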
I know that there are some canned routines for replacing these but I had no luck with any of them on the Solaris 10 platform that my scripts are running on.
-- Nicholas | > didn’t,“ for beginning quotes and †for end quote
That's not junk, those are legitimate “smart quote” characters that have been passed to you encoded as UTF-8, but read, incorrectly, as ISO-8859-1.
You can try to get rid of them or try to parse them into plain old Latin-1 using utf8_decode, but if you do you'll have an application that won't let you type anything outside ASCII, which in this day and age is a pretty poor show.
Better if you can manage it is to have all your pages served as UTF-8, all your form submissions coming in as UTF-8, and all your database contents stored as UTF-8. Ideally, your application would work internally with all Unicode characters, but unfortunately PHP as a language doesn't have native Unicode strings, so it's usually a case of holding all your strings also as UTF-8, and taking the risk of occasionally truncating a UTF-8 sequence and getting a �, unless you want to grapple with mbstring.
> $data = pg\_escape\_string($data); //escapes a string for insertion into the database
>
> $data = strip\_tags($data); //strips HTML and PHP tags from a string
You don't want to do that as a sanitisation measure coming into your application. Keep all your strings in plain text form for handling them, then pg\_escape\_string() only on the way out to a Postgres query, and htmlspecialchars() only on the way out to an HTML page.
Otherwise you'll get weird things like SQL escapes appearing on variables that have passed straight through the script to the output page, and no-one will be able to use a plain less-than character.
One thing you *can* usefully do as a sanitisation measure is to remove any control codes in strings (other than newlines, \n, which you might conceivably want).
```
$data= preg_replace('/[\x00-\x09\x0B-\x19\x7F]/', '', $data);
``` | Routine for removing ALL junk from incoming strings? | [
"",
"php",
"string",
"sanitization",
""
] |
I have a c++ class, let's say it's called c, and I want to use the implicit copy constructor in one of the methods, like this:
```
c c::do_something() {
c copy = this; //I want implicit copy constructor here!
copy.something_else();
//........//
return copy;
}
```
However, gcc returns this error:
> error: invalid conversion from 'c\* const' to 'long unsigned int'
(I have another constructor from long unsigned int present)
... as if the copy constructor didn't exist. What am I doing wrong? | *this* is a pointer to an object so it should be
```
c copy = *this;
``` | Quite an aside, but it won't really fit in a comment and there seems to be some disagreement. Try this piece of code out to get an idea of when copy-constructors and assignment operators get called:
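To make the dereference concrete, here is a minimal, self-contained sketch (the member `value` and the increment are invented for illustration; only the `c copy = *this;` line comes from the original code):

```cpp
#include <cassert>

class c {
public:
    explicit c(int v) : value(v) {}

    // Copy *this via the implicit copy constructor, modify the copy,
    // and return it; the original object is left untouched.
    c do_something() const {
        c copy = *this;  // "this" is a pointer; "*this" is the object itself
        copy.value += 1;
        return copy;
    }

    int value;
};
```

Because the assignment copies the object, changes to `copy` never affect the original.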
```
class A
{
public:
A() { cout << "A::A()\n"; }
A(const A &) { cout << "A::A(const A &)\n"; }
void operator =(const A &) { cout << "A::operator=(const A &)\n"; }
};
int main()
{
A a; // default constructor
A copy = a; // copy constructor (and no assignment operator)
A copy2(a); // copy constructor (and no assignment operator)
a = copy; // assignment operator
A function(); // declares a function return A
A b = A(); // default constructor (but no assignment operator)
A b2 = A(a); // copy constructor (but no assignment operator)
}
``` | Using "this" as a parameter to copy constructor | [
"",
"c++",
""
] |
Let's say I have a library function that I cannot change which produces an object of class A, and I have created a class B that inherits from A.
What is the most straightforward way of using the library function to produce an object of class B?
Edit: I was asked in a comment for more detail, so here goes:
PyTables is a package that handles hierarchical datasets in python. The bit I use most is its ability to manage data that is partially on disk. It provides an 'Array' type which only comes with extended slicing, but I need to select arbitrary rows. Numpy offers this capability - you can select by providing a boolean array of the same length as the array you are selecting from. Therefore, I wanted to subclass Array to add this new functionality.
In a more abstract sense this is a problem I have considered before. The usual solution is, as has already been suggested, to have a constructor for B that takes an A and additional arguments, and then pulls out the relevant bits of A to insert into B. As it seemed like a fairly basic problem, I asked the question to see if there were any standard solutions I wasn't aware of.
One thing you can do is write a function to take the fields of the A instance and copy them over into a new B instance:
```
class A: # defined by the library
def __init__(self, field):
self.field = field
class B(A): # your fancy new class
def __init__(self, field, field2):
self.field = field
self.field2 = field2 # B has some fancy extra stuff
def b_from_a(a_instance, field2):
"""Given an instance of A, return a new instance of B."""
return B(a_instance.field, field2)
a = A("spam") # this could be your A instance from the library
b = b_from_a(a, "ham") # make a new B which has the data from a
print b.field, b.field2 # prints "spam ham"
```
Edit: depending on your situation, **composition instead of inheritance** could be a good bet; that is your B class could just contain an instance of A instead of inheriting:
```
class B2: # doesn't have to inherit from A
def __init__(self, a, field2):
self._a = a # using composition instead
self.field2 = field2
@property
def field(self): # pass accesses to a
return self._a.field
# could provide setter, deleter, etc
a = A("spam")
b = B2(a, "ham")
print b.field, b.field2 # prints "spam ham"
``` | This can be done if the initializer of the subclass can handle it, or you write an explicit upgrader. Here is an example:
```
class A(object):
def __init__(self):
self.x = 1
class B(A):
def __init__(self):
super(B, self).__init__()
self._init_B()
def _init_B(self):
self.x += 1
a = A()
b = a
b.__class__ = B
b._init_B()
assert b.x == 2
``` | Converting an object into a subclass in Python? | [
"",
"python",
""
] |
I am trying to learn how Hibernate works, and I am running into an almost unacceptable learning curve. I can't see how to get Hibernate to respect the auto\_increment policy for my objects. Instead, it is overwriting entries in the database with existing IDs, beginning with 1.
I have a simple `Foo` object, backed by a MySQL table defined like this:
```
CREATE TABLE `Foo` (
`fooId` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`fooId`)
)
```
I have confirmed that inserting multiple Foo objects by hand with SQL (`insert into Foo values();`) does the right thing.
My Java class has the ID specified using annotations like this:
```
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
@Column(name="fooId")
private Integer id;
```
I then execute some test code that simply instantiates Foo objects and saves them to the database (using `session.save(obj)`). It seems that it uses its own primary key sequence, beginning with one, and does not look at the table's key policy. It overwrites whatever was there.
I have tried variations on the `@GeneratedValue` bit (using all possible strategies, leaving off the parenthetic clause). Somebody even suggested leaving off the `GeneratedValue` entirely. Nothing seems to work.
Am I leaving something out? What am I missing? Is Hibernate really this hard?
(If anybody has an alternative Java database persistence option, please suggest one. I am making prototypes, not long-lasting mondo-engineered projects.) | I believe you want `GenerationType.IDENTITY`. MySql does not use a table or sequence for generating the Id value. | I wrote this in a comment under the accepted answer, but those aren't shown by default so I'll re-post it as an answer.
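In annotation terms, the change is just the generation strategy on the id field — a fragment based on the question's mapping:

```java
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "fooId")
private Integer id;
```

`IDENTITY` delegates key generation to the database's `AUTO_INCREMENT` column instead of a Hibernate-managed sequence or table.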
I was using a hibernate.cfg.xml file off some dude's web site, and it had this:
```
<property name="hibernate.hbm2ddl.auto">create</property>
```
This made the system re-create my table each time I ran my app. Commenting it out solved the problem.
The other two answers about the various ways to create IDs are correct. My original problem's **symptom** seemed to do with ID generation, but the actual cause was misconfiguration. | Hibernate not respecting MySQL auto_increment primary key field | [
"",
"java",
"mysql",
"hibernate",
""
] |
I want to get the number of rows in my MySQL table and store that number in a php variable. This is the code I'm using:
```
$size = @mysql_query("SELECT COUNT(*) FROM News");
```
$size ends up being "Resource ID #7." How do I put the number of rows directly into $size? | mysql\_query returns a query resource id. In order to get values from it you need to use mysql\_fetch\_assoc on the resource id to fetch a row into an array.
```
$result = mysql_query("SELECT COUNT(*) FROM News");
$row = mysql_fetch_assoc($result);
$size = $row['COUNT(*)'];
``` | You need to call [mysql\_fetch\_row](http://us.php.net/manual/en/function.mysql-fetch-row.php) or one of its [sister functions](http://us.php.net/mysql).
```
<?php
// untested
$result = @mysql_query("SELECT COUNT(*) FROM News");
// error handling
$row = mysql_fetch_row($result);
$count = $row[0];
?>
``` | How can I store the result of an SQL COUNT statement in a PHP variable | [
"",
"php",
"mysql",
""
] |
When you start to use a third party library in Java, you add their jars to your project. Do you also add the jars to the repository, or do you just keep a local installation? And if the latter, how do you sync between team members to be able to work?
Thanks. | No, if you use Maven. Put them into Maven repository (if they are not there yet, most open source libraries are in public Maven repositories already).
Yes, if you use Ant. | Yes. You should add to the repository whatever is required for a developer on a clean system (aside from having the JDK and ant installed) to check out and build the project. | Should libraries (jar) go into the repository? | [
"",
"java",
"jar",
"repository",
"project",
""
] |
In MySQL you can quite simply read a file with `SELECT load_file('foo.txt')` and write to a file with `SELECT 'text' INTO DUMPFILE 'foo.txt'`.
I am looking for a way to do this in MSSQL 2000 without using any external tools, just with queries or stored procedures. | For input, you can use the text file ODBC driver:
```
select * from OpenRowset('MSDASQL',
'Driver={Microsoft Text Driver (*.txt;
*.csv)};DefaultDir=c:\;','select * from [FileName.csv]')
```
Assuming the output file already exists, you can use Jet to write to it:
```
INSERT INTO
OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Text;Database=C:\;HDR=Yes;', 'SELECT
* FROM filename.csv') SELECT 1234,5678
``` | There are a few ways you can do this.
using xp\_cmdshell:
```
declare @writetofile varchar(1000)
select @writetofile = 'osql -U -P -S -Q"select * from yourtable" -o"c:\foo.txt"'
exec master..xp_cmdshell @writetofile
```
Or you can use bcp from the command line to read/write to files, which is usually quicker to do bulk inserts/selects
Alternatively, you can also use sp\_OACreate, but you probably don't want to go this route since it's not exactly what databases are meant to do. :) | How can I read/write from/to files with MSSQL 2000? | [
"",
"sql",
"sql-server",
""
] |
How would you prevent the user from adding or removing lines in a TextBox? By that I mean, if I set the text in a textbox to 7 lines with some text, how can I make sure that it will always be 7 lines of text there? The user must be able to edit those lines like usual, but not remove a line entirely, and not add any new ones.
Would have to take into account both keyboard input and also things like cut and paste, etc.
Any good ideas?
---
**Reason:** I want to make a file renamer kind of like [Oscar's Renamer](http://www.mediachance.com/free/renamer.htm). You give it a folder, and it loads the filenames into a textbox where you can do changes pretty much like you do in a text editor. When you are happy with your changes, you write them back. Reason for constant `n` lines in textbox is then of course that line `n` is the name of file `n`. Adding a new line shouldn't be allowed since you only have those files in that folder. Removing a line should also not be allowed since you would then be missing a name for a file.
Why go through the trouble of making something like this when it already exists? Well, I am curious to see if I could do it, and thought it could be a nice excercise to learn a few things along the way. Since it has some interesting problems that needs to be solved. Like this one :) There are also some features I think are lacking in that [Oscar's Renamer](http://www.mediachance.com/free/renamer.htm). So... to sum up: I'm doing it to learn and to try make an even better version of it. I know fully well that I may just as well fail completely though, or just never finish it :p But that is another story. I want to learn | One possible way of doing this is to sub-class the Textbox control and override the winProc method. This method handles all window messages pumped to the windowed control (Textbox in your case). You could monitor the use of the backspace and delete keys and carat position and discard the key strokes that attempt to remove carriage return line feed sequences. And provide the user with an interactive alert that tells them why they cannot remove entire lines.
Doing it this way gives you complete control and is the lowest level way to see **all** input that is coming into your Textbox control. You can intercept certain messages and discard them, the ones that you want to allow just pass them through into the base class method. Such as if the user highlights all lines and hits the delete key. There is other event handlers that you can use to intercept keyboard input but they have some limitations, the winProc will allow you to check any message directed to the control including delete, backspace copy and paste etc, mouse clicks etc.
Sample:
```
public class myCustomTextBox : TextBox
{
protected override void WndProc(ref Message m)
{
if (m.Msg == 770) // window paste message id
{
string clipBoardData = Clipboard.GetDataObject().GetData(DataFormats.Text).ToString();
handlePasteEvent(clipBoardData);
}
else
{
base.WndProc(ref m);
}
}
private void handlePasteEvent(string pasteData)
{
// process pasted data
}
}
``` | You have the wrong interface then for the data. In this case, you should have a fixed number of textboxes, one for each line of data. This would allow the user to modify the contents of each line, but not remove a line or add a line.
Trying to make a multi-line textbox do this will be maddening at best, since you will have to determine when a new line is added/removed and then kill the change. | How would you prevent user from adding/removing lines in a TextBox? | [
"",
"c#",
"winforms",
""
] |
I have a function that exports values into a CSV file, but the "comments" field has commas in it and it messes up the columns when you open it in a spreadsheet program.
Is there a way around exporting it properly?
```
//this exports only names and comments into a CSV file
function exportNamesCommentsCSV($table, $columns) {
$file = "volunteer_comments";
$csv_output = '';
$table = 'volunteers_2009';
$result = mysql_query("select * FROM ".$table."");
$i = 0;
if (mysql_num_rows($result) > 0) {
while ($row = mysql_fetch_assoc($result)) {
$csv_output .= $row['Field'].", ";
$i++;
}
}
$csv_output .= "\n";
$values = mysql_query("SELECT lname, fname, email, comments FROM ".$table."");
while ($rowr = mysql_fetch_row($values)) {
for ($j=0;$j<$i;$j++) {
$csv_output .= $rowr[$j].", ";
}
$csv_output .= "\n";
}
$filename = $file."_".date("Y-m-d_H-i",time());
header("Content-type: application/vnd.ms-excel");
header("Content-disposition: csv" . date("Y-m-d") . ".csv");
header( "Content-disposition: filename=".$filename.".csv");
print $csv_output;
exit;
}
```
EDIT: This would be an example of a comment...
I would prefer to work the gates, but I can work anywhere really.
The , (comma) in the example comment above throws off the proper column structure. | What you need to do is enclose values in quotes, like so:
```
"value","value","value"
```
This will correctly handle commas in the values, and numeric values will also still work fine.
To put a quote in the value, however, you need to put two quotes together. E.g. representing this string:
```
Bobby says, "Hi!"
```
it will need to be represented as
```
"some value","Bobby says, ""Hi!""","Some other value"
```
You can do this with:
```
$csv_value = '"' . str_replace('"', '""', $orig_value) . '"'; // str_replace is simpler here (eregi_replace is deprecated)
``` | PHP has functions for dealing with making comma separated files for you.
Check out the [`fputcsv`](https://www.php.net/manual/en/function.fputcsv.php) function. It lets you do something like this:
```
<?php
$list = array (
array('this has, commas, in it','bbb','ccc','dddd'),
array('123','456','789', 'this is "like this"'),
array('1234','124','1242', 'yay, woo"!"')
);
$fp = fopen('file.csv', 'w');
foreach ($list as $record) {
fputcsv($fp, $record);
}
fclose($fp);
?>
```
The file `file.csv` will have this in it:
```
"this has, commas, in it",bbb,ccc,dddd
123,456,789,"this is ""like this"""
1234,124,1242,"yay, woo""!"""
```
Later on when you need to read it, if you ever do, you can use [`fgetcsv`](https://www.php.net/manual/en/function.fgetcsv.php).
So in your situation, all you need to do is make an array of arrays that each have the name and the comment, and then put it through `fputcsv` and you're golden. | How do I export to CSV file with table record value with comma in it properly? | [
"",
"php",
"mysql",
"csv",
""
] |
I have a simple HTML page that looks like this:
```
...<div id="main">
<a href="#">Click here!</a>
</div>...
```
I have a piece of jQuery JavaScript in the header that looks like this:
```
<script type="text/javascript">
$(document).ready(function() {
DoHello();
});
function DoHello()
{
$("div#main a").text("Click here!");
$("div#main a").attr("onmouseup", "javascript:alert('Hello!');");
}
</script>
```
When I click the HTML link in FireFox then I get an alert that says 'Hello!'. Why does this not work in IE7/8?
When I look at the (dynamically) build DOM in IE then I can see the `onmouseup` is present but it is never called. I have tried replacing `onmouseup` with `onclick` - same problem... | You shouldn't be using the JavaScript pseudo protocol for anything.
This should work:
```
function DoHello()
{
$("div#main a")
.text("Click here!")
.mouseup(function(){
alert('Hello!');
});
}
``` | Don't use expando events, use jQuery!
```
$("div#main a").mouseup(function(){ alert('Hello!') });
``` | JavaScript / jQuery problem on Internet Explorer 7 | [
"",
"javascript",
"jquery",
"jquery-events",
""
] |
I have a C# web application on .net 2.0 being hosted on server A. The web application allows users to upload files using `<input id="File1" name="filMyFile" type="file" runat="server" />` to server A. This all works just fine.
I am now being asked to modify the web application so that pages served by A can upload directly to server B, without storing any information on A, not even temporarily.
I am being asked to do this for security reasons. I was thinking about possibly using an iframe and having server B host only the upload portion, wrapping the request with SSL. I am not entirely sure of the security implications of doing this; however, I have seen a few websites in which, for their login controls, they SSL only an iframe which contains the login portion while the rest of their site is insecure.
Is this an OK thing to do? Can someone recommend a better way? Perhaps suggest a basic architecture. | I have used the iframe approach, where the iframe code with the upload form is hosted on Server B. It worked reasonably well. You could also host the entire page on Server B if you want. | Why don't you just set the `form` action to the SSL secured page on server B?
```
<form method="POST" action="https://serverb/uploadmanager.ashx" ...>
``` | Securely upload file to alternate server C# .net 2.0 | [
"",
"c#",
"security",
"redirect",
"file-upload",
""
] |
I am looking for ways to decode a PKCS#12 file in .NET. I need to extract the private key and any certificates so that I can programmatically access the following.
* modulus
* publicExponent
* privateExponent
* prime1
* prime2
* exponent1
* exponent2
* coefficient
I need this information so that I can successfully use PKCS#11 to create a private key and certificate on a USB token.
I have found a website that uses [OpenSSL](http://www.claushc.dk/ssl/) to output this data. I was pretty excited when I found [OpenSSL.NET](http://sourceforge.net/projects/openssl-net); however, the functionality to split PKCS#12 files hasn't been implemented yet. I was wondering if anyone knew of any alternatives.
Thanks
Rohan | I've used bouncy castle API extensibly on a recent project, on its Java port, and it works wonders, flawlessly.
I bet their C# isn't much different, and it does a really good work at what is targeted.
<http://www.bouncycastle.org/> | Cheers Manuel,
I downloaded the Bouncy Castle API and it didn't take long to find what i needed. The source code includes an extensive list of unit tests.
```
static void Main(string[] args)
{
char[] password = new char[] {'p','a','s','s','w','o','r','d'};
using(StreamReader reader = new StreamReader(@"Test.pfx"))
{
Pkcs12Store store = new Pkcs12Store(reader.BaseStream,password);
foreach (string n in store.Aliases)
{
if(store.IsKeyEntry(n))
{
AsymmetricKeyEntry key = store.GetKey(n);
if(key.Key.IsPrivate)
{
RsaPrivateCrtKeyParameters parameters = key.Key as RsaPrivateCrtKeyParameters;
Console.WriteLine(parameters.PublicExponent);
}
}
}
}
}
``` | Decode a PKCS#12 file | [
"",
"c#",
"pfx",
"pkcs#12",
""
] |
I'm having some trouble with a particular piece of code; if anyone can enlighten me on this matter it would be greatly appreciated. I've isolated the problem down to the following sample:
```
#include <iostream>
using namespace std;
class testing{
int test();
int test1(const testing& test2);
};
int testing::test(){
return 1;
}
int testing::test1(const testing& test2){
test2.test();
return 1;
}
```
So what could possibly have caused the following error:
test.cpp:15: error: passing ‘const testing’ as ‘this’ argument of ‘int testing::test()’ discards qualifiers
Thanks a lot! | The problem is calling a non-`const` function `test2.test()` on a `const` object `test2` from `testing::test1`.
`testing::test1` gets `test2` as a parameter `const testing &test2`. So within `testing::test1`, `test2` is `const`. Then in the first line of the function:
```
test2.test()
```
The `testing::test` function is called on `test2`. That function is not declared with `const` at the signature end, so it may modify the object it is called on (the `this` pointer implicitly passed to it), and even though it does not, the compiler assumes so. By letting you call it there, the compiler would let you modify a `const` variable without an explicit cast, which C++ is not supposed to allow. **Therefore to explain the error message**:
```
test.cpp:15: error: passing ‘const testing’ as ‘this’ argument of ‘int testing::test()’ discards qualifiers
```
`this` refers to the object the member function (`testing::test`) operates on, and in this case it is not `const`, because `testing::test` was not declared with `const`, and thus the mismatch is detected when trying to make a non-`const` pointer (`this`) refer to a `const` object (`testing`), ignoring the `const` *qualifier*.
**To solve this**, decide whether the `testing::test` function should ever need to modify the object it is called on (the way it is written now it does not, as all it does is `return 1`, however that may change, so you need to think at what its intended functionality is). If it should, then obviously calling it on a `const` object is bad, although you can use `const_cast` to ask the compiler to override that, but this is dangerous. If it should not, then mark it `const`, so that it can be called on `const` objects as well:
```
class testing{
    int test() const;
    // ...
};
int testing::test() const {
// ...
}
``` | Because of the definition of the member function test1:
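As a quick sanity check of the fix, a `const`-qualified member function can be called through a `const` reference, while a non-`const` one cannot. A minimal sketch (class and member names here are invented for illustration, not from the original code):

```cpp
#include <cassert>

class Counter {
public:
    explicit Counter(int n) : n_(n) {}

    int get() const { return n_; }  // const: callable on const objects
    void bump() { ++n_; }           // non-const: rejected on const objects

    // "other" is a const reference, so only const member functions of it
    // may be called here -- other.bump() would not compile.
    int sum(const Counter& other) const { return n_ + other.get(); }

private:
    int n_;
};
```

This mirrors the original error: calling a non-`const` member like `bump()` on `other` would "discard qualifiers" in exactly the same way.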
```
int testing::test1(const testing& test2){
test2.test();
return 1;
}
```
You are passing in a const reference of for the variable test2.
That means that you cannot modify any member of test2 and you cannot call any member function that is not const or is not static.
Here is how you can fix:
```
int testing::test() const {
return 1;
}
```
The extra const at the end tells the compiler that you are not planning on modifying the content of the current object (and if you do you will get a different compiling error). | An odd C++ error: test.cpp:15: error: passing ‘const *’ as ‘this’ argument of ‘*’ discards qualifiers | [
"",
"c++",
"compiler-errors",
"constants",
""
] |
I have created a CSS page called style.php and included this at the top:
`<?php header("Content-type: text/css"); ?>`
Does this make you cringe? Is it a terrible idea? I am doing this because I have created a CMS that allows the admin to control colors on pages (so the style.php script queries the database and grabs the hex values).
Any thoughts? | This is a fine solution, just make sure that you are serving up the appropriate headers. See [my blogpost](http://hype-free.blogspot.com/2007/07/serving-up-authenticated-static-files.html) about a related topic (search for "The important headers are" to get to the right section).
One more thing:
With the caching you might get into the situation where the user changes the color she wants to see, but (because it is cached at the client) the page doesn't update. To invalidate the cache, append `?id=N` to the end of the URL, where `N` is a number that is stored for the user (for example in the session) and is incremented every time she changes the color scheme.
Example:
* At first the user has a stylesheet of <http://example.com/style.php?id=0>
* When she changes the colors, she will get the url of <http://example.com/style.php?id=1> and so on. | It's not a bad idea (subject to the notes about caching + content-type), but think about the cost of firing up a PHP instance (mod\_php) or passing the script to an already running php (fastcgi style). Do you really want that overhead?
You might be better off writing a "cached" version of your CSS page to a static file, and serving that (or if you need per-page flexibility, selecting which style sheet to include; I assume your main page is PHP already) | Is it a bad idea to use php in css documents? | [
"",
"php",
"css",
""
] |
I have a loop in a BackgroundWorker that saves some stuff via XML serialization when needed, but this seems to load a new assembly each time:
> 'xxyyzz.Main.vshost.exe' (Managed): Loaded '9skkbvdl'
>
> 'xxyyzz.Main.vshost.exe' (Managed): Loaded 'd2k4bdda'
and so on. Why is this happening? Is there anything I can do about it? Is this something I should be concerned about? This program will stay running for a long, long time with no restart...
```
static readonly XmlSerializer foo;
static MyType() {
foo = new XmlSerializer(typeof(TypeToSerialize), additionalArgs);
}
```
then use the cached `foo` serializer instance repeatedly. | Loading new assembly when Serializing | [
"",
"c#",
"memory-management",
"assemblies",
"xml-serialization",
""
] |
I'm having a bit of trouble getting a Python regex to work when matching against text that spans multiple lines. The example text is (`\n` is a newline)
```
some Varying TEXT\n
\n
DSJFKDAFJKDAFJDSAKFJADSFLKDLAFKDSAF\n
[more of the above, ending with a newline]\n
[yep, there is a variable number of lines here]\n
\n
(repeat the above a few hundred times).
```
I'd like to capture two things:
* the `some Varying TEXT` part
* all lines of uppercase text that come two lines below it in one
capture (I can strip out the newline characters later).
I've tried a few approaches:
```
re.compile(r"^>(\w+)$$([.$]+)^$", re.MULTILINE) # try to capture both parts
re.compile(r"(^[^>][\w\s]+)$", re.MULTILINE|re.DOTALL) # just textlines
```
...and a lot of variations hereof with no luck. The last one seems to match the lines of text one by one, which is not what I really want. I can catch the first part, no problem, but I can't seem to catch the 4-5 lines of uppercase text.
I'd like `match.group(1)` to be `some Varying Text` and `group(2)` to be line1+line2+line3+etc until the empty line is encountered.
If anyone's curious, it's supposed to be a sequence of amino acids that make up a protein. | Try this:
```
re.compile(r"^(.+)\n((?:\n.+)+)", re.MULTILINE)
```
I think your biggest problem is that you're expecting the `^` and `$` anchors to match linefeeds, but they don't. In multiline mode, `^` matches the position immediately *following* a newline and `$` matches the position immediately *preceding* a newline.
Be aware, too, that a newline can consist of a linefeed (`\n`), a carriage-return (`\r`), or a carriage-return+linefeed (`\r\n`). If you aren't certain that your target text uses only linefeeds, you should use this more inclusive version of the regex:
```
re.compile(r"^(.+)(?:\n|\r\n?)((?:(?:\n|\r\n?).+)+)", re.MULTILINE)
```
BTW, you don't want to use the DOTALL modifier here; you're relying on the fact that the dot matches everything *except* newlines. | This will work:
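A quick sanity check of the simpler pattern above (the sample text here is invented for illustration):

```python
import re

# Title line, then a blank line, then one or more uppercase lines.
pattern = re.compile(r"^(.+)\n((?:\n.+)+)", re.MULTILINE)

text = "some Varying TEXT\n\nAAABBB\nCCCDDD\n"
match = pattern.search(text)

title = match.group(1)
# Group 2 still contains embedded newlines; strip them afterwards.
sequence = match.group(2).replace("\n", "")
```

Here `title` comes out as `some Varying TEXT` and `sequence` as the concatenated uppercase lines, matching the two captures the question asks for.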
```
>>> import re
>>> rx_sequence=re.compile(r"^(.+?)\n\n((?:[A-Z]+\n)+)",re.MULTILINE)
>>> rx_blanks=re.compile(r"\W+") # to remove blanks and newlines
>>> text="""Some varying text1
...
... AAABBBBBBCCCCCCDDDDDDD
... EEEEEEEFFFFFFFFGGGGGGG
... HHHHHHIIIIIJJJJJJJKKKK
...
... Some varying text 2
...
... LLLLLMMMMMMNNNNNNNOOOO
... PPPPPPPQQQQQQRRRRRRSSS
... TTTTTUUUUUVVVVVVWWWWWW
... """
>>> for match in rx_sequence.finditer(text):
... title, sequence = match.groups()
... title = title.strip()
... sequence = rx_blanks.sub("",sequence)
... print "Title:",title
... print "Sequence:",sequence
... print
...
Title: Some varying text1
Sequence: AAABBBBBBCCCCCCDDDDDDDEEEEEEEFFFFFFFFGGGGGGGHHHHHHIIIIIJJJJJJJKKKK
Title: Some varying text 2
Sequence: LLLLLMMMMMMNNNNNNNOOOOPPPPPPPQQQQQQRRRRRRSSSTTTTTUUUUUVVVVVVWWWWWW
```
---
Some explanation about this regular expression might be useful: `^(.+?)\n\n((?:[A-Z]+\n)+)`
* The first character (`^`) means "starting at the beginning of a line". Be aware that it does not match the newline itself (same for $: it means "just before a newline", but it does not match the newline itself).
* Then `(.+?)\n\n` means "match as few characters as possible (all characters are allowed) until you reach two newlines". The result (without the newlines) is put in the first group.
* `[A-Z]+\n` means "match as many upper case letters as possible until you reach a newline". This defines what I will call a *textline*.
* `((?:`*textline*`)+)` means match one or more *textlines* but do not put each line in a group. Instead, put **all** the *textlines* in one group.
* You could add a final `\n` in the regular expression if you want to enforce a double newline at the end.
* Also, if you are not sure about what type of newline you will get (`\n` or `\r` or `\r\n`) then just fix the regular expression by replacing every occurrence of `\n` by `(?:\n|\r\n?)`. | Regular expression matching a multiline block of text | [
"",
"python",
"regex",
"multiline",
""
] |
Which is the best way to parse an XML file in PHP?
**First**
Using the DOM object
```
//code
$dom = new DOMDocument();
$dom->load("xml.xml");
$root = $dom->getElementsByTagName("tag");
foreach($root as $tag)
{
    $subChild = $tag->getElementsByTagName("child");
// extract values and loop again if needed
}
```
**Second**
Using the simplexml\_load Method
```
// code
$xml = simplexml_load_file("xml.xml");
$root = $xml->root;
foreach($root as $tag)
{
$subChild = $tag->child;
// extract values and loop again if needed
}
```
Note:
These are the two I am aware of. If there are more, fill them in.
I want to know which method is best for parsing huge XML files, and also which method is the **fastest**, irrespective of how it needs to be implemented.
Size will be varying from 500KB to 2MB. The parser should be able to parse small as well as large files in the least amount of time with good memory usage if possible. | I have started to use XMLReader to parse the XML files. After doing a bit of googling around found it the best to way parse XML files as it does not load the whole XML file into memory. Say if suppose my XML files was of 5 MB, while parsing it using XMLReader 5MB of my memory does not get wasted.
```
//usage
$xml = new XMLReader();
$xml->XML($xmlString);
while($xml->read())
{
if($xml->localName == 'Something') // check if tag name equals something
{
//do something
}
}
```
Using XML Reader we can find if the current tag is an opening tag or closing tag and do the needful as required. | It depends on the document you're passing, but XMLReader is usually the faster than both simplexml and DOM (<http://blog.liip.ch/archive/2004/05/10/processing_large_xml_documents_with_php.html>). Personally though I've never used XMLReader and usually decided which to use depending on whether or not I need to edit it:
* simplexml if I'm just reading a document
* DOM if I'm modifying the DOM and saving it back
You can also convert objects between simplexml and DOM. | PHP XML Parsing | [
"",
"php",
"xml",
"parsing",
"simplexml",
"domdocument",
""
] |
In [podcast 40](https://blog.stackoverflow.com/2009/02/podcast-40/) one of the things mentioned was checking the difference between files in source control to see what was done the day before.
Are there any good external tools - something I can open all by itself - that will show the diff on everything changed since the last check-in or within a given time frame on a single project?
We're currently using SourceSafe but will hopefully be moving to TFS soon.
Thanks | The question is a little misleading. The term difference/compare tool is usually referred to a tool used to compare the contents of 2 versions of a file or compare the contents of a local file with that in source control repository.
If you want a tool that generates reports of all files modified between two dates or all files changed by a particular user etc., checkout [VSSReporter](http://www.codeproject.com/KB/applications/VssReporter.aspx). | Unfortunately, there aren't any external tools that can integrate within sourcesafe that I know of.
Depending on the question you asked, the best you can do is right click the folder you're interested in, go to differences, and click the recursive button, and uncheck the "same files" checkbox. The other option is to do the same thing, but search for the recursive history between dates.
If you want to see what *you* have done, you can search for all files you have checked out by doing View->Search->Status Search, Files checked out to <your username>, search in current project + subprojects.
This is a really slow operation in SourceSafe, and is much faster in other version-control systems. | External diffence tools for Sourcesafe | [
"",
"c#",
"vb.net",
"diff",
"visual-sourcesafe",
""
] |
I've finally got Intellisense working for JQuery by applying patch KB958502 to Visual Studio 2008 and including this line:
```
/// <reference path="JQuery\jquery-1.3.2.js"/>
```
at the top of my .js files. Now I'm trying to figure out how to get JavaScript intellisense for the client proxies generated by the ScriptManager's ScriptReference elements (as shown here):
```
<asp:ScriptManager ID="ScriptManager1" runat="Server" EnablePartialRendering="false" AsyncPostBackTimeout="999999">
<Services>
<asp:ServiceReference path="../Services/DocLookups.svc" />
</Services>
</asp:ScriptManager>
```
The client proxies are working -- i.e. I can make calls through them, but I'm getting no Intellisense.
My service is defined with a .svc file:
```
<%@ ServiceHost Language="C#" Debug="true" Service="Documents.Services.DocLookups" CodeBehind="~/App_Code/DocLookups.cs" %>
```
The code behind file looks like:
```
[ServiceContract(Namespace = "Documents.Services", Name = "DocLookups")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class DocLookups {
...
```
a sample method in this class is:
```
//Called at the begining of the page to fill in the category list
[OperationContract]
public SelectOption[] GetCategoriesForSelectList()
{
SelectOption[] Result;
IDocumentRepository repository = new DocumentEntityRepository(ConnectionString);
Result = (from cat in repository.GetDocCategories()
select new SelectOption(cat.Category_ID.ToString(), cat.CategoryName)).ToArray();
if (Result.Length > 0)
Result[0].Selected = true; //Select first item
return Result;
}
```
and it uses a data contract defined like this:
```
namespace Documents.Services {
[DataContract]
public class SelectOption
{
//A useful DTO to use when filling a <select> element with options
public SelectOption(string optionValue, string optionText) {
OptionValue = optionValue;
OptionText = optionText;
Selected = false;
}
public SelectOption(string optionValue, string optionText, bool selected) {
OptionValue = optionValue;
OptionText = optionText;
Selected = selected;
}
[DataMember]
public string OptionValue { get; set; }
[DataMember]
public string OptionText { get; set; }
[DataMember]
public bool Selected { get; set; }
}
}
```
In my javascript files, a call to this service looks like:
```
Documents.Services.DocLookups.GetCategoriesForSelectList(...
```
but I get no Intellisense (for example, if I type Documents. nothing pops up). I don't get intellisense for either the generated methods or the [DataContract] types used by the methods.
I believe that I am **supposed** to get Intellisense for these proxies and types, but can't figure out what I might be doing wrong. TIA. | Did
`/// <reference path="../Services/DocLookups.svc" />`
not work? | Thanks to Scott for pointing out that I need to add the
```
///<reference path...
```
line. I don't know where it's documented but I somehow missed that this was required for the WCF generated client-side proxies -- although it makes sense now given the same idiom is used to get the Intellisense for JQuery.
For the record, the line I ended up having to use was slightly different than what Scott suggested given my projects structure. I tried:
```
/// <reference path="../Documents/Services/DocLookups.svc" />
```
I then saved the file and from the VS Edit menu chose **Intellisense**... **Update JScript Intellisense**...
Unfortunately, this did not work and I got the following error when updating the Intellisense:
```
Error updating JScript IntelliSense:
C:\TFSSource\LitigationPortal\Version 1.0\LitigationPortal\Documents\Services\DocLookups.svc:
'Type' is undefined @ 0:0
```
So I've made some progress but I'm not quite there yet. | How do I get intellisense for WCF Ajax Services? | [
"",
"javascript",
"ajax",
"visual-studio-2008",
"wcf",
"intellisense",
""
] |
I am using both the JAI media APIs and ImageMagick.
ImageMagick has some scalability issues and the JNI based JMagick isn't attractive either.
JAI has poor quality results when doing resizing operations compared to ImageMagick.
Does anyone know of any excellent tools either open source or commercial that are native java and deliver high quality results? | There's [ImageJ](http://rsbweb.nih.gov/ij/), which boasts to be the
> world's fastest pure Java image
> processing program
It can be used as a library in another application. Its architecture is not brilliant, but it does basic image processing tasks. | I know this question is quite old, but as new software comes out it does help to get some new links to projects that might be interesting for folks.
[imgscalr](https://github.com/thebuzzmedia/imgscalr) is a pure-Java image resizing (and simple ops like padding, cropping, rotating, brighten/dimming, etc.) library that is painfully simple to use - a single class consists of a set of simple graphics operations all defined as *static* methods to which you pass an image and get back a result.
The most basic example of using the library would look like this:
```
BufferedImage thumbnail = Scalr.resize(image, 150);
```
And a more typical usage to generate image thumbnails using a few quality tweaks and the like might look like this:
```
import static org.imgscalr.Scalr.*;
public static BufferedImage createThumbnail(BufferedImage img) {
// Create quickly, then smooth and brighten it.
img = resize(img, Method.SPEED, 125, OP_ANTIALIAS, OP_BRIGHTER);
// Let's add a little border before we return result.
return pad(img, 4);
}
```
All image-processing operations use the raw Java2D pipeline (which is hardware accelerated on major platforms) and won't introduce the pain of JNI-based native library contention into your code.
imgscalr has also been deployed in large-scale productions in quite a few places - the inclusion of the AsyncScalr class makes it a perfect drop-in for any server-side image processing.
There are numerous tweaks to image-quality you can use to trade off between speed and quality with the highest ULTRA\_QUALITY mode providing a scaled result that looks better than GIMP's Lanczos3 implementation. | What is the best java image processing library/approach? | [
"",
"java",
"image-processing",
"image-manipulation",
""
] |
I have a really weird issue with an SQL query I've been working with for some time. I'm using SQL Server 2005.
Here's an example table from which the query is made:
```
Log:
Log_ID | FB_ID | Date | Log_Name | Log_Type
7 | 4 | 2007/11/8 | Nina | Critical
6 | 4 | 2007/11/6 | John | Critical
5 | 4 | 2007/11/6 | Mike | Critical
4 | 4 | 2007/11/6 | Mike | Critical
3 | 3 | 2007/11/3 | Ben | Critical
2 | 3 | 2007/11/1 | Ben | Critical
```
Now, the idea is to return the first date for the Log\_Person working on each FB\_ID, however, in the case there are several Log\_Names, I only want the SECOND Log\_Name (first time the responsibility is handed over to another). The result should look like this:
```
Desired result
Log_ID | FB_ID | Date | Log_Name | Log_Type
6 | 4 | 2007/11/6 | John | Critical
2 | 3 | 2007/11/1 | Ben | Critical
```
In an earlier thread, Peter Lang and Quassnoi gave brilliant answers which are below. Sadly I can barely understand what happens there, but they worked like a charm. Here's the code:
```
Quassnoi
SELECT lo4.*
FROM
(SELECT CASE WHEN ln.log_id IS NULL THEN lo2.log_id ELSE ln.log_id END
AS log_id, ROW_NUMBER() OVER (PARTITION BY lo2.fb_id ORDER BY lo2.cdate) AS rn
FROM
(SELECT lo.*,
(SELECT TOP 1 log_id
FROM t_log li WHERE li.fb_id = lo.fb_id AND li.cdate >= lo.cdate
AND li.log_id <> lo.log_id AND li.log_name <> lo.log_name
ORDER BY cdate, log_id)
AS next_id
FROM t_log lo)
lo2 LEFT OUTER JOIN t_log ln ON ln.log_id = lo2.next_id)
lo3, t_log lo4
WHERE lo3.rn = 1 AND lo4.log_id = lo3.log_id
---
Peter Lang
SELECT *
FROM log
WHERE log_id IN
(SELECT MIN(log_id) FROM log
WHERE
(SELECT COUNT(DISTINCT log_name)
FROM log log2
WHERE log2.fb_id = log.fb_id ) = 1 OR log.log_name <>
(SELECT log_name FROM log log_3
WHERE log_3.log_id =
(SELECT MIN(log_id)
FROM log log4
WHERE log4.fb_id = log.fb_id ))
GROUP BY fb_id )
```
Now if you've read this far, here's the question. Why do they both work fine, but as soon as I apply other filters on them, everything gets mixed?
I even tried to create a temporary table with the WITH clause, and use Date and Log\_Type filters on that, but it still didn't work. Several results that should've been included with the filters were suddenly left out. When originally I would get the first dated only name, or the second name if there were several from the Log\_Name column, now I would randomly get whatever, if any. In a similar manner, using *WHERE (DATE BETWEEN '2007/11/1' AND '2007/11/30')* would cause a permaloop, where using *WHERE (MONTH(Date) = '11') AND (YEAR(Date) = '2007')* would work fine. But if I added one more filter to the latter option, for example *.. AND WHERE Log\_Type = 'Critical'*, it'd be on permaloop again. Both permaloops happened with Lang's solution.
I need to combine this type of search with another using UNION ALL, so I'm wondering if I'm going to run into any more similarly weird problems in the future with that? There's clearly something I don't understand about SQL here, and my DL for the query is today so I'm kinda stressed out here. Thx for all the help. :)
**Edit:** To clarify. In need the result of the queries above, and those results need to be filtered to 'critical' cases over a given time (month) ONLY.
This will then be united with another search that returns the first time a FB\_ID with 'Support' status (Log\_Type) has been logged. The idea is to give a picture of how many new cases are recorded in the DB each month.
**Edit 2:** Update, Russ Cam's suggestion below is working, but it rules out any FB\_ID's that were first dated outside the given range, even if the query result row where the Log\_Name changes would exist within range. | Just as a test, what happens when you wrap either of their statements in a SELECT statement, effectively turning their statement into a subquery, and then put a WHERE clause on that?
For example,
```
SELECT log.*
FROM
(
SELECT lo4.*
FROM
(SELECT CASE WHEN ln.log_id IS NULL THEN lo2.log_id ELSE ln.log_id END
AS log_id, ROW_NUMBER() OVER (PARTITION BY lo2.fb_id ORDER BY lo2.cdate) AS rn
FROM
(SELECT lo.*,
(SELECT TOP 1 log_id
FROM t_log li WHERE li.fb_id = lo.fb_id AND li.cdate >= lo.cdate
AND li.log_id <> lo.log_id AND li.log_name <> lo.log_name
ORDER BY cdate, log_id)
AS next_id
FROM t_log lo)
lo2 LEFT OUTER JOIN t_log ln ON ln.log_id = lo2.next_id)
lo3, t_log lo4
WHERE lo3.rn = 1 AND lo4.log_id = lo3.log_id
) AS log
WHERE log.Date BETWEEN @start and @end
```
I would anticipate this working
**EDIT:**
Try this version. Set the variables to the desired values. Essentially, we only want to retrieve the few results that satisfy our conditions within each of the subqueries, so that we are not trying to run conditions over a large resultset on columns that are not indexed.
```
DECLARE @log_type CHAR(20) --Set this to the correct datatype
SET @log_type = 'Critical'
DECLARE @start_date DATETIME
SET @start_date = '20 FEB 2009' -- use whichever datetime format is appropriate
DECLARE @end_date DATETIME
SET @end_date = '21 FEB 2009' -- use whichever datetime format is appropriate
SELECT lo4.*
FROM
(
SELECT
CASE WHEN ln.log_id IS NULL THEN lo2.log_id ELSE ln.log_id END AS log_id,
ROW_NUMBER() OVER (PARTITION BY lo2.fb_id ORDER BY lo2.cdate) AS rn
FROM
(
SELECT lo.*,
(SELECT TOP 1 log_id
FROM t_log li
WHERE
li.fb_id = lo.fb_id
AND li.cdate >= lo.cdate
AND li.log_id <> lo.log_id
AND li.log_name <> lo.log_name
AND log_type = @log_type
AND li.cdate BETWEEN @start_date and @end_date
ORDER BY cdate, log_id
) AS next_id
FROM t_log lo
) lo2
LEFT OUTER JOIN
t_log ln
ON ln.log_id = lo2.next_id
/* AND ln.cdate BETWEEN @start_date and @end_date
I think that this would be needed for cases where
the next_id is null
*/
) lo3,
t_log lo4
WHERE
lo3.rn = 1
AND lo4.log_id = lo3.log_id
```
**EDIT 2:**
After considering this some more, it's important to have the answers to further questions. Those questions have already been asked on [Quassnoi's answer](https://stackoverflow.com/questions/569438/weird-sql-issue/569486#569486), and will greatly change the result set returned. In short,
**1. For a specified date range,**
* do both the log date of the original log record and the next log record need to fall into that date range?
* are you looking to only include results where the date of the next log record for each fb\_id is in the date range (i.e. the date of the original log record doesn't matter)?
* are you looking to only include results where the date of the original log record for each fb\_id is in the date range (i.e. the date of the next log record, which is the one that will be returned in the resultset where a handover has taken place, can be after the date range)?
**2. For a specified log type,**
* does the log type for both the original log record and the next log record need to be the same as the specified log type?
* are you looking to include results where the next log record matches the specified log type, irrespective of the log type of the original log record for each fb\_id?
* are you looking to only include results where the original log record matches the specified log type, irrespective of the log type of the next log record?
The answers to these questions will be crucial in how the query is constructed and inferring what the resulting data is actually telling you. | Well, thanks for the compliment, first :)
My query actually looks over all records, selects the next responsible person for each record, and assigns the row number for each responsibility transition for a certain `fb_id`.
If there were no transitions for this `fb_id`, it will be selected as transition to `NULL`.
Then the query selects every first transition (that is with `ROW_NUMBER` of `1`), be it a real transition or fake, and checks if it is real or fake.
If it's real (to a non-`NULL` that is), it returns the id of person who **got** the responsibility; if not, it returns the person who **gave** the responsibility to `NULL` (i. e. did not give it at all).
As you can see, this query relies heavily on the index on `(fb_id, cdate, id)` to search the next responsible person. If you add new conditions, it cannot use this index anymore and becomes slow.
Please clarify which condition you want to add and we'll try to help you again :)
You said you want to add `log_type` in to the query.
How does the `first transition` count? Do you need to return the first transition when both fields are `critical`, or only transitions from `non-critical` to `critical`, or when either one is `critical`?
If you need to add a `date range`, say, for `February` only, should it count the person who got the work in `February` but gave it away in `March`? Or who got the work in `January` and gave it away in `February`?
Meanwhile, try this:
```
SELECT lo4.*
FROM
(
SELECT CASE WHEN ln.log_id IS NULL THEN lo2.log_id ELSE ln.log_id END AS log_id,
ROW_NUMBER() OVER (PARTITION BY lo2.fb_id ORDER BY lo2.cdate) AS rn
FROM
(
SELECT
lo.*,
(
SELECT TOP 1 log_id
FROM t_log li
WHERE li.fb_id = lo.fb_id
AND li.cdate >= CASE WHEN lo.cdate < @range THEN @range ELSE lo.cdate END
AND li.cdate < DATEADD(month, 1, @range)
AND li.log_id <> lo.log_id
AND li.log_name <> lo.log_name
ORDER BY
cdate, log_id
) AS next_id
FROM t_log lo
WHERE lo.cdate >= @range
AND lo.cdate < DATEADD(month, 1, @range)
) lo2
LEFT OUTER JOIN t_log ln ON ln.log_id = lo2.next_id
) lo3,
t_log lo4
WHERE lo3.rn = 1
AND lo4.log_id = lo3.log_id
```
As you can see, there are two checks for date ranges there.
The one in the `inner` subquery filters out transitions where the `recipient` is out of date range.
The one in the `outer` query filters out transitions where the `sender` is out of range. | Weird SQL issue | [
"",
"sql",
""
] |
I am currently in a try catch finding if a property has been set properly to the bool value that it should be like this...
```
public void RunBusinessRule(MyCustomType customType)
{
try
{
if (customType.CustomBoolProperty == true)
{
DoSomething();
}
else
{
throw new Exception("This is obviously false or possibly null lets throw up an error.");
}
}
catch(Exception)
{
throw;
}
}
```
Now the deal with throwing this error for me is that I am using Microsoft's source analysis and it gives me an error stating "CA2201 : Microsoft.Usage : Object.RunBusinessRule(MyCustomType) creates an exception of type 'Exception', an exception type that is not sufficiently specific and should never be raised by user code. If this exception instance might be thrown, use a different exception type."
Soooo What exception should I throw that would be specific enough for Microsoft.., for the circumstance of throwing an error about my own application's logic handling and when I want to "throw". | ```
ArgumentException
InvalidOperationException
FormatException
```
The passed in argument wasn't good. | Should you be throwing an exception at all?
Having a false boolean value isn't exactly an exceptional circumstance.
**EDIT**
My original answer was a bit terse so I'll elaborate...
From your example it's not clear what the actual objects, properties and methods represent. Without this information, it's difficult to say what type of exception, if any, is appropriate.
eg, I'd consider the following a perfectly valid use of an exception (and your real code might well look something like this, but we can't tell from your example):
```
public void UpdateMyCustomType(MyCustomType customType)
{
if (!customType.IsUpdateable)
throw new InvalidOperationException("Object is not updateable.");
// customType is updateable, so let's update it
}
```
But in the general case, without knowing more about your domain model, I'd say that something like this (a false boolean value) isn't really exceptional. | C# what kind of exception should I raise? | [
"",
"c#",
""
] |
I've got geodjango running using [openlayers](http://openlayers.org/) and [OpenStreetMaps](http://www.openstreetmap.org/) with the admin app.
Now I want to write some views to display the data. Basically, I just want to add a list of points (seen in the admin) to the map.
Geodjango appears to use a *special* [openlayers.js](http://code.djangoproject.com/browser/django/tags/releases/1.0.2/django/contrib/gis/templates/gis/admin/openlayers.js) file to do its magic in the admin. Is there a good way to interface with this?
How can I write a view/template to display the geodjango data on a open street map window, as is seen in the admin?
At the moment, I'm digging into the [openlayers.js](http://openlayers.org/) file and api looking for an 'easy' solution. (I don't have js experience so this is taking some time.)
The current way I can see to do this is add the following as a template, and use django to add the code needed to display the points. (Based on the example [here](http://openlayers.org/dev/examples/vector-features.html))
```
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Draw Feature Example</title>
<script src="http://www.openlayers.org/api/OpenLayers.js"></script>
<script type="text/javascript">
var map;
function init(){
map = new OpenLayers.Map('map');
var layer = new OpenLayers.Layer.WMS( "OpenLayers WMS",
"http://labs.metacarta.com/wms/vmap0", {layers: 'basic'} );
map.addLayer(layer);
/*
* Layer style
*/
// we want opaque external graphics and non-opaque internal graphics
var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
layer_style.fillOpacity = 0.2;
layer_style.graphicOpacity = 1;
/*
* Blue style
*/
var style_blue = OpenLayers.Util.extend({}, layer_style);
style_blue.strokeColor = "blue";
style_blue.fillColor = "blue";
style_blue.graphicName = "star";
style_blue.pointRadius = 10;
style_blue.strokeWidth = 3;
style_blue.rotation = 45;
style_blue.strokeLinecap = "butt";
var vectorLayer = new OpenLayers.Layer.Vector("Simple Geometry", {style: layer_style});
// create a point feature
var point = new OpenLayers.Geometry.Point(-111.04, 45.68);
var pointFeature = new OpenLayers.Feature.Vector(point,null,style_blue);
// Add additional points/features here via django
map.addLayer(vectorLayer);
map.setCenter(new OpenLayers.LonLat(point.x, point.y), 5);
vectorLayer.addFeatures([pointFeature]);
}
</script>
</head>
<body onload="init()">
<div id="map" class="smallmap"></div>
</body>
</html>
```
Is this how it's done, or is there a better way? | I think your solution is workable and probably the easiest approach. Just templatize the javascript and use Django to inject your data points as the template is rendered.
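One way to picture that injection step — plain Python string formatting standing in for the Django template engine, with a hypothetical list of `(lon, lat)` tuples such as your view might pass into the template context:

```python
def render_point_features(points):
    """Emit the OpenLayers JavaScript for a list of (lon, lat) tuples."""
    lines = []
    for lon, lat in points:
        lines.append(
            "vectorLayer.addFeatures([new OpenLayers.Feature.Vector("
            "new OpenLayers.Geometry.Point(%.5f, %.5f), null, style_blue)]);"
            % (lon, lat)
        )
    return "\n".join(lines)


# In a real Django template this loop would be a {% for %} over the context data.
print(render_point_features([(-111.04, 45.68), (-110.50, 45.10)]))
```

The same loop, written with template tags inside the `<script>` block, is all the "magic" the templated approach needs.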
If you wanted to get fancier, you could have a Django view that served up the data points as JSON (application/json) and then use AJAX to call back and retrieve the data based on events that are happening in the browser. If you want your application to be highly interactive above and beyond what OpenLayers provides, this might be worth the added complexity, but of course it all depends on the needs of your application. | Another solution is to create a form that utilizes the GeoDjango Admin widget.
To do this, I:
Setup a GeneratePolygonAdminClass:
```
class GeneratePolygonAdmin(admin.GeoModelAdmin):
list_filter=('polygon',)
list_display=('object', 'polygon')
```
Where the form is built:
```
geoAdmin=GeneratePolygonAdmin(ModelWithPolygonField, admin.site)
PolygonFormField=GeneratePolygon._meta.get_field('Polygon')
PolygonWidget=geoAdmin.get_map_widget(PolygonFormField)
Dict['Polygon']=forms.CharField(widget=PolygonWidget()) #In this case, I am creating a Dict to use for a dynamic form
```
Populating the widget of the form:
```
def SetupPolygonWidget(form, LayerName, MapFileName, DefaultPolygon=''):
form.setData({'Polygon':DefaultPolygon})
form.fields['Polygon'].widget.params['wms_layer']=LayerName
form.fields['Polygon'].widget.params['wms_url']='/cgi-bin/mapserv?MAP=' + MapFileName
form.fields['Polygon'].widget.params['default_lon']=-80.9
form.fields['Polygon'].widget.params['default_lat']=33.7
form.fields['Polygon'].widget.params['default_zoom']=11
form.fields['Polygon'].widget.params['wms_name']=YOURWMSLayerName
form.fields['Polygon'].widget.params['map_width']=800
form.fields['Polygon'].widget.params['map_height']=600
form.fields['Polygon'].widget.params['map_srid']=YOUR_SRID
form.fields['Polygon'].widget.params['modifiable']=True
form.fields['Polygon'].widget.params['map_options']={}
form.fields['Polygon'].widget.params['map_options']['buffer'] = 0
return form
```
Based on the code at:
<http://code.djangoproject.com/browser/django/branches/gis/django/contrib/gis/admin/options.py?rev=7980>
It looks like you can use the extra\_js option to include OpenStreetMap (I have not tested this). | How to display data using openlayers with OpenStreetMap in geodjango? | [
"",
"python",
"mapping",
"openlayers",
"geodjango",
""
] |
Is this pattern for mutual exclusion as safe as I think it is? If so, what do you call it?
```
lock (_lock) {
if (_flag) return;
else _flag = true;
}
try {
//critical code...
}
finally {
_flag = false;
}
```
I want to ensure the critical section, but without the other threads piling up waiting to acquire the lock. Obviously I make sure the flag is set nowhere else. Is there a better way? | No, that is not safe. If you want to ensure mutually exclusive without blocking, you can use Monitor.TryEnter:
```
if (Monitor.TryEnter(lockObj, 0)) {
// got the lock !
try {
// code
}
finally { // release the lock
Monitor.Exit(lockObj);
}
}
``` | Have you looked at [`Monitor.TryEnter`](http://msdn.microsoft.com/en-us/library/4tssbxcw.aspx)? | Mutual exclusion: is this safe? | [
"",
"c#",
".net",
"multithreading",
"mutual-exclusion",
""
] |
I have an asp:button which is inside an asp:hyperlink. When you click the button in firefox, it goes to the correct url specified by the asp:hyperlink, but if you click the button in internet explorer, it stays on the same page. I am just using the PostBackUrl property on the button to resolve the issue, but here is an example of the code:
```
<asp:Hyperlink ID="hyp" runat="server" NavigateUrl="Page2.aspx">
<asp:Button ID="btn" runat="server" Text="Submit" /></asp:Hyperlink>
```
**Why does the above work in Firefox, but not IE?** | What you did is not quite correct.
Just add the button and in its click handler do:
```
Response.Redirect("Page2.aspx");
```
Alternatively you can write a line of javascript:
```
<input type="button" value="Text" onclick="location='Page2.aspx'" />
``` | Is there a reason why you are using a button inside a hyperlink? Depending on the design you are trying to achive I would use just a Button or a LinkButton and then do a redirect after your logic in the codebehind
```
<asp:Button runat='server' id='button1' Text='Click Me' OnClick='button1_Click' />
<asp:LinkButton runat='server' id='linkbutton1' Text='Click Me' OnClick='button1_Click' />
```
**Code-Behind**
```
protected void button1_Click(object sender, EventArgs e) {
// some logic
Response.Redirect("Page2.aspx");
}
```
**Firefox vs Internet Explorer**
I suspect you're having discrepancies between Firefox and Internet Explorer because of the way the events are bubbled/propagated between the browsers. If you would like to cancel the propagation of the event, you would need to include a call to event.preventDefault() or event.stopPropagation() in your button click event handler (in JavaScript). | asp:Button inside asp:hyperlink does not navigate to page in internet explorer | [
"",
"c#",
"asp.net",
""
] |
Can a PHP script unserialize a Storable file created with Perl? | No, but you can dump PHP-readable data from Perl with [PHP::Serialization](http://search.cpan.org/perldoc?PHP::Serialization). You also might want to pick something more standard, like YAML or JSON. Pretty much any language can understand those. | You could use JSON as a lingua-franca between the two languages, I suggest [JSON::XS](http://search.cpan.org/%7Emlehmann/JSON-XS-2.2311/XS.pm) on the Perl side (with subroutines implemented in C/C++) for performances, then you can read back (in PHP) the JSON with [this extension](https://www.php.net/json). | Can a PHP script unserialize a Storable file created with Perl? | [
"",
"php",
"perl",
"serialization",
"storable",
""
] |
I am familiar with concurrent programming in Java, which provides a lot of tools for it. However, getting started with concurrent programming in C++ isn't so easy.
What is the best way to start programming concurrently in C++? Are there any nice libraries which wrap concurrent programming primitives and provide you with more high-level constructs?
I tried [QtConcurrent](http://labs.trolltech.com/page/Projects/Threads/QtConcurrent) which provides you with nice MapReduce functionality but it is heavily biased towards concurrent computation using Qt so it's not a good choice if you don't want to use Qt.
Are there any other similar libraries? What do people use here?
Thanks for your help in advance,
sneg | There are several choices:
[ACE](http://www.cs.wustl.edu/~schmidt/ACE.html) which provides some [concurrency constructs](http://www.cs.wustl.edu/~schmidt/ACE-papers.html#concurrency)
Intel [Threading Building Blocks](http://www.threadingbuildingblocks.org/)
[boost::threads](http://www.boost.org/doc/libs/1_38_0/doc/html/thread.html)
[OpenMP](http://openmp.org/wp/)
[Qt Threading libraries](https://doc.qt.io/qt-5/threads.html) | Morendil's suggestion (CSP - communicating sequential processes) is truly interesting to take a look at - it's a very different view of threading and one that works well once you wrap your head around it. I first encountered it in the rather esoteric Occam language for Transputers, but the idea has stuck with me.
A more conventional idea: [boost::threads](http://www.boost.org/doc/libs/1_38_0/doc/html/thread.html) work quite well for building thread-based concurrent programs. It's quite low level though.
[OpenMP](http://openmp.org/wp/) is at a higher level than threads and also quite well-supported. | Platform-independent concurrent programming libraries for C++ | [
"",
"c++",
"cross-platform",
"parallel-processing",
""
] |
I don't know how to retrieve the first item from this collection:
```
IGrouping<string, Plantilla> groupCast = group as System.Linq.IGrouping<string, Plantilla>;
```
I also tried:
```
IGrouping<string, Plantilla> firstFromGroup = groupCast.FirstOrDefault();
```
but it doesn't work because an explicit conversion already exists | Why not just use `var`?
```
var firstFromGroup = group.First();
```
As for the reason you're getting an error, I'm guessing either the Key or Element is different than what you think they are. Take a look at the rest of the error message to see what types the compiler is complaining about. Note that if there is an anonymous type involved, the only way to get it is using `var`. | Try this (based on your partial solution):
```
foreach (var group in dlstPlantillas.SelectedItems)
{
var groupCast = group as System.Linq.IGrouping<string, Plantilla>;
if(groupCast == null) return;
var item = groupCast.FirstOrDefault<Plantilla>();
if(item == null) return;
// do stuff with item
}
``` | How can I retrieve first item from a Collection? | [
"",
"c#",
"collections",
"c#-3.0",
""
] |
We have a requirement to do some Rijndael development in Java.
Any recommendations for articles, libraries etc. that would help us?
Any pointers to keystore maintenance and how store the keys securely?
Edit:
It would need to be open source. Essentially, it's just standard encrypt / decrypt of data using Rijndael. | Java includes AES out of the box. Rijndael is AES. You don't need any external libraries. You just need something like this:
```
byte[] sessionKey = null; //Where you get this from is beyond the scope of this post
byte[] iv = null ; //Ditto
byte[] plaintext = null; //Whatever you want to encrypt/decrypt
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
//You can use ENCRYPT_MODE or DECRYPT_MODE
cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(sessionKey, "AES"), new IvParameterSpec(iv));
byte[] ciphertext = cipher.doFinal(plaintext);
```
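For completeness, here's a hedged, self-contained sketch of where `sessionKey` and `iv` might come from — a freshly generated random AES-128 key and a random IV (the class and variable names are just illustrative), with a round-trip to show that decryption recovers the plaintext:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Hypothetical helper around the Cipher call shown above.
public class AesRoundTrip {
    static byte[] crypt(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                       // 128-bit keys avoid the unlimited-strength policy issue
        byte[] sessionKey = kg.generateKey().getEncoded();
        byte[] iv = new byte[16];           // AES block size is 16 bytes
        new SecureRandom().nextBytes(iv);

        byte[] ciphertext = crypt(Cipher.ENCRYPT_MODE, sessionKey, iv, "secret".getBytes());
        byte[] plaintext = crypt(Cipher.DECRYPT_MODE, sessionKey, iv, ciphertext);
        System.out.println(Arrays.equals(plaintext, "secret".getBytes())); // prints "true"
    }
}
```

Remember that the IV doesn't need to be secret, but it should be unique per message; it's typically sent alongside the ciphertext.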
And that's it, for encryption/decryption. If you are processing large amounts of data then you're better off reading chunks that are multiples of 16 bytes and calling update instead of doFinal (you just call doFinal on the last block). | For a great free library, I highly recommend [BouncyCastle.](http://www.bouncycastle.org/) It is actively maintained, high quality, and has a nice array of code examples. For reference documentation, you'll have to rely more on the general [JCE docs.](http://java.sun.com/j2se/1.5.0/docs/guide/security/jce/JCERefGuide.html)
I can't say what library we use to meet FIPS certification requirements. But there are alternatives to CryptoJ that are much, much cheaper.
In general, I'd recommend generating a new key for each message you encrypt with a symmetric cipher like Rijndael, and then encrypting that key with an asymmetric algorithm like RSA. These private keys can be stored in a password-protected, software-based key store like PKCS #12 or Java's "JKS", or, for better security, on "smart card" hardware token or other crypto hardware module. | Rijndael support in Java | [
"",
"java",
"cryptography",
"rijndael",
""
] |
This small SQL error is bugging me. It doesn't seem to be a problem with the query, just the scope(?), examples work best:
```
SELECT ocp.*, oc.*, GROUP_CONCAT( u.username SEPARATOR ', ') AS `memjoined`
FROM gangs_ocs_process ocp, gangs_ocs oc
LEFT JOIN users u ON u.userid IN ( ocp.membersin )
WHERE ocp.ocid =1 AND ocp.gangid =1 AND oc.oc_name = ocp.crimename
GROUP BY ocp.ocid
LIMIT 0 , 30
```
There's a column (gangs\_ocs\_process.membersin) which has a list of IDs that have joined (i.e. 1,2,5). I'm trying to get the usernames for each of these IDs (from the `users` table) in one go.
The problem is `LEFT JOIN users u ON u.userid IN ( ocp.membersin )`
If I substitute `1,2,4` in for `ocp.membersin` (putting the literal list instead of the column name), it works OK. It returns a column that has the usernames ([image](http://decepti0n.com/u/f/tmp_fTceB.jpg)). However, if I leave in the ocp.membersin, I get this error:
`#1054 - Unknown column 'ocp.membersin' in 'on clause'`
This is the first time I've even used IN in left joins so I'm a bit lost.
Any help would be great :) | I don't think that "IN" will work for this syntax. MySQL expects IN to be something akin to a dataset, not a delimited string. I think you need to find a way to take membersin, expand it into a dataset MySQL can work with (maybe a temporary table), and join on that. | If you have delimited strings in your table, you have a design problem in your database. Add a new table to hold these values. | SQL problem, LEFT JOIN [..] IN() | [
"",
"mysql",
"sql",
"join",
"mysql-error-1054",
""
] |
I find that using the following:
```
TreeViewItem i = sender as TreeViewItem;
if(i != null){ ... }
```
is easier to write and understand than:
```
if(sender.GetType() == typeof(TreeViewItem)){
TreeViewItem i = (TreeViewItem)sender;
...
}
```
Are there compelling reasons *not* to use the first construct? | I prefer casts to `as` in most cases because usually if the type of the object is wrong, that indicates a bug. Bugs *should* cause exceptions IMO - and an `InvalidCastException` at exactly the line which performs the cast is a lot clearer than a `NullReferenceException`much later in the code.
`as` should be used when it's valid and legal to have been passed a reference to an object of the type that you don't want. That situation does come up, but not as often as normal casting in my experience.
Comparing types using `GetType()`, however, is very rarely the right solution - it's only appropriate when you want to check for the *exact* type involved rather than a compatible type.
I've written a [significantly longer answer](https://stackoverflow.com/questions/496096/casting-vs-using-the-as-keyword-in-the-clr/496167#496167) about the "cast vs as" discussion elsewhere. | Not at all - it gives you the chance to verify that the conversion (cast) was done OK. If you do
```
TreeViewItem i = (TreeViewItem) sender;
```
you might get an exception if the cast fails. | Are there compelling reasons AGAINST using the C# keyword "as"? | [
"",
"c#",
".net",
"casting",
"types",
"as-keyword",
""
] |
I want to read two properties from a worker thread. Here it is an example of the class:
```
public partial class FrmPrincipal : Form
{
private int Height;
private int Width;
private string token;
public FrmPrincipal()
{
InitializeComponent();
...
}
private void menuItem1_Click(object sender, EventArgs e)
{
Thread t = new Thread(RequestImage);
t.Start();
}
private void RequestImage()
{
try
{
...
// I want to read these properties
int alto = this.Height;
int ancho = this.Width;
this.token = "...";
...
}
catch (Exception ex)
{
...
}
}
```
When menuItem1\_Click is executed, it starts a new thread using the RequestImage method. In this method I need to read this.Height and this.Width and update the value of this.token.
How can I do this?
I'm programming an app for windows mobile (Compact Framework 2.0).
Thanks! | I suggest you capture them in the click handler, and pass them into a delegate for the new thread. This is most easily done with a lambda expression for the actual ThreadStart:
```
public partial class FrmPrincipal : Form
{
private string token;
public FrmPrincipal()
{
InitializeComponent();
...
}
private void menuItem1_Click(object sender, EventArgs e)
{
int width = Width;
int height = Height;
Thread t = new Thread(() => RequestImage(width, height));
t.Start();
}
private void RequestImage(int width, int height)
{
try
{
...
int alto = height;
int ancho = width;
this.token = "...";
...
}
catch (Exception ex)
{
...
}
}
}
``` | Your code as posted doesn't require anything additional. However, it will only work properly if no other code will access the token member while the thread is running. Shared read/write access to a variable needs to be protected by a lock. But that's not all, you'll also have to ensure that the threads are synchronized properly, the thread that reads "token" should probably wait until the worker thread updated the value. Google for "producer consumer pattern" and you'll find plenty of literature on the subject.
Assuming in this case you need some kind of code in the UI thread to wait for RequestImage() to complete, then use its result, the easiest way to handle the synchronization is to let RequestImage() call Control.BeginInvoke() when it completes the job.
Note that you'll also need to handle the case where the UI thread terminates before the worker thread is completed. Not doing this is likely to produce an ObjectDisposed exception. The Q&D solution for that is to set the thread's IsBackground property to True. Do make sure that nothing nasty happens when the thread gets aborted. | c# threads and properties | [
"",
"c#",
"winforms",
"multithreading",
""
] |
I plan on making a multi comparison program. It will compare multiple files by displaying N number of files in a grid where N = X \* Y. X and Y are the width and height of the grid elements. Easy enough, I know how to do this pretty much.
The question:
What is the best way to highlight individual characters in each of these grid elements? I plan on highlighting matching text that is found in the same position. | I'd use a [JTextPane](http://java.sun.com/docs/books/tutorial/uiswing/components/editorpane.html) rather than a JTextArea, and read up on the StyledDocument class. This will give you all sorts of options. | You could use a `JTextArea` with a `Highlighter`. See the second example on [this page](http://java.sun.com/docs/books/tutorial/uiswing/components/textfield.html) for how. | Swing: Which objects for a multi panel display and highlighting text | [
"",
"java",
"swing",
""
] |
Does anyone know where I might find a PHP matrix math library which is still actively maintained?
I need to be able to do the basic matrix operations like reduce, transpose (including non-square matrices), invert, determinant, etc.
This question was asked in the past, then closed with no answers. Now I need an answer to the same question. See these links to related questions:
[Matrix artihmetic in PHP?](https://stackoverflow.com/questions/428473/matrix-artihmetic-in-php)
<https://stackoverflow.com/questions/435074/matrix-arithmetic-in-php-again>
I was in the process of installing the pear Math\_Matrix library when I saw these and realized it wouldn't help me. (Thanks Ben for putting that comment about transpose in your question.)
I can code this stuff myself, but it would make me happier to see that there is a library for this somewhere. | You might do better to do your matrix manipulations in another language and call that code from PHP. The PHP community isn't typically concerned with matrix computation, so I imagine it will be challenging to find what you want. But there are plenty of math libraries in other languages. For example, you might try Python (SciPy), though I don't know how hard it is to mix PHP and Python. I don't know PHP, but most languages have a way to call C, and from C you could call the Gnu Scientific Library, for example.
By the way, there's hardly ever a reason to invert a matrix. Most problems that appear to require matrix inversion actually require solving linear systems. The latter is more stable. Also, some libraries may not have a matrix inversion routine per se because they assume people will use a factorization routine (e.g. Cholesky) and repeated solve systems of equations. | I've used this one which is quite good:
<https://github.com/mcordingley/LinearAlgebra>
Also this one looks ok:
<http://numphp.org/> | Looking for actively maintained matrix math library for php | [
"",
"php",
"math",
"matrix",
""
] |
In my Java app I need to get some files and directories.
This is the program structure:
```
./main.java
./package1/guiclass.java
./package1/resources/resourcesloader.java
./package1/resources/repository/modules/ -> this is the dir I need to get
./package1/resources/repository/SSL-Key/cert.jks -> this is the file I need to get
```
`guiclass` loads the resourcesloader class which will load my resources (directory and file).
As to the file, I tried
```
resourcesloader.class.getClass().getResource("repository/SSL-Key/cert.jks").toString()
```
in order to get the real path, but this way does not work.
I have no idea which path to use for the directory. | Supply the path relative to the classloader, not the class you're getting the loader from. For instance:
```
resourcesloader.class.getClassLoader().getResource("package1/resources/repository/SSL-Key/cert.jks").toString();
``` | I had problems with using the `getClass().getResource("filename.txt")` method.
Upon reading the Java docs instructions, if your resource is not in the same package as the class you are trying to access the resource from, then you have to give it a path starting with `'/'`. The recommended strategy is to put your resource files under a "resources" folder in the root directory. So for example if you have the structure:
```
src/main/com/mycompany/myapp
```
then you can add a resources folder as recommended by maven in:
```
src/main/resources
```
furthermore you can add subfolders in the resources folder
```
src/main/resources/textfiles
```
and say that your file is called `myfile.txt` so you have
```
src/main/resources/textfiles/myfile.txt
```
Now here is where the stupid path problem comes in. Say you have a class in your `com.mycompany.myapp` package, and you want to access the `myfile.txt` file from your resource folder. Some say you need to give the:
```
"/main/resources/textfiles/myfile.txt" path
```
or
```
"/resources/textfiles/myfile.txt"
```
both of these are wrong. After I ran `mvn clean compile`, the files and folders are copied in the:
```
myapp/target/classes
```
folder. But the resources folder is not there, just the folders in the resources folder. So you have:
```
myapp/target/classes/textfiles/myfile.txt
myapp/target/classes/com/mycompany/myapp/*
```
so the correct path to give to the `getClass().getResource("")` method is:
```
"/textfiles/myfile.txt"
```
here it is:
```
getClass().getResource("/textfiles/myfile.txt")
```
This will no longer return null, but will return the URL of your resource.
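To see the classpath-root behaviour described above in isolation, here's a hedged, self-contained sketch (all paths and names are illustrative, and it assumes Java 9+ for `readAllBytes`): it fakes a `target/classes`-style directory in a temp folder and loads a resource from it through a `URLClassLoader`. Note that `ClassLoader.getResourceAsStream` takes the path *without* a leading slash, while `Class.getResourceAsStream` needs the leading slash for an absolute path:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical demo: builds a throwaway classpath root in a temp directory
// (standing in for target/classes) and loads a resource from it.
public class ResourceDemo {
    static String readResource(ClassLoader loader, String path) throws IOException {
        try (InputStream in = loader.getResourceAsStream(path)) {
            if (in == null) throw new FileNotFoundException(path);
            return new String(in.readAllBytes()); // Java 9+
        }
    }

    public static void main(String[] args) throws Exception {
        Path root = Files.createTempDirectory("cp-root");        // plays the role of target/classes
        Files.createDirectories(root.resolve("textfiles"));
        Files.write(root.resolve("textfiles/myfile.txt"), "hello".getBytes());

        try (URLClassLoader loader = new URLClassLoader(new URL[]{ root.toUri().toURL() })) {
            // no leading slash: ClassLoader paths are always relative to the classpath root
            System.out.println(readResource(loader, "textfiles/myfile.txt")); // prints "hello"
        }
    }
}
```

Only what sits under the classpath root matters; the `resources` folder name itself never appears in the lookup path.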
It is strange to me that the `"resources"` folder is not copied as well, but only the subfolders and files directly in the `"resources"` folder. It would seem logical to me that the `"resources"` folder would also be found under `"myapp/target/classes"`. | open resource with relative path in Java | [
"",
"java",
"resources",
"loading",
""
] |
Consider this example schema:
```
Customer ( int CustomerId pk, .... )
Employee ( int EmployeeId pk,
int CustomerId references Customer.CustomerId, .... )
WorkItem ( int WorkItemId pk,
int CustomerId references Customer.CustomerId,
null int EmployeeId references Employee.EmployeeId, .... )
```
Basically, three tables:
* A customer table with a primary key and some additional columns
* A employee table with a primary key, a foreign key constraint reference to the customer tables primary key, representing an employee of the customer.
* A work item table, which stores work done for the customer, and also info about the specific employee who the work was performed for.
My question is. How do I, on a database level, test if an employee is actually associated with a customer, when adding new work items.
If for example Scott (employee) works at Microsoft (customer), and Jeff (employee) works at StackOverflow (customer), how do I prevent somebody from adding a work item into the database, with customer = Microsoft, and employee = Jeff, which do not make sense?
Can I do it with check constraints or foreign keys or do I need a trigger to test for it manually?
I should mention that I use SQL Server 2008.
**UPDATE:** I should add that WorkItem.EmployeeId can be null.
Thanks, Egil. | Wouldn't a foreign key on a composite column (CustomerId, EmployeeId) work?
```
ALTER TABLE WorkItem
ADD CONSTRAINT FK_Customer_Employee FOREIGN KEY (CustomerId, EmployeeId)
REFERENCES Employee (CustomerId, EmployeeId);
``` | You might be able to do this by creating a view "WITH SCHEMABINDING" that spans those tables and enforces the collective constraints of the individual tables. | How do I check constraints between two tables when inserting into a third table that references the other two tables? | [
"",
"sql",
"sql-server",
"database-design",
"foreign-keys",
"check-constraints",
""
] |
The IMAP specification ([RFC 2060](http://www.faqs.org/rfcs/rfc2060.html), 5.1.3. Mailbox International Naming Convention) describes how to handle non-ASCII characters in folder names. It defines a **modified** UTF-7 encoding:
> By convention, international mailbox
> names are specified using a
> modified version of the UTF-7 encoding
> described in [UTF-7]. The purpose
> of these modifications is to correct
> the following problems with UTF-7:
>
> 1. UTF-7 uses the "+" character for shifting; this conflicts with
> the common use of "+" in mailbox names, in particular USENET
> newsgroup names.
> 2. UTF-7's encoding is BASE64 which uses the "/" character; this
> conflicts with the use of "/" as a popular hierarchy delimiter.
> 3. UTF-7 prohibits the unencoded usage of "\"; this conflicts with
> the use of "\" as a popular hierarchy delimiter.
> 4. UTF-7 prohibits the unencoded usage of "~"; this conflicts with
> the use of "~" in some servers as a home directory indicator.
> 5. UTF-7 permits multiple alternate forms to represent the same
> string; in particular, printable US-ASCII characters can be
> represented in encoded form.
>
> In modified UTF-7, printable US-ASCII characters except for "&" represent themselves;
> that is, characters with octet values 0x20-0x25
> and 0x27-0x7e. The character "&"
> (0x26) is represented by the two-octet sequence "&-".
>
> All other characters (octet values
> 0x00-0x1f, 0x7f-0xff, and all Unicode 16-bit octets) are represented
> in modified BASE64, with a further
> modification from [UTF-7] that "," is
> used instead of "/".
> Modified BASE64 MUST NOT be used to represent
> any printing US-ASCII character
> which can represent itself.
>
> "&" is used to shift to modified
> BASE64 and "-" to shift back to US-ASCII. All names start in US-ASCII,
> and MUST end in US-ASCII (that is,
> a name that ends with a Unicode 16-bit
> octet MUST end with a "-").
Before I'll start implementing it, my question: is there some **.NET code/library** out there (or even in the framework) that does the job? I couldn't find .NET resources (only [implementations for other languages/frameworks](http://search.cpan.org/~pmakholm/Encode-IMAPUTF7-1.01/lib/Encode/IMAPUTF7.pm)).
*Thank you!* | ```
//
// ImapEncoding.cs
//
// Author: Jeffrey Stedfast <jestedfa@microsoft.com>
//
// Copyright (c) 2013-2019 Microsoft Corp. (www.microsoft.com)
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
using System.Text;
namespace MailKit.Net.Imap {
static class ImapEncoding
{
const string utf7_alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";
static readonly byte[] utf7_rank = {
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,
255,255,255,255,255,255,255,255,255,255,255, 62, 63,255,255,255,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61,255,255,255,255,255,255,
255, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,255,255,255,255,255,
255, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,255,255,255,255,255,
};
public static string Decode (string text)
{
var decoded = new StringBuilder ();
bool shifted = false;
int bits = 0, v = 0;
int index = 0;
char c;
while (index < text.Length) {
c = text[index++];
if (shifted) {
if (c == '-') {
// shifted back out of modified UTF-7
shifted = false;
bits = v = 0;
} else if (c > 127) {
// invalid UTF-7
return text;
} else {
byte rank = utf7_rank[(byte) c];
if (rank == 0xff) {
// invalid UTF-7
return text;
}
v = (v << 6) | rank;
bits += 6;
if (bits >= 16) {
char u = (char) ((v >> (bits - 16)) & 0xffff);
decoded.Append (u);
bits -= 16;
}
}
} else if (c == '&' && index < text.Length) {
if (text[index] == '-') {
decoded.Append ('&');
index++;
} else {
// shifted into modified UTF-7
shifted = true;
}
} else {
decoded.Append (c);
}
}
return decoded.ToString ();
}
static void Utf7ShiftOut (StringBuilder output, int u, int bits)
{
if (bits > 0) {
int x = (u << (6 - bits)) & 0x3f;
output.Append (utf7_alphabet[x]);
}
output.Append ('-');
}
public static string Encode (string text)
{
var encoded = new StringBuilder ();
bool shifted = false;
int bits = 0, u = 0;
for (int index = 0; index < text.Length; index++) {
char c = text[index];
if (c >= 0x20 && c < 0x7f) {
// characters with octet values 0x20-0x25 and 0x27-0x7e
// represent themselves while 0x26 ("&") is represented
// by the two-octet sequence "&-"
if (shifted) {
Utf7ShiftOut (encoded, u, bits);
shifted = false;
bits = 0;
}
if (c == 0x26)
encoded.Append ("&-");
else
encoded.Append (c);
} else {
// base64 encode
if (!shifted) {
encoded.Append ('&');
shifted = true;
}
u = (u << 16) | (c & 0xffff);
bits += 16;
while (bits >= 6) {
int x = (u >> (bits - 6)) & 0x3f;
encoded.Append (utf7_alphabet[x]);
bits -= 6;
}
}
}
if (shifted)
Utf7ShiftOut (encoded, u, bits);
return encoded.ToString ();
}
}
}
``` | This is too specialized to be present in a framework. There might be something on codeplex though many incomplete "implementations" I've seen don't do bother with the conversion at all and will happily pass all non-us-ascii characters on to the IMAP server.
However I've implemented this in the past and it is really just 30 lines of code. You go through all characters in a string and output them if they fall in the range between 0x20 and 0x7e (don't forget to append "-" after the "&") otherwise collect all non-us-ascii and convert them using UTF7 (or UTF8 + base64, I'm not quite sure here) replacing "/" with ",". Additionally you need to maintain "shifted state", e.g. whether you're currently encoding non-us-ascii or outputting us-ascii and append transition tokens "&" and "-" on state change. | IMAP folder path encoding (IMAP UTF-7) for .NET? | [
"",
"c#",
".net",
"encoding",
"imap",
"utf-7",
""
] |
What do you use to minimize and compress JavaScript libraries? | I use [YUI Compressor](https://github.com/yui/yuicompressor). Seems to get the job done well! | I've used [YUI Compressor](https://github.com/yui/yuicompressor) for a long time and have had no problems with it, but have recently started using [Google Closure Compiler](http://code.google.com/closure/compiler/) and had some success with it. My impressions of it so far:
* It generally outperforms YUI Compressor in terms of file size reduction. By a small amount on simple mode, and by a lot on advanced mode.
* Simple mode has so far been as reliable as YUI Compressor. Nothing I've fed it has shown any problems.
* Advanced "compilation" mode is great for some scripts, but the dramatic size reduction of your script comes at the expense of a lot of meddling with your code that stands a decent chance of breaking it. There are [ways to deal with some of these problems](http://code.google.com/closure/compiler/docs/api-tutorial3.html) and understanding what it's doing can go a long way to avoiding problems but I generally avoid using this mode.
I've moved over to using Google Closure Compiler in simple "compilation" mode, because it slightly outperforms YUI Compressor in general. I have used it considerably less than I have YUI Compressor but from what I've seen so far I'd recommend it.
One other that I've yet to try but sounds promising is [Mihai Bazon's UglifyJS](http://github.com/mishoo/UglifyJS). | What do you use to minimize and compress JavaScript libraries? | [
"",
"javascript",
"compression",
"minimize",
"jscompress",
""
] |
When building, or "on the fly" (perhaps with caching) when the users request pages.
And what are the dis/advantages of each. | When the site moves from dev to the live server.
I always have an uncompressed version of the JS on the dev server and a minimized version on the live server.
The advantage of that is that when developing I can run into JS trouble and fix it very simply. I do need to run each changed script through a minimizer, but for me that's not much work.
Occasionally, there are errors that do come up when compressing. You may want to include a command-line version of jslint that runs before compressing, to make sure that the js passes. That will minimize, but not eliminate, all compression errors. | At what stage do you compress/minimize javascript? | [
"",
"javascript",
"jscompress",
""
] |
I have a method like:
```
AverageAndDoSomeMath (Point2)
```
and I wonder how to handle:
```
AverageAndDoSomeMath (Point2) // single
AverageAndDoSomeMath (Point2 collection) // multiple
```
using a single implementation preferably.
For collection, I plan to use the IEnumerable type so I can pass any kind of collection, but for a single value, I don't want to wrap and pass it as a collection, because the collection itself serves no purpose, other than to satisfy the type.
How to best handle it in the clearest, fastest, most efficient way?
EDIT: Maybe I should have chosen a better method name, but the method calculates the average of all the points, for 1 value it doesn't make sense, but think of it as that the value will be used to say calculate another value, but what's important is finding the average, so I can't call the 1st method. | I know you said you didn't want to wrap it and pass it as a collection, but there are two ways you can do this with minimal hassle, so I'll post them in case you weren't aware of one.
You could use params on your method:
```
public void Average(params Point2[] points)
```
after which you can call it with any number of arguments, or with an array:
```
Average(P1);
Average(P2, P3, P4);
Average(); // this one becomes an empty array in the method
Point[] array = new Point[] { P1, P2, P3, P4 };
Average(array);
```
Or alternately, you can use short array syntax to wrap up your object:
```
Average(new[] { P1 }); // an implicitly typed array with 1 element
```
**EDIT:** After reading your note, I would suggest that a params array is the clearest way to do what you want. The only real downside to it is that you can't pass in an `IEnumerable<Point2>` (or other collections like a `List<Point2>`) without first calling `ToArray()`, because it only takes actual arrays. | Write two methods
* AverageAndDoSomeMath (Point2 point)
* AverageAndDoSomeMath (IEnumerable<Point2> points)
and make the 1st one call the 2nd one by wrapping it in a collection.
```
public decimal AverageAndDoSomeMath (Point2 point)
{
return AverageAndSomeMath(new []{point});
}
```
I'm guessing that's what you already had in mind but thought it wasn't the correct solution, right?
**Edit:** Simplified the method (thanks to Freddy Rios) | Handling single and multiple values in C# methods effectively | [
"",
"c#",
".net",
"methods",
""
] |
I am fairly naive when it comes to the world of Java Threading and Concurrency. I am currently trying to learn. I made a simple example to try to figure out how concurrency works.
Here is my code:
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class ThreadedService {
private ExecutorService exec;
/**
* @param poolSize
*/
public ThreadedService(int poolSize) {
if (poolSize < 1) {
this.exec = Executors.newCachedThreadPool();
} else {
this.exec = Executors.newFixedThreadPool(poolSize);
}
}
public void add(final String str) {
exec.execute(new Runnable() {
public void run() {
System.out.println(str);
}
});
}
public static void main(String args[]) {
ThreadedService t = new ThreadedService(25);
for (int i = 0; i < 100; i++) {
t.add("ADD: " + i);
}
}
}
```
What do I need to do to make the code print out the numbers 0-99 in sequential order? | [Thread pools](http://en.wikipedia.org/wiki/Thread_pool) are usually used for operations which do not need [synchronization](http://en.wikipedia.org/wiki/Synchronization_(computer_science)) or are highly [parallel](http://en.wikipedia.org/wiki/Parallel_algorithm).
Printing the numbers 0-99 sequentially is not a concurrent problem and requires threads to be synchronized to avoid printing out of order.
I recommend taking a look at the [Java concurrency lesson](http://java.sun.com/docs/books/tutorial/essential/concurrency/) to get an idea of concurrency in Java. | The simplest solution to your problem is to use a ThreadPool size of 1. However, this isn't really the kind of problem one would use threads to solve.
To expand, if you create your executor with:
```
this.exec = Executors.newSingleThreadExecutor();
```
then your threads will all be scheduled and executed in the order they were submitted for execution. There are a few scenarios where this is a logical thing to do, but in most cases Threads are the wrong tool to use to solve this problem.
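A hedged, runnable sketch of that idea (the class and method names are made up): the tasks record into a shared list instead of printing, so the submission ordering is easy to verify.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical demo class: a single-thread executor runs submitted tasks
// one at a time, in submission (FIFO) order.
public class OrderedPrinter {
    static List<Integer> runOrdered(int n) throws InterruptedException {
        List<Integer> seen = Collections.synchronizedList(new ArrayList<>());
        ExecutorService exec = Executors.newSingleThreadExecutor();
        for (int i = 0; i < n; i++) {
            final int value = i;            // capture loop variable for the lambda
            exec.execute(() -> seen.add(value));
        }
        exec.shutdown();                    // stop accepting tasks; queued ones still run
        exec.awaitTermination(10, TimeUnit.SECONDS);
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOrdered(100)); // prints the numbers 0..99 in order
    }
}
```

Swap in `Executors.newFixedThreadPool(25)` and the ordering guarantee disappears, which is exactly the behaviour the question observed.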
This kind of thing makes sense to do when you need to execute the task in a different thread -- perhaps it takes a long time to execute and you don't want to block a GUI thread -- but you don't need or don't want the submitted tasks to run at the same time. | Java Threading Tutorial Type Question | [
"",
"java",
"multithreading",
"concurrency",
"threadpool",
""
] |
When reading data in chunks of say, 1024, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way?
## edit:
I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is:
```
public class StateObject
{
public Socket workSocket = null;
public const int BUFFER_SIZE = 1024;
public byte[] buffer = new byte[BUFFER_SIZE];
public StringBuilder sb = new StringBuilder();
}
public static void Read_Callback(IAsyncResult ar)
{
StateObject so = (StateObject) ar.AsyncState;
Socket s = so.workSocket;
int read = s.EndReceive(ar);
if (read > 0)
{
so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
if (read == StateObject.BUFFER_SIZE)
{
s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
                new AsyncCallback(Async_Send_Receive.Read_Callback), so);
return;
}
}
if (so.sb.Length > 0)
{
//All of the data has been read, so displays it to the console
string strContent;
strContent = so.sb.ToString();
        Console.WriteLine(String.Format("Read {0} bytes from socket, " +
            "data = {1}", strContent.Length, strContent));
}
s.Close();
}
```
Now this corrected version works fine most of the time, but it fails when *the packet's size is a multiple of the buffer*. The reason for this is that if the buffer gets filled on a read, it is assumed there is more data; but the same problem happens as before. A 2 byte buffer, for example, gets filled twice on a 4 byte packet, and assumes there is more data. It then blocks because there is nothing left to read. **The problem is that the receive function doesn't know when the end of the packet is.**
---
This got me thinking of two possible solutions: I could either have an end-of-packet delimiter or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested).
There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kinda sloppy to me.
The length header sounds ok, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc..
What should I do? | No - call `BeginReceive` again from the callback handler, until `EndReceive` returns 0. Basically, you should keep on receiving asynchronously, assuming you want the fullest benefit of asynchronous IO.
If you look at the MSDN page for [`Socket.BeginReceive`](http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx) you'll see an example of this. (Admittedly it's not as easy to follow as it might be.) | Dang. I'm hesitant to even reply to this given the dignitaries that have already weighed in, but here goes. Be gentle, O Great Ones!
Without having the benefit of reading Marc's blog (it's blocked here due the corporate internet policy), I'm going to offer "another way."
The trick, in my mind, is ***to separate the receipt of the data from the processing of that data***.
I use a StateObject class defined like this. It differs from the MSDN StateObject implementation in that it does not include the StringBuilder object, the BUFFER\_SIZE constant is private, and it includes a constructor for convenience.
```
public class StateObject
{
private const int BUFFER_SIZE = 65535;
public byte[] Buffer = new byte[BUFFER_SIZE];
public readonly Socket WorkSocket = null;
public StateObject(Socket workSocket)
{
WorkSocket = workSocket;
}
}
```
I also have a Packet class that is simply a wrapper around a buffer and a timestamp.
```
public class Packet
{
public readonly byte[] Buffer;
public readonly DateTime Timestamp;
public Packet(DateTime timestamp, byte[] buffer, int size)
{
Timestamp = timestamp;
Buffer = new byte[size];
System.Buffer.BlockCopy(buffer, 0, Buffer, 0, size);
}
}
```
My ReceiveCallback() function looks like this.
```
public static ManualResetEvent PacketReceived = new ManualResetEvent(false);
public static List<Packet> PacketList = new List<Packet>();
public static object SyncRoot = new object();
public static void ReceiveCallback(IAsyncResult ar)
{
try {
StateObject so = (StateObject)ar.AsyncState;
int read = so.WorkSocket.EndReceive(ar);
if (read > 0) {
Packet packet = new Packet(DateTime.Now, so.Buffer, read);
lock (SyncRoot) {
PacketList.Add(packet);
}
PacketReceived.Set();
}
so.WorkSocket.BeginReceive(so.Buffer, 0, so.Buffer.Length, 0, ReceiveCallback, so);
} catch (ObjectDisposedException) {
// Handle the socket being closed with an async receive pending
} catch (Exception e) {
// Handle all other exceptions
}
}
```
Notice that this implementation does absolutely no processing of the received data, nor does it have any expectations as to how many bytes are supposed to have been received. It simply receives whatever data happens to be on the socket (up to 65535 bytes) and stores that data in the packet list, and then it immediately queues up another asynchronous receive.
Since processing no longer occurs in the thread that handles each asynchronous receive, the data will obviously be processed by a *different* thread, which is why the Add() operation is synchronized via the lock statement. In addition, the processing thread (whether it is the main thread or some other dedicated thread) needs to know ***when*** there is data to process. To do this, I usually use a ManualResetEvent, which is what I've shown above.
Here is how the processing works.
```
static void Main(string[] args)
{
Thread t = new Thread(
delegate() {
List<Packet> packets;
while (true) {
PacketReceived.WaitOne();
PacketReceived.Reset();
lock (SyncRoot) {
packets = PacketList;
PacketList = new List<Packet>();
}
foreach (Packet packet in packets) {
// Process the packet
}
}
}
);
t.IsBackground = true;
t.Name = "Data Processing Thread";
t.Start();
}
```
That's the basic infrastructure I use for all of my socket communication. It provides a nice separation between the receipt of the data and the processing of that data.
As to the other question you had, it is important to remember with this approach that each Packet instance does not necessarily represent a complete message within the context of your application. A Packet instance might contain a partial message, a single message, or multiple messages, and your messages might span multiple Packet instances. I've addressed how to know when you've received a full message in the related question you posted [here](https://stackoverflow.com/questions/586819/how-to-detect-when-a-protocol-buffer-message-is-fully-received/599309#599309). | C# Begin/EndReceive - how do I read large data? | [
"",
"c#",
"networking",
"sockets",
"asynchronous",
""
] |
I have a large number of email addresses to validate. Initially I parse them with a regexp to throw out the completely crazy ones. I'm left with the ones that look sensible but still might contain errors.
I want to find which addresses have valid domains, so given me@abcxyz.com I want to know if it's even possible to send emails to abcxyz.com.
I want to test that to see if it corresponds to a valid A or MX record - is there an easy way to do it using only Python standard library? I'd rather not add an additional dependency to my project just to support this feature. | There is no DNS interface in the standard library so you will either have to roll your own or use a third party library.
This is not a fast-changing concept though, so the external libraries are stable and well tested.
The one I've used successful for the same task as your question is [PyDNS](http://pydns.sourceforge.net/).
A very rough sketch of my code is something like this:
```
import DNS, smtplib
DNS.DiscoverNameServers()
mx_hosts = DNS.mxlookup(hostname)
# Just doing the mxlookup might be enough for you,
# but do something like this to test for SMTP server
for mx in mx_hosts:
smtp = smtplib.SMTP()
#.. if this doesn't raise an exception it is a valid MX host...
try:
smtp.connect(mx[1])
except smtplib.SMTPConnectError:
continue # try the next MX server in list
```
Another library that might be better/faster than PyDNS is [dnsmodule](http://c0re.23.nu/c0de/dnsmodule/) although it looks like it hasn't had any activity since 2002, compared to PyDNS last update in August 2008.
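Whichever DNS library you pick, extracting the domain to look up is best done with the standard library rather than a regexp; a small stdlib-only sketch (the `domain_of` helper name is mine):

```python
from email.utils import parseaddr

def domain_of(address):
    # parseaddr copes with display names, angle brackets and comments
    _, addr = parseaddr(address)
    if "@" not in addr:
        return None  # nothing to look up
    return addr.rsplit("@", 1)[1].lower()

print(domain_of("Me <me@abcxyz.com>"))  # abcxyz.com
```

The returned domain is what you would then feed to the MX lookup above.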
**Edit**: I would also like to point out that email addresses can't be easily parsed with a regexp. You are better off using the parseaddr() function in the standard library email.utils module (see my [answer to this question](https://stackoverflow.com/questions/550009/parsing-from-addresses-from-email-text/550036#550036) for example). | The easy way to do this NOT in the standard library is to use the [validate\_email package](https://pypi.python.org/pypi/validate_email "validate_email package"):
```
from validate_email import validate_email
is_valid = validate_email('example@example.com', check_mx=True)
```
For faster results to process a large number of email addresses (e.g. list `emails`, you could stash the domains and only do a check\_mx if the domain isn't there. Something like:
```
emails = ["email@example.com", "email@bad_domain", "email2@example.com", ...]
verified_domains = set()
for email in emails:
domain = email.split("@")[-1]
domain_verified = domain in verified_domains
is_valid = validate_email(email, check_mx=not domain_verified)
if is_valid:
verified_domains.add(domain)
``` | How do I validate the MX record for a domain in python? | [
"",
"python",
"email",
"dns",
""
] |
I need to salt and hash some passwords so that I can store them safely in a database.
Do you have any advice or ideas as to how best to do this using Linq To SQL? | LINQ to SQL doesn't have much relevance in this case. You could use any mechanism you want to, because you won't be doing hashing and salting in SQL.
The steps to save a password would go along these lines:
1. Receive the password as cleartext, along with the user's ID.
2. Generate (and remember) the salt.
3. Combine the salt with the password text, e.g. prepend it or append it.
4. Hash the resulting text with your hash function
5. Store the user ID, the hash and the salt in your DB.
The steps to verify a password would go along these lines:
1. Receive the password as cleartext, along with the user's ID.
2. Retrieve the hashed and the salt from the DB for the supplied user ID.
3. Combine the salt with the supplied password text.
4. Hash the resulting text with your hash function.
5. Compare the hash from the function with the hash retrieved from the DB.
6. If they are equal, the supplied password was correct. | Since you are using the .NET and C#, use can use the [System.Security.Cryptography.SHA512Managed](http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha512managed.aspx) namespace for generating the salt value and password hash | Salting and Hashing Passwords using Linq To SQL | [
"",
"c#",
".net",
"database",
"linq-to-sql",
""
] |
I'm currently using a simple XML file that contains license information along with the data's signature and public key. So far, this method is working great. However, I'm seeing one rather large security flaw...
When my ASP.NET MVC application starts, it verifies the license and sets the "ValidLicense" bool property accordingly. During each request, this property is checked and an error message is displayed if the license is invalid.
As you can guess, there's absolutely nothing to stop a competent user from simply modifying my assembly to set "ValidLicense" to true regardless of the license's validity. I know this can be done to pretty much any application, but it seems incredibly easy to do with .NET assemblies.
What are some ways that I can stop this from happening, or at least make it a little more difficult to crack the license verification procedure?
I'd rather stay away from assembly encryption and obfuscation systems, if possible. Go ahead and suggest them if you feel that they are good enough to warrant the cost and extra headache, however. | The only way to win is not to play.
The people who are going to steal your stuff (regardless of what protections you put in place), are *not* the people who are going to pay for it if it's too hard for them to break. | Instead of a simple boolean variable, you could perform a complex calculation every time you need to verify the license and base your logic on the result from that calculation. You do get a perf hit, though. And cracking your assembly wouldn't be that much harder anyway.
You could also employ some more advanced techniques, like dynamic mutation of the code and using the a corresponding mutable function to control the flow of your logic.
However, you should ask yourself does your assembly really contain such precious intelectual property, as to warrant the time and efforts to implement anything like this? It might be more viable and cheaper to go the legal route and battle the potential piracy if and when it occurs. | Verifying license information without a boolean check? | [
"",
"c#",
"asp.net-mvc",
"licensing",
""
] |
On the project we are trying to reach an agreement on the namespace usage.
We decided that the first level will be "productName" and the second is "moduleName".
```
productName::moduleName
```
Now if the module is a kind of utility module there is no problem adding a third namespace. For example, adding "str" gives productName::utilityModuleName::str - a separate space where all string-related stuff will go.
If the module is the main business module we have many opportunities and almost no agreement.
For example
```
class productName::mainModuleName::DomainObject
```
and
```
class productName::mainModuleName::DomainObjectSomethingElseViewForExample
```
can be both at
```
namespace productName::mainModuleName::domainObject
class Data
class ViewForExample
```
Why should we create inner not private classes and not namespaces?
Why should we create classes where all methods are static (except for cases when the class is going to be a template parameter)?
The project consists of 1 GB of source code.
So, what is the best practice for dividing modules into namespaces in C++?
Namespaces are meant to establish context only so you don't have naming confilcts.
**General rules:**
Specifying too much context is not needed and will cause more inconvenience than it is worth.
So you want to use your best judgment, but still follow these 2 rules:
* Don't be too general when using namespaces
* Don't be too specific when using namespaces
I would not be so strict about how to use namespace names, and to simply use namespaces based on a related group of code.
**Why namespaces that are too general are not helpful:**
The problem with dividing the namespace starting with the product name, is that you will often have a component of code, or some base library that is common to multiple products.
You also will not be using Product2 namespaces inside Product1, so explicitly specifying it is pointless. If you were including Product2's files inside Product1, then is this naming conversion still useful?
**Why namespaces that are too specific are not helpful:**
When you have namespaces that are too specific, the line between these distinct namespaces start to blur. You start using the namespaces inside each other back and forth. At this time it's better to generalize the common code together under the same namespace.
**Classes with all static vs templates:**
> "Why should we create inner not
> private classes and not namespaces?
> Why should we create classes where all
> methods are static"
Some differences:
* Namespaces can be implied by using the `using` keyword
* Namespaces can be aliased, classes are types and can be typedef'ed
* Namespaces can be added to; you can add functionality to it at any time and add to it directly
* Classes cannot be added to without making a new derived class
* Namespaces can have forward declarations
* With classes you can have private members and protected members
* Classes can be used with templates
**Exactly how to divide:**
> "Project consist of 1Gb of source
> code. So, what is the best practice to
> divide modules on namespaces in the
> c++?"
It's too subjective to say exactly how to divide your code without the exact source code. Dividing based on the modules though sounds logical, just not the whole product. | This is all subjective, but I would hesitate to go more than 3 levels deep. It just gets too unwieldy at some point. So unless your code base is very, very large, I would keep it pretty shallow.
We divide our code into subsystems, and have a namespace for each subsystem. Utility things would go into their own namespace if indeed they are reusable across subsystems. | c++ namespace usage and naming rules | [
"",
"c++",
"namespaces",
""
] |
Is there a way using py2exe or some other method to generate dll files instead of exe files?
I would want to basically create a normal win32 dll with normal functions but these functions would be coded in python instead of c++. | I think you could solve this by doing some hacking:
* Take a look at the zipextimporter module in py2exe . It helps with importing pyd-files from a zip.
* Using that, you might be able to load py2exe's output file in your own app/dll using raw python-api. (Use boost::python if you can and want)
* And, since py2exe's outputfile is a zip, you could attach it at the end of your dll, making the whole thing even more integrated. (Old trick that works with jar-files too.)
Not tested, but I think the theory is sound.
Essentially, you reimplement py2exe's output executable's main() in your dll. | I doubt that py2exe does this, as it's architectured around providing a bootstrapping .exe that rolls out the python interpreter and runs it.
But why not just embed Python in C code, and compile that code as a DLL? | py2exe to generate dlls? | [
"",
"python",
"windows",
"dll",
"py2exe",
""
] |
Does anyone have experience using python to create a COM object that implements some custom IDLs?
Basically I'd like to know if it's extremely simple to do compared to c++, and if it is do you know of a good tutorial? | The tutorial you are looking for is in the [Python Programming On Win32](http://oreilly.com/catalog/9781565926219/) book, by Mark Hammond and Andy Robinson. A bit old, and the COM object creation info is distributed across some chapters.
A more recent example, [simple COM server using Python](http://techarttiki.blogspot.com/2008/03/calling-python-from-maxscript.html), can give you a quick start. | There is also [comtypes](http://pypi.python.org/pypi/comtypes), which allows to access and implement custom interfaces. An article on [codeproject](http://www.codeproject.com/KB/COM/python-comtypes-interop.aspx) has a nice tutorial. | Quick primer on implementing a COM object that implement some custom IDLs in python? | [
"",
"python",
"com",
""
] |
Firstly, I'm pretty new to C++. I believe that `getline()` isn't a standard C function, so `#define _GNU_SOURCE` is required to use it. I'm now using C++ and g++ tells me that `_GNU_SOURCE` is already defined:
```
$ g++ -Wall -Werror parser.cpp
parser.cpp:1:1: error: "_GNU_SOURCE" redefined
<command-line>: error: this is the location of the previous definition
```
Can anyone confirm if this is standard, or is its definition hidden somewhere in my setup? I'm not sure of the meaning of the final line quoted.
The file's includes are as follows, so presumably it's defined in one or more of these?
```
#include <iostream>
#include <string>
#include <cctype>
#include <cstdlib>
#include <list>
#include <sstream>
```
Thanks! | I think g++, from version 3, automagically defines `_GNU_SOURCE`. This is supported by your third line in the error stating that the first definition was done on the command line (with nary a `-D_GNU_SOURCE` in sight):
```
<command-line>: error: this is the location of the previous definition
```
If you don't want it, `#undef` it as the first line in your compilation unit. You may need it, however, in which case use:
```
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
```
The reason you're getting the error is because you're re-defining it. It shouldn't be an error if you define it to what it already was. At least that's the case with C, it may be different with C++. Based on the GNU headers, I would say they're doing an implicit `-D_GNU_SOURCE=1` which is why it thinks you're *re-defining* it to something else.
The following snippet should tell you its value provided you haven't changed it.
```
#define STR2(x) #x
#define STR(x) STR2(x) // two levels so the value, not the name, gets stringized
printf("_GNU_SOURCE = [" STR(_GNU_SOURCE) "]\n"); // first line in main.
``` | I've always had to use one of the following in C++. Never had to declare \_GNU\_ anything before. I usually run in \*nix so I usually use gcc and g++ as well.
```
string s = cin.getline()
char c;
cin.getchar(&c);
``` | getline() in C++ - _GNU_SOURCE not needed? | [
"",
"c++",
"getline",
""
] |
ISO C++ says that the inline definition of a member function in C++ is the same as declaring it with inline. This means that the function will be defined in every compilation unit in which the member function is used. However, if the function call cannot be inlined for whatever reason, the function is to be instantiated "as usual". (<http://msdn.microsoft.com/en-us/library/z8y1yy88%28VS.71%29.aspx>)
The problem I have with this definition is that it does not tell in which translation unit it would be instantiated.
The problem I encountered is that when facing two object files in a single static library, both of which have the reference to some inline member function which cannot be inlined, the linker might "pick" an arbitrary object file as a source for the definition. This particular choice might introduce unneeded dependencies. (among other things)
For instance:
**In a static library**
```
A.h:
class A{
public:
virtual bool foo() { return true; }
};
```
U1.cpp:
```
A a1;
```
U2.cpp:
```
A a2;
```
*and lots of dependencies*
**In another project**
main.cpp:
```
#include "A.h"
int main(){
A a;
a.foo();
return 0;
}
```
The second project refers the first. How do I know which definition the compiler will use, and, consequently which object files with their dependencies will be linked in? Is there anything the standard says on that matter? (Tried, but failed to find that)
Thanks
Edit: since I've seen some people misunderstand what the question is, I'd like to emphasize: *if the compiler decides to create a symbol for that function (and in this case it will, because of 'virtualness'), there will be several (externally visible) instantiations in different object files; which definition (from which object file) will the linker choose?*
In the example you gave, you really don't know, but it doesn't matter. The linker won't resolve dependencies based on non-inlined instances alone ever. The result (in terms of modules included by the linker) will be as good as if the inline method didn't exist. | Just my two cents. This is not about virtual function in particular, but about inline and member-functions generally. Maybe it is useful.
### C++
As far as Standard C++ is concerned, an inline function *must* be defined in every translation unit in which it is used. And a non-static inline function will have the same static variables in every translation unit and the same address. The compiler/linker will have to merge the multiple definitions into one function to achieve this. So, always place the definition of an inline function into the header - or place no declaration of it into the header if you define it only in the implementation file (".cpp") (for a non-member function), because if you would, and someone used it, you would get a linker error about an undefined function or something similar.
This is different from non-inline functions which must be defined only once in an entire program (*one-definition rule*). For inline functions, multiple definitions as outlined above are rather the normal case. And this is independent of whether the call is actually inlined or not. The rules about inline functions still matter. Whether the Microsoft compiler adheres to those rules or not - I can't tell you. If it adheres to the Standard in that regard, then it will. However, I could imagine some combination using virtual, DLLs and different TUs could be problematic. I've never tested it but I believe there are no problems.
For member-functions, if you define your function in the class, it is implicitly inline. And because it appears in the header, the rule that it has to be defined in every translation unit in which it is used is automatically satisfied. However, if you define the function out-of-class and in a header file (for example because there is a circular dependency with code in between), then that definition has to be inline if you include the corresponding file more than once, to avoid multiple-definition errors thrown by the linker. Example of a file `f.h`:
```
struct f {
// inline required here or before the definition below
inline void g();
};
void f::g() { ... }
```
This would have the same effect as placing the definition straight into the class definition.
### C99
Note that the rules about inline functions are more complicated for C99 than for C++. Here, an inline function can be defined as an *inline definition*, of which more than one can exist in the entire program. But if such an (inline) definition is used (e.g. if it is called), then there *must* also be *exactly one* external definition in the entire program, contained in another translation unit. Rationale for this (quoting from a PDF explaining the rationale behind several C99 features):
> Inlining in C99 does extend the C++ specification in two ways. First, if a function is declared inline in one translation unit, it need not be declared inline in every other translation unit. This allows, for example, a library function that is to be inlined within the library but available only through an external definition elsewhere. The alternative of using a wrapper function for the external function requires an additional name; and it may also adversely impact performance if a translator does not actually do inline substitution.
>
> Second, the requirement that all definitions of an inline function be "exactly the same" is replaced by the requirement that the behavior of the program should not depend on whether a call is implemented with a visible inline definition, or the external definition, of a function. This allows an inline definition to be specialized for its use within a particular translation unit. For example, the external definition of a library function might include some argument validation that is not needed for calls made from other functions in the same library. These extensions do offer some advantages; and programmers who are concerned about compatibility can simply abide by the stricter C++ rules.
Why do I include C99 here? Because I know that the Microsoft compiler supports some stuff from C99. So in those MSDN pages, some stuff may come from C99 too - I haven't figured out anything in particular though. One should be careful when reading it and when applying those techniques to one's own C++ code intended to be portable C++, probably noting which parts are C99-specific and which are not.
A good place to test small C++ snippets for Standard conformance is the [comeau online compiler](http://www.comeaucomputing.com/tryitout/). If it gets rejected, one can be pretty sure it is not strictly Standard conforming. | In which translation unit are inline functions instantiated, and which definition does the linker use? | [
"",
"c++",
"member-functions",
"inline-functions",
""
] |
I have an Office add-in project with a setup project for deployment (using VS 2008), and I need to build the same product in a few different flavours.
I'm looking for a good way to make the installer resources dependent on the build configuration. The product name, manufacturer, manufacturer url, author, etc., etc. properties should be different for each of the builds. Also, the images shown in the installer UI will be different as well.
If possible, I'd like to do this without creating a new project for each different UI.
I believe this could be done using the ORCAS tool and build events, but this approach seems overly complex and fragile.
Does anyone have any ideas on a clean way to go about doing this? | Another option, although it is a bit more work up front, is to throw the setup project out the window and use e.g. WiX instead (<http://wix.sf.net>) | I think the best way is to create a hand-made NAnt script.
Explanation:
NAnt is a build tool that, besides building, can execute scripts. By creating scripts and targets you can modify files that are checked out from your SCM, and then order them to build. Check out the [official NAnt site](http://nant.sourceforge.net/) for more information.
"",
"c#",
"visual-studio-2008",
"installation",
"windows-installer",
""
] |
I am trying to determine the size in bytes of the contents in a `VARBINARY(MAX)` field in SQL Server 2005, using SQL. As I doubt there is native support for this, could it be done using CLR integration? Any ideas would be greatly appreciated. | Actually, you can do this in T-SQL!
`DATALENGTH(<fieldname>)` will work on `varbinary(max)` fields. | The VARBINARY(MAX) field allocates variable length data up to just under 2GB in size.
You can use DATALENGTH() function to determine the length of the column content.
For example:
```
SELECT DATALENGTH(CompanyName), CompanyName
FROM Customers
``` | Size of VARBINARY field in SQL Server 2005 | [
"",
"sql",
"sql-server",
"sql-server-2005",
"clr",
""
] |
Non-blocking TCP/IP `SocketChannel`s and `Selector` in NIO help me to handle many TCP/IP connections with small number of threads. But how about UDP `DatagramChannels`? (I must admit that I'm not very familiar with UDP.)
UDP send operations don't seem to block even if the `DatagramChannel` is not operating in blocking mode. Is there really a case where `DatagramSocket.send(DatagramPacket)` blocks due to congestion or something similar? I'm really curious if there's such a case and what cases are possible in a production environment.
If `DatagramSocket.send(DatagramPacket)` doesn't actually block and I am not going to use a connected `DatagramSocket` and bind to only one port, is there no advantage of using non-blocking mode with `DatagramChannel` and `Selector`? | It's been a while since I've used Java's DatagramSockets, Channels and the like, but I can still give you some help.
The UDP protocol does not establish a connection like TCP does. Rather, it just sends the data and forgets about it. If it is important to make sure that the data actually gets there, that is the client's responsibility. Thus, even if you are in blocking mode, your send operation will only block for as long as it takes to flush the buffer out. Since UDP does not know anything about the network, it will write it out at the earliest opportunity without checking the network speed or if it actually gets to where it is supposed to be going. Thus, to you, it appears as if the channel is actually immediately ready for more sending. | UDP doesn't block (It only blocks while it is transferring the data to the OS)
This means if at any point the next hop/switch/machine cannot buffer the UDP packet it drops it. This can be desirable behaviour in some situations. But it is something you need to be aware of.
UDP also doesn't guarantee to
* deliver packets in the order they are sent.
* not to break up large packets.
* forward packets across switches. Often UDP forwarding between switches is turned off.
However UDP does support multicast so the same packet can be delivered to one or more hosts. The sender has no idea if anyone receives the packets however.
A tricky thing about UDP is it works most of the time, but fails badly sometimes in ways which are very difficult to reproduce. For this reason, you shouldn't assume reliability even if you do a few tests and it appears to work. | Non-blocking UDP I/O vs blocking UDP I/O in Java | [
"",
"java",
"udp",
"nio",
"datagram",
""
] |
I'm having a problem with my work that hopefully reduces to the following: I have two `List<int>`s, and I want to see if any of the `int`s in `ListA` are equal to any `int` in `ListB`. (They can be arrays if that makes life easier, but I think `List<>` has some built-in magic that might help.) And I'm sure this is a LINQ-friendly problem, but I'm working in 2.0 here.
My solution so far has been to `foreach` through ListA, and then `foreach` through ListB,
```
foreach (int a in ListA)
{
foreach (int b in ListB)
{
if (a == b)
{
return true;
}
}
}
```
which was actually pretty slick when they were each three items long, but now they're 200 long and they frequently don't match, so we get the worst-case of N^2 comparisons. Even 40,000 comparisons go by pretty fast, but I think I might be missing something, since N^2 seems pretty naive for this particular problem.
Thanks! | With [LINQ](http://msdn.microsoft.com/en-us/library/bb397926.aspx), this is trivial, as you can call the [`Intersect` extension method](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.intersect.aspx) on the [`Enumerable` class](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.aspx) to give you the set intersection of the two arrays:
```
var intersection = ListA.Intersect(ListB);
```
However, this is the *set* intersection, meaning if `ListA` and `ListB` don't have unique values in them, you won't get any duplicates. In other words, if you have the following:
```
var ListA = new [] { 0, 0, 1, 2, 3 };
var ListB = new [] { 0, 0, 0, 2 };
```
Then `ListA.Intersect(ListB)` produces:
```
{ 0, 2 }
```
If you're expecting:
```
{ 0, 0, 2 }
```
Then you're going to have to maintain a count of the items yourself and yield/decrement as you scan the two lists.
First, you'd want to collect a [`Dictionary<TKey, int>`](http://msdn.microsoft.com/en-us/library/xfhwa508.aspx) with the counts of the individual items:
```
var countsOfA = ListA.GroupBy(i => i).ToDictionary(g => g.Key, g => g.Count());
```
From there, you can scan `ListB` and place that in a list when you come across an item in `countsOfA`:
```
// The items that match.
IList<int> matched = new List<int>();
// Scan
foreach (int b in ListB)
{
// The count.
int count;
// If the item is found in a.
if (countsOfA.TryGetValue(b, out count))
{
// This is positive.
Debug.Assert(count > 0);
// Add the item to the list.
matched.Add(b);
// Decrement the count. If
// 0, remove.
if (--count == 0) countsOfA.Remove(b);
}
}
```
You can wrap this up in an extension method that defers execution like so:
```
public static IEnumerable<T> MultisetIntersect<T>(this IEnumerable<T> first,
IEnumerable<T> second)
{
// Call the overload with the default comparer.
return first.MultisetIntersect(second, EqualityComparer<T>.Default);
}
public static IEnumerable<T> MultisetIntersect<T>(this IEnumerable<T> first,
IEnumerable<T> second, IEqualityComparer<T> comparer)
{
// Validate parameters. Do this separately so check
// is performed immediately, and not when execution
// takes place.
if (first == null) throw new ArgumentNullException("first");
if (second == null) throw new ArgumentNullException("second");
if (comparer == null) throw new ArgumentNullException("comparer");
// Defer execution on the internal
// instance.
return first.MultisetIntersectImplementation(second, comparer);
}
private static IEnumerable<T> MultisetIntersectImplementation<T>(
this IEnumerable<T> first, IEnumerable<T> second,
IEqualityComparer<T> comparer)
{
// Validate parameters.
Debug.Assert(first != null);
Debug.Assert(second != null);
Debug.Assert(comparer != null);
// Get the dictionary of the first.
IDictionary<T, long> counts = first.GroupBy(t => t, comparer).
ToDictionary(g => g.Key, g => g.LongCount(), comparer);
// Scan
foreach (T t in second)
{
// The count.
long count;
// If the item is found in a.
if (counts.TryGetValue(t, out count))
{
// This is positive.
Debug.Assert(count > 0);
// Yield the item.
yield return t;
// Decrement the count. If
// 0, remove.
if (--count == 0) counts.Remove(t);
}
}
}
```
Note that both of these approaches are (and I apologize if I'm butchering Big-O notation here) `O(N + M)` where `N` is the number of items in the first array, and `M` is the number of items in the second array. You have to scan each list only once, and it's assumed that getting the hash codes and performing lookups on the hash codes is an `O(1)` (constant) operation. | Load the whole of ListA into a HashSet instance, and then test foreach item in ListB against the HastSet: I'm pretty sure that this would be O(N).
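As an aside (a sketch, not part of the original answer): the counting approach described above is language-agnostic, and is easy to check in Python against the example lists given earlier in the answer.

```python
from collections import Counter

def multiset_intersect(first, second):
    # Count occurrences of each item in `first` (O(N)).
    counts = Counter(first)
    result = []
    # Scan `second` once (O(M)); keep an item only while a
    # matching occurrence of it remains in `first`.
    for item in second:
        if counts[item] > 0:
            result.append(item)
            counts[item] -= 1
    return result

print(multiset_intersect([0, 0, 1, 2, 3], [0, 0, 0, 2]))  # [0, 0, 2]
```

This yields `{ 0, 0, 2 }` rather than the `{ 0, 2 }` that a pure set intersection would give.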
```
//untested code ahead
HashSet<int> hashSet = new HashSet<int>(ListA);
foreach (int i in ListB)
{
if (hashSet.Contains(i))
return true;
}
```
Here's the same thing in one line:
```
return new HashSet<int>(ListA).Overlaps(ListB);
```
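To illustrate why this is linear (a hedged sketch in Python, not part of the original answer — the hash-set idea itself is language-agnostic):

```python
def lists_overlap(list_a, list_b):
    # Building the set costs O(N); each membership test is
    # O(1) on average, so the whole check is O(N + M).
    seen = set(list_a)
    return any(b in seen for b in list_b)

print(lists_overlap([1, 2, 3], [7, 8, 2]))  # True
print(lists_overlap([1, 2, 3], [7, 8, 9]))  # False
```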
`HashSet<T>` doesn't exist before .NET 3.5, so in .NET 2.0 you can use `Dictionary<int,object>` (instead of `HashSet<int>`), and always store null as the object/value in the Dictionary since you're only interested in the keys. | matching items from two lists (or arrays) | [
"",
"c#",
".net",
"arrays",
"list",
""
] |
How can I invoke a user control's public method from within the page?
I load the control dynamically inside OnInit on the page. Any ideas? For some reason I am getting a build error that says the method doesn't exist, even though it's public. Starting to think that user controls are not worth all the hassle. | You've said
```
Control fracTemplateCtrl =
(FracTemplateCtrl)LoadControl("FracTemplateCtrl.ascx");
fracTemplateCtrl.TestMethod();
```
you need to say
```
FracTemplateCtrl fracTemplateCtrl =
(FracTemplateCtrl)LoadControl("FracTemplateCtrl.ascx");
fracTemplateCtrl.TestMethod();
```
Note that `fracTemplateCtrl` is declared as a `FracTemplateCtrl`, so Visual Studio knows that it has a `TestMethod()`. When it is declared as a `Control`, Visual Studio can't make this assumption. | Are you casting the User Control to the correct type? | Public Methods inside User control c# .net | [
"",
"c#",
"asp.net",
"user-controls",
""
] |
I am trying to download the content of a secure (uses https) webpage using php and curl libraries.
However, reading failed and I get error 60: "SSL certificate problem, verify that the CA cert is OK."
also "Details: SSL3\_GET\_SERVER\_CERTIFICATE:certificate verify failed"
So... pretty self-explanatory error messages.
My question is: How do I send an SSL certificate (the right one?) and get this page to verify it and let me in?
Also, here is my options array in case you are wondering:
```
$options = array(
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_USERAGENT => "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:x.x.x) Gecko/20041107 Firefox/x.x", // who am i
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
CURLOPT_SSL_VERIFYHOST => 1,
);
```
Any suggestions would be great,
Andrew | It sounds like you might be misinterpreting the error. It looks to me like the site you're connecting to uses a self-signed certificate or has some other common problem. Just like the usual browser warning, your easiest workaround is to disable the checks.
You'll need to set `CURLOPT_SSL_VERIFYPEER` and `CURLOPT_SSL_VERIFYHOST` to `FALSE`. This should disable the two main checks. They may not both be required, but this should at least get you going.
**To be clear, this disables a feature designed to protect you. Only do this if you have verified the certificate and server by some other means.**
More info on the PHP site: [curl\_setopt()](http://ca.php.net/manual/en/function.curl-setopt.php) | If you want to use SSL peer verification (turning it off is not always good idea) you may use next solution on Windows globally for all applications:
1. Download file with root certificates from here:
<http://curl.haxx.se/docs/caextract.html>
2. Add to php.ini:
`curl.cainfo=C:/path/to/cacert.pem`
that's all magic, CURL can now verify certificates.
*(as far as I know there is no such problem on Linux, at least on Ubuntu)* | reading SSL page with CURL (php) | [
"",
"php",
"ssl",
"curl",
""
] |
I need to write a simple shell-style application in Java. It would be nice to use a library that takes care of parsing commands and takes care of things like flags and optional/mandatory parameters...
Something that has built-in TAB completion would be particularly great. | You can use [JLine](http://jline.sourceforge.net/) for editing and [Apache Commons CLI](http://commons.apache.org/cli/usage.html) for command line parsing. | [BeanShell](http://www.beanshell.org/) ? | Are there good Java libraries that facilitate building interactive shell-style applications? | [
"",
"java",
""
] |