| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to right align a control in a [`StatusStrip`](http://msdn.microsoft.com/en-us/library/system.windows.forms.statusstrip.aspx). How can I do that?
I don't see a property to set on `ToolStripItem` controls that specifies their physical alignment on the parent `StatusStrip`.
[How do I get the Messages drop-down to be right aligned?](http://i.friendfeed.com/ed90b205f64099687db30553daa79d075f280b90) | Found it via the MSDN forums almost immediately after posting :)
You can use a [`ToolStripLabel`](http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstripstatuslabel.aspx) to pseudo right align controls by setting the `Text` property to `string.Empty` and setting the `Spring` property to `true`. This will cause it to fill all of the available space and push all the controls to the right of the [`ToolStripLabel`](http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstripstatuslabel.aspx) over. | For me it took two simple steps:
1. Set `MyRightIntendedToolStripItem.Alignment` to `Right`
2. Set `MyStatusStrip.LayoutStyle` to `HorizontalStackWithOverflow` | How do I right align controls in a StatusStrip? | [
"",
"c#",
"winforms",
"statusstrip",
""
] |
I've managed to use [Sun's MSCAPI provider](http://java.sun.com/javase/6/docs/technotes/guides/security/SunProviders.html#SunMSCAPI) in my application. The problem I'm having now is that it always pops up a window, asking for a password, even though I've provided it in the code. This is a problem, because I need the cryptography functionality in a webservice.
Here's the code I have now:
```
String alias = "Alias to my PK";
char[] pass = "MyPassword".toCharArray();
KeyStore ks = KeyStore.getInstance("Windows-MY");
ks.load(null, pass);
Provider p = ks.getProvider();
Signature sig = Signature.getInstance("SHA1withRSA",p);
PrivateKey key = (PrivateKey) ks.getKey(alias, pass);
sig.initSign(key);
sig.update("Testing".getBytes());
sig.sign();
```
This is working great, but I get a popup asking for the password when the last line is run. How do I prevent that? | The MSCAPI provider does not support providing the password to CAPI:
> A compatibility mode is supported for applications that assume a password must be supplied. It permits (but ignores) a non-null password. The mode is enabled by default. [(1)](http://www.java2s.com/Open-Source/Java-Document/6.0-JDK-Platform/windows/sun/security/mscapi/KeyStore.java.htm)
To set the password through CAPI, you must call [CryptSetKeyParam](http://msdn.microsoft.com/en-us/library/aa380272(VS.85).aspx) with the undocumented KP\_KEYEXCHANGE\_PIN or KP\_SIGNATURE\_PIN and hope your underlying hardware token provider supports it. (They are not completely undocumented - the documentation for Windows CE and Windows Mobile mentions them [(2)](http://msdn.microsoft.com/en-us/library/aa923647.aspx) and they are included in the header files). | My guess is that Windows itself is popping up the dialog.
Import your key again using the Certificate Import Wizard, but make sure that you don't check the following option on the "Password" screen.
> [\_] Enable strong private key protection. You will be prompted every time the private key is used by an application if you enable this option. | Java security - MSCAPI provider: How to use without password popup? | [
"",
"java",
"security",
"cryptography",
"cryptoapi",
""
] |
I was refactoring some code, and part of it included moving it from VB.Net to C#.
The old code declared a member like this:
```
Protected viewMode As New WebControl
```
The new code, which I eventually got working, looks like this:
```
protected WebControl _viewMode = new WebControl(HtmlTextWriterTag.Span);
```
I can presume that the `New` keyword meant: call the constructor! But how was VB.Net calling a constructor (a parameter-less one) that I couldn't call in C#? | The reason this worked in VB, and not in C#, had nothing to do with assemblies.
The default constructor for WebControl is protected.
VB and C# have different interpretations of what "protected" means.
In VB, you can access a protected member of a class from any method in any type that derives from the class.
That is, VB allows this code to compile:
```
Class Base
    Protected m_x As Integer
End Class

Class Derived1
    Inherits Base
    Public Sub Foo(other As Base)
        other.m_x = 2
    End Sub
End Class

Class Derived2
    Inherits Base
End Class
```
Because a "Derived1" is a "Base", it can access protected members of "other", which is also a "Base".
C# takes a different point of view. It doesn't allow the "sideways" access that VB does.
It says that access to protected members can be made via "this" or any object of the same type as the class that contains the method.
Because "Foo" here is defined in "Derived1", C# will only allow "Foo" to access "Base" members from a "Derived1" instance. It's possible for "other" to be something that is not a "Derived1" (it could, for example, be a "Derived2"), and so it does not allow access to "m\_x".
In this case of your code, VB allowed "sideways" access to the "WebControl" constructor.
C#, however, did not. | The default constructor for WebControl (implicit in the VB line) is to use a span. You can call that constructor in c# as well as VB.NET. | Difference between VB.Net and C# "As New WebControl" | [
"",
"c#",
".net",
"asp.net",
"vb.net",
"clr",
""
] |
I am familiar with sending email from Java programs. Is it possible to configure the email so that Outlook will recognize that it should expire at a certain time? | Add a header to the `MimeMessage` called `"Expiry-Date"` using the (joda-time) format `"EEE MMM d HH:mm:ss yyyy Z"`
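A minimal sketch of producing that header value; the JDK's own `SimpleDateFormat` accepts the same pattern letters, and the `message.addHeader` call shown in the comment assumes a JavaMail `MimeMessage` is at hand (it is hypothetical here):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class ExpiryHeader {
    // Build the "Expiry-Date" header value in the format described above.
    public static String formatExpiry(Date when) {
        return new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy Z", Locale.US)
                .format(when);
    }

    public static void main(String[] args) {
        // Expire one hour from now.
        String value = formatExpiry(new Date(System.currentTimeMillis() + 3600_000L));
        // With JavaMail, the header would then be attached like this
        // (the `message` object is hypothetical):
        //     message.addHeader("Expiry-Date", value);
        System.out.println(value);
    }
}
```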
The other answers are good, but I used a slightly different format. | I believe Outlook honors the, now deprecated, Expiry-Date header. You can add this to the MimeMessage headers. The format for the value is `"EEE, d MMM yyyy hh:mm:ss Z"` | How can I send an email from Java that will auto-expire in Outlook? | [
"",
"java",
"email",
"outlook",
""
] |
So I was playing around the other day just to see exactly how mass assignment works in JavaScript.
First I tried this example in the console:
```
a = b = {};
a.foo = 'bar';
console.log(b.foo);
```
The result was "bar" being logged to the console. That is fair enough; `a` and `b` are really just aliases to the same object. Then I thought, how could I make this example simpler?
```
a = b = 'foo';
a = 'bar';
console.log(b);
```
That is pretty much the same thing, isn't it? Well, this time it returns `foo`, not `bar` as I would expect from the behaviour of the first example.
Why does this happen?
**N.B.** This example could be simplified even more with the following code:
```
a = {};
b = a;
a.foo = 'bar';
console.log(b.foo);
a = 'foo';
b = a;
a = 'bar';
console.log(b);
```
(I suspect that JavaScript treats primitives such as strings and integers differently to hashes. Hashes return a pointer while "core" primitives return a copy of themselves) | In the first example, you are setting a property of an existing object. In the second example, you are assigning a brand new object.
```
a = b = {};
```
`a` and `b` are now pointers to the same object. So when you do:
```
a.foo = 'bar';
```
It sets `b.foo` as well since `a` and `b` point to the same object.
*However!*
If you do this instead:
```
a = 'bar';
```
you are saying that `a` points to a different object now. This has no effect on what `a` pointed to before.
In JavaScript, assigning a variable and assigning a property are 2 different operations. It's best to think of variables as pointers to objects, and when you assign directly to a variable, you are not modifying any objects, merely repointing your variable to a different object.
But assigning a property, like `a.foo`, will modify the object that `a` points to. This, of course, also modifies all other references that point to this object, simply because they all point to the same object. | Your question has already been satisfactorily answered by Squeegy - it has nothing to do with objects vs. primitives, but with reassignment of variables vs. setting properties on the same referenced object.
There seems to be a lot of confusion about JavaScript types in the answers and comments, so here's a small introduction to JavaScript's type system:
In JavaScript, there are two fundamentally different kinds of values: primitives and objects (and there is no thing like a 'hash').
Strings, numbers and booleans as well as `null` and `undefined` are primitives; objects are everything which can have properties. Even arrays and functions are regular objects and can therefore hold arbitrary properties. They just differ in the internal [[Class]] property (functions additionally have internal properties called [[Call]] and [[Construct]], but hey, those are details).
The reason that primitive values may behave like objects is because of autoboxing, but the primitives themselves can't hold any properties.
Here is an example:
```
var a = 'quux';
a.foo = 'bar';
document.writeln(a.foo);
```
This will output `undefined`: `a` holds a primitive value, which gets promoted to an object when assigning the property `foo`. But this new object is immediately discarded, so the value of `foo` is lost.
Think of it like this:
```
var a = 'quux';
new String(a).foo = 'bar'; // we never save this new object anywhere!
document.writeln(new String(a).foo); // a completely new object gets created
``` | How does variable assignment work in JavaScript? | [
"",
"javascript",
""
] |
I'm writing some code that handles logging xml data and I would like to be able to replace the content of certain elements (eg passwords) in the document. I'd rather not serialize and parse the document as my code will be handling a variety of schemas.
Sample input documents:
doc #1:
```
<user>
<userid>jsmith</userid>
<password>myPword</password>
</user>
```
doc #2:
```
<secinfo>
<ns:username>jsmith</ns:username>
<ns:password>myPword</ns:password>
</secinfo>
```
What I'd like my output to be:
output doc #1:
```
<user>
<userid>jsmith</userid>
<password>XXXXX</password>
</user>
```
output doc #2:
```
<secinfo>
<ns:username>jsmith</ns:username>
<ns:password>XXXXX</ns:password>
</secinfo>
```
Since the documents I'll be processing could have a variety of schemas, I was hoping to come up with a nice generic regular expression solution that could find elements with password in them and mask the content accordingly.
Can I solve this using regular expressions and C# or is there a more efficient way? | This problem is best solved with XSLT:
```
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@* | node()">
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="//password">
<xsl:copy>
<xsl:text>XXXXX</xsl:text>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
```
This will work for both inputs as long as you handle the namespaces properly.
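As a sketch of driving this transform from code (shown in Java using the JDK's built-in `TransformerFactory`, which handles XSLT 1.0; the C# analogue is `XslCompiledTransform`, and the stylesheet itself is the portable part). Note the unprefixed `password` pattern matches only elements in no namespace; the `ns:password` case needs the namespace declared and matched, as the clarification that follows discusses:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MaskPasswords {
    // The identity transform plus the password-masking template from above.
    private static final String XSLT =
        "<?xml version='1.0'?>"
      + "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:template match='@* | node()'><xsl:copy>"
      + "<xsl:apply-templates select='@* | node()'/></xsl:copy></xsl:template>"
      + "<xsl:template match='//password'><xsl:copy>"
      + "<xsl:text>XXXXX</xsl:text></xsl:copy></xsl:template>"
      + "</xsl:stylesheet>";

    public static String mask(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mask(
            "<user><userid>jsmith</userid><password>myPword</password></user>"));
    }
}
```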
### Edit : Clarification of what I mean by "handle namespaces properly"
Make sure the source document that has the `ns` name prefix has a namespace defined for the document, like so:
```
<?xml version="1.0" encoding="utf-8"?>
<secinfo xmlns:ns="urn:foo">
<ns:username>jsmith</ns:username>
<ns:password>XXXXX</ns:password>
</secinfo>
``` | I'd say you're better off parsing the content with a .NET XmlDocument object and finding password elements using XPath, then changing their innerXML properties. It has the advantage of being more correct (since XML isn't regular in the first place), and it's conceptually easy to understand. | Using C# Regular expression to replace XML element content | [
"",
"c#",
".net",
"xml",
"regex",
"parsing",
""
] |
I am modeling a system to evaluate expressions. Now, the operands in these expressions can be of one of several types, including some primitive .NET types. When defining my Expression class, I want some degree of type safety and therefore don't want to use 'object' for the operand type, so I am considering defining an abstract Operand base class with nothing in it and creating a subclass for each type of operand. What do you think of this?
Also, only some types of operands make sense with others. And finally, only some operators make sense with particular operands. I can't really think of a way to implement these rules at compile-time so I'm thinking I'll have to do these checks at runtime.
Any ideas on how I might be able to do this better? | How about `Expression` in 3.5? I recently wrote an expression parser/compiler using this. | I'm not sure if C-based languages have this; however, Java has several packages that would really make sense for this.
JavaCC, the Java Compiler Compiler, allows you to define a language (your expressions, for example) and then build the corresponding Java classes. A somewhat more user-friendly, if more experimental and academic, package is DemeterJ - it allows you to very easily specify the expression language and comes with a library for defining visitors and strategies to operate over the generated class structure. If you could afford to switch to Java I might try that. Otherwise I'd look for a C# clone of one of these technologies.
Another thing to consider if you go down this route is that once you've generated your class structure within some reasonable approximation of the end result, you can subclass all of the generated classes and build all of your application-specific logic into the subclasses. That way, if you really need to regenerate the model for the expression language, your logic will be relatively independent of your class hierarchy.
Update: Actually, it looks as though some of this stuff has been ported to .NET, though I haven't used it so I'm not sure what shape it may be in:
<http://www.ccs.neu.edu/home/lieber/inside-impl.html>
good luck! | Expression evaluation design questions | [
"",
"c#",
"oop",
""
] |
A few weeks ago I asked the question *"Is a PHP, Python, PostgreSQL design suitable for a non-web business application?"* [Is a PHP, Python, PostgreSQL design suitable for a business application?](https://stackoverflow.com/questions/439759/is-a-php-python-postgresql-design-suitable-for-a-business-application)
A lot of the answers recommended skipping the **PHP** piece and using **Django** to build the application. As I've explored Django, I've started to question one specific aspect of my goals and how Django comes into play for **a non-web business application**.
Based on my understanding, Django would manage both the view and controller pieces and **PostgreSQL** or **MySQL** would handle the data. But my goal was to clearly separate the layers so that the database, domain logic, and presentation could each be changed without significantly affecting the others. It seems like I'm only separating the M from the VC layers with the Django solution.
So, is it counterproductive for me to build the domain layer in **Python** with an **SQL Alchemy/Elixir ORM tool**, **PostgreSQL** for the database layer, and then still use **Django** or **PHP** for the presentation layer? Is this possible or pure insanity?
Basically, I'd be looking at an architecture of **Django/PHP > Python/SQLAlchemy > PostgreSQL/MySQL**.
*Edit: Before the fanboys get mad at me for asking a question about Django, just realize: It's a question, not an accusation. If I knew the answer or had my own opinion, I wouldn't have asked!* | You seem to be saying that choosing Django would prevent you from using a more heterogeneous solution later. This isn't the case. Django provides a number of interesting connections between the layers, and using Django for all the layers lets you take advantage of those connections. For example, using the Django ORM means that you get the great Django admin app almost for free.
You can choose to use a different ORM within Django, you just won't get the admin app (or generic views, for example) along with it. So a different ORM takes you a step backward from full Django top-to-bottom, but it isn't a step backward from other heterogeneous solutions, because those solutions didn't give you intra-layer goodness like the admin app in the first place.
Django shouldn't be criticized for not providing a flexible architecture: it's as flexible as any other solution, you just forgo some of the Django benefits if you choose to swap out a layer.
If you choose to start with Django, you can use the Django ORM now, and then later, if you need to switch, you can change over to SQLalchemy. That will be no more difficult than starting with SQLalchemy now and later moving to some other ORM solution.
You haven't said why you anticipate needing to swap out layers. It will be a painful process no matter what, because there is necessarily much code that relies on the behavior of whichever toolset and library you're currently using. | Django will happily let you use whatever libraries you want for whatever you want to use them for -- you want a different ORM, use it, you want a different template engine, use it, and so on -- but is designed to provide a common default stack used by many interoperable applications. In other words, if you swap out an ORM or a template system, you'll lose compatibility with a lot of applications, but the ability to take advantage of a large base of applications typically outweighs this.
In broader terms, however, I'd advise you to spend a bit more time reading up on architectural patterns for web applications, since you seem to have some major conceptual confusion going on. One might just as easily say that, for example, Rails doesn't have a "view" layer since you could use different file systems as the storage location for the view code (in other words: being able to change where and how the data is stored by your model layer doesn't mean you don't *have* a model layer).
(and it goes without saying that it's also important to know why "strict" or "pure" MVC is an absolutely *horrid* fit for web applications; MVC in its pure form is useful for applications with many independent ways to initiate interaction, like a word processor with lots of toolbars and input panes, but its benefits quickly start to disappear when you move to the web and have only one way -- an HTTP request -- to interact with the application. This is why there are no "true" MVC web frameworks; they all borrow certain ideas about separation of concerns, but none of them implement the pattern strictly) | Does Django development provide a truly flexible 3 layer architecture? | [
"",
"python",
"django",
"model-view-controller",
"orm",
""
] |
Following up on [this comment](https://stackoverflow.com/questions/452139/writing-firmware-assembly-or-high-level#452401) from the question [Writing firmware: assembly or high level?](https://stackoverflow.com/questions/452139/writing-firmware-assembly-or-high-level):
When compiling C++ code for the [Arduino](http://arduino.cc) platform, can you use virtual functions, exceptions, etc? Or would you want to (have to) use a subset of C++ (as described in [the comment](https://stackoverflow.com/questions/452139/writing-firmware-assembly-or-high-level#452401))?
Any other caveats when programming for the Arduino platform? | The Arduino environment uses the AVR version of the GCC toolchain. The code is compiled as C++, so you can use classes. Virtual functions are possible; the vtables will be stored in the .data section and have the correct addresses. In fact, the Print base class uses virtual functions to adapt the various "print" methods to the different output types.
Exceptions are not supported because of code space reasons. The Arduino environment passes "-fno-exceptions" to the compiler command line. See [the source](http://code.google.com/p/arduino/source/browse/trunk/app/Compiler.java#518) for verification of this.
Templates are supported. For example, [this no-cost stream insertion operator technique](http://arduiniana.org/libraries/streaming/) works fine using a simple template and inline operator. | The Arduino software uses avr-gcc to compile sketches. The following limitations were sourced from the [avrlibc FAQ (Can I use C++ on the AVR?)](http://www.nongnu.org/avr-libc/user-manual/FAQ.html#faq_cplusplus):
**Supported**
* Virtual functions
* Constructors and destructors (including global ones)
**Not supported**
* C++ standard functions, classes, and template classes (but see [this port of uClibc++ for Arduino](https://github.com/maniacbug/StandardCplusplus))
* operators `new` and `delete` (attempting to use them will cause the linker to complain about undefined external references). This means that objects can only be created on the stack. If dynamic memory allocation is required it must be implemented using `malloc()` and `free()` of `C` types
* Exceptions. Since exceptions are enabled by default in the C++ frontend, they explicitly need to be turned off using `-fno-exceptions` in the compiler options. This is done automatically by the Arduino IDE when it launches `avr-gcc`
**Other issues**
* Some of the supplied include files are not C++ safe, i.e. they need to be wrapped with
`extern "C" { . . . }` | Arduino C++ code: can you use virtual functions and exceptions? | [
"",
"c++",
"arduino",
"virtual-functions",
"firmware",
""
] |
I have been wondering how to put all the stored procedures on a SQL Server 2000 database under source control.
We are using Subversion for all our normal source code, so it would be great if there were a solution to the problem using Subversion.
Do you have any ideas?
Update 16-02-2009: This is the vbs script i used to export all the stored procedures:
```
Set con = CreateObject("ADODB.Connection")
con.ConnectionString = "*** Database connection string here ***"
con.Open
Set rs = CreateObject("ADODB.RecordSet")
rs.ActiveConnection = con
strSQL = "SELECT ROUTINE_NAME, ROUTINE_DEFINITION " & _
"FROM INFORMATION_SCHEMA.routines " & _
"WHERE ROUTINE_NAME NOT LIKE 'dt_%' " & _
"ORDER BY 1"
Set fso = CreateObject("Scripting.FileSystemObject")
rs.Open strSQL
While Not rs.Eof
filename = rs("ROUTINE_NAME") & ".sql"
routineSQL = rs("ROUTINE_DEFINITION")
Set tf = fso.CreateTextFile(filename, True)
tf.Write routineSQL
tf.Close
set tf = Nothing
rs.MoveNext
Wend
Set fso = Nothing
rs.Close
Set rs = Nothing
``` | As other people have said, start off with each stored proc in a separate text file that is under source control. Write a script that deletes all your stored procedures and then re-creates them from the text files (while logging/reporting any errors) - this script should be easy to run. Then every time you update from source control, rerun the script. All edits to stored procedures should be made to the text files, not the "live" copies in your local database, otherwise you will lose changes when you do an update.
You will soon want some way of auditing your database schema, creating upgrade scripts, etc.
If you are only using SQL Server then consider [SQL Compare](http://www.red-gate.com/products/SQL_Compare/index.htm) from [Red-Gate](http://www.red-gate.com/). I think it will compare stored procs (and other SQL) in a text file with what is in your database and sync the two, letting you use the editing tools in SQL Server to edit the live stored procedures.
(As of the end of 2009, Red-Gate is just about to ship [Sql Compare for Oracle](http://www.red-gate.com/products/schema_compare_for_oracle/index.htm))
I have been told that ApexSQL's [Diff](http://www.apexsql.com/sql_tools_diff.asp) tool is another option instead of Sql Compare, ApexSQL's [Edit](http://www.apexsql.com/sql_tools_edit.asp) claims to provide source control integration.
At the high end, consider Visual Studio Team System Database Edition; however, it costs a lot, and you may have to pay even more for Oracle support from a 3rd party. But if you are a Microsoft partner (or can become one) you may get some copies very cheaply.
[See also Do you source control your databases?](https://stackoverflow.com/questions/115369) on StackOverflow for a good set of answers on the bigger problem. | Usually you track the changes to SQL scripts in source control.
For example, you have a checkin for your base schema for your database.
Then you keep adding new SQL files for changes to your schema. That way you can deploy to an exact version for testing purposes. Then you can use build automation to automatically test some of your scripts by executing them against test databases with actual data in them.
There are lots of database diff tools around that can help you work out what's changed between versions. | Source Control and stored procedures | [
"",
"sql",
"sql-server",
"version-control",
"stored-procedures",
""
] |
I read the Java tutorials on Sun for JAR files, but I still can't find a solution for my problem. I need to use a class from a jar file called jtwitter.jar, I downloaded the file, and tried executing it (I found out yesterday that .jar files can be executed by double clicking on them) and Vista gave me an error saying "Failed to load Main-Class Manifest attribute from [path]/jtwitter.jar".
The guy who coded the .jar file wants me to import it, but where do I store the .jar file to import it in my code? I tried putting both the .jar file and my .java file in the same directory, didn't work.
The file I'm trying to work with is here: <http://www.winterwell.com/software/jtwitter.php>
I'm using JCreator LE. | Not every jar file is executable.
Now, you need to import the classes that are in the jar into your Java file. For example,
```
import org.xml.sax.SAXException;
```
If you are working on an IDE, then you should refer its documentation. Or at least specify which one you are using here in this thread. It would definitely enable us to help you further.
And if you are not using any IDE, then please look at the [javac -cp](http://docs.oracle.com/javase/8/docs/technotes/tools/windows/javac.html) option. However, it's a much better idea to package your program in a `jar` file, and include all the required `jar`s within that. Then, in order to execute your `jar`, like,
```
java -jar my_program.jar
```
you should have a `META-INF/MANIFEST.MF` file in your `jar`. See [here](https://stackoverflow.com/questions/2848642/how-to-setup-main-class-in-manifest-file-in-jar-produced-by-netbeans-project), for how-to. | Let's say we need to use the class `Classname` that is contained in the jar file `org.example.jar`
And your source is in the file `mysource.java` Like this:
```
import org.example.Classname;
public class mysource {
public static void main(String[] argv) {
......
}
}
```
First, as you see, in your code you have to import the classes. To do that you need `import org.example.Classname;`
Second, when you compile the source, you have to reference the jar file.
Please note the difference in using `:` and `;` while compiling
* If you are under a unix like operating system:
```
javac -cp '.:org.example.jar' mysource.java
```
* If you are under windows:
```
javac -cp .;org.example.jar mysource.java
```
After this, you obtain the bytecode file `mysource.class`
Now you can run this :
* If you are under a unix like operating system:
```
java -cp '.:org.example.jar' mysource
```
* If you are under windows:
```
java -cp .;org.example.jar mysource
``` | How to use classes from .jar files? | [
"",
"java",
"jar",
""
] |
I'm working on an interface to allow our clients to update their DNS on their own.
I have 2 questions:
1. What constitutes a valid host and target record? (A, CNAME, MX, TXT) i.e. if the user enters ........ for the host and target, the DNS server won't like that.
2. Is there a regex I can use to sanitize user input?
BTW, it is BIND9 DNS and a C# web app.
Thanks,
Kyle | Domain name *labels* can technically contain any octet value, but *usually* they only contain alphanumerics and the hyphen and underscore characters.
This comes from recommendations in section 2.3.1 of [RFC 1035](http://www.ietf.org/rfc/rfc1035.txt):
> The labels must follow the rules for
> ARPANET host names. They must start
> with a letter, end with a letter or
> digit, and have as interior characters
> only letters, digits, and hyphen.
> There are also some restrictions on
> the length. Labels must be 63
> characters or less.
The underscore character is a more recent addition, typically used in the label portion of `SRV` records.
You could also permit the "`.`" character if you're going to let users create their own subdomains.
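A hedged sketch of a validation routine for those label rules (shown in Java; the pattern string works unchanged with .NET's `Regex` for the C# app). Note it enforces the strict letter-first rule quoted from RFC 1035, which rejects names such as `3com` that later practice (RFC 1123) permits:

```java
import java.util.regex.Pattern;

public class DnsLabel {
    // RFC 1035 preferred syntax: start with a letter, end with a letter or
    // digit, interior characters limited to letters, digits, and hyphens,
    // 63 characters maximum ({0,61} interior chars + first + last = 63).
    private static final Pattern LABEL =
            Pattern.compile("[A-Za-z](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?");

    public static boolean isValidLabel(String s) {
        return LABEL.matcher(s).matches();
    }
}
```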
The *values* that are possible are:
* `A` record - must be a dotted-quad IP address
* `CNAME` record - must be some other legal label
* `MX` record - 16-bit integer priority field, and a legal hostname. NB: some people put in labels which themselves point only to a `CNAME` record. This is frowned upon.
* `TXT` record - anything you like!
Note that in every case, if you do allow any of the characters not in the normal set they would need to be escaped if they're being stored in a BIND format zone file. | The answer used to be easy, but not anymore.
You can use almost any Unicode character, but names should go through a normalization and encoding process.
See RFC 3490 (IDNA), RFC 3454 (Stringprep), RFC 3491 (Nameprep), RFC 3492 (Punycode)
Or go with Wikipedia for the big picture (<http://en.wikipedia.org/wiki/Internationalized_domain_name>). | What are valid characters for a DNS Zone file and how can I sanitize user input? | [
"",
"c#",
"dns",
"bind",
""
] |
Every time I use Setup & Deployment to create a new Web Setup and run it (after editing all the nice things in the properties), the output is always a copy of the Web Site project...
How can I output a precompiled version of the Web Site project?
What I did was publish the Web Site (so I get the precompiled version), add this new precompiled web site as an existing Web site to my solution, and add it to the content output of the Setup...
well, the idea was good but I get an error saying:
```
"This application is already precompiled."
```
![alt text](http://www.balexandre.com/temp/stackoverflow_precompiledquestion.png)
:-(
Bottom line is that I just want a Setup file that gives me the precompiled version of my Web project, how can I accomplish this? | I see you tried the standard Web Setup project from VS.
Scott Gu's blog post takes you to this page:
[Visual Studio 2008 Web Deployment Projects](http://www.microsoft.com/downloads/details.aspx?FamilyID=0aa30ae8-c73b-4bdd-bb1b-fe697256c459&DisplayLang=en)
which is a plugin for Visual Studio that adds a "right click" option to any Web Site project for creating such a deployment project. You can see here what I created, and the output is a pre-compiled web application. Now, if you add a regular Web Setup project to your solution and point it to the previously created Web Deployment project as its content, you get a valid build with no errors, and an MSI file is created with the DLLs inside it.
<http://img222.imageshack.us/img222/6177/71881923mj9.jpg>
I hope this helps you. | I was searching for this solution from google for long days. What i did is i precompiled my website to one folder and added that folder as a new website to my
solution(While adding it will give a warning message that it was already precompiled content. No probs). Now add this project output to the my setup project and one more important thing is in my web deployment project i just disabled Building of my precompiled
website.
I just got my thinks working. If you want you can try that.
Regards,
Rousseau.A | pre compile website in Setup & Deployment | [
"",
"c#",
"asp.net",
"visual-studio-2008",
"setup-deployment",
""
] |
I often find I want to write code something like this in C#, but I am uncomfortable with the identifier names:
```
public class Car
{
private Engine engine;
public Engine Engine
{
get
{
return engine;
}
set
{
engine = value;
}
}
public Car(Engine engine)
{
this.engine = engine;
}
}
```
Here we have four different things called "engine":
* `Engine` the class. Engine seems like a good, natural name.
* `Engine` the public property. Seems silly to call it MyEngine or TheCarsEngine.
* `engine` the private field backing the property. Some naming schemes will recommend `m_engine` or `_engine`, but others say that all prefixes should be avoided.
* `engine` the parameter name on the constructor. I've seen naming schemes that recommend prefixing an underscore on all parameters, e.g., `_engine`. I really dislike this, since the parameter is visible to callers via Intellisense.
The particular things I don't like about the code as written are that:
* If you change the parameter name in the constructor but miss a use of it in the constructor body, you get a subtle bug that the compiler probably won't be able to spot.
* Intellisense has a bad habit of autocompleting the wrong thing for you, and sometimes you won't notice it's changed the case. You will again get a subtle bug if the constructor body accidentally ends up `this.engine = Engine;`
It seems that each name is appropriate in isolation, but together they are bad. Something has to yield, but what? I prefer to change the private field, since it's not visible to users, so I'll usually end up with `m_engine`, which solves some problems, but introduces a prefix and doesn't stop Intellisense from changing `engine` to `Engine`.
How would you rename these four items? Why?
(Note: I realise the property in this example could be an automatic property. I just didn't want to make the example overcomplicated.)
See also:
[Am I immoral for using a variable name that differs from its type only by case?](https://stackoverflow.com/questions/461231/am-i-immoral-for-using-a-variable-name-that-differs-from-its-type-only-by-case) | In this case, I would name them exactly as they are in the example.
This is because the naming is clear as to what data each element holds and/or will be used for.
The only thing I would change for C#3 is to use an auto-property which would remove the local variable. | For private members I always prefix with an underscore:
```
private Engine engine;
```
becomes:
```
private Engine _engine;
```
Whenever I see `m_`, it makes my stomach churn. | How would you name these related Property, Class, Parameter and Field in .NET? | [
"",
"c#",
".net",
""
] |
My compiler (VC++ 6.0 sp6) has apparently gone insane. In certain pieces of code I'm seeing that '`bool mybool = true;`' evalutes to and assigns false, and vice versa for true. Changing the true/false keywords to 1/0 makes it work fine. The same code compiles elsewhere fine without changing the true/false keywords.
What could possibly cause this? My first thought was RAM or disk corruption, but that all checked out fine. I'm not far from reformatting my drive and reinstalling everything, but I'm terrified I'd still see the same misbehavior.
Is it even technically possible for a macro or linked-in library somewhere to screw up the meaning of '`true`' and '`false`'?
UPDATE:
Mystery solved. An environment variable flag on my machine was set to 'false' and the way this was interpolated by some preprocessor code redefined the keyword. | A preprocessor macro could certainly do it, although that would be pretty surprising. One way to check if that is the case would be
```
#ifdef true
# error "true is defined as a macro"
#endif
#ifdef false
# error "false is defined as a macro"
#endif
```
Response to comments:
Find a non-header file where you see this behavior, preferably one with few #includes.
In the middle of the list of includes, put the #ifdef #error directives.
If the error trips, you know it's in the first half of the includes; if it doesn't, it's in the second half. Split that half in half and repeat. When you narrow it down to one header, open that header. If that header includes any headers, repeat the process for the list of headers it includes. Eventually you should be able to find the #defines. Tedious, I agree. | Buffer overflows and writing into uninitialized memory can also account for such behavior. For example, if you have an array and bool allocated in adjacent memory locations and accidentally write beyond the bounds of the array.
"",
"c++",
"visual-c++",
""
] |
I am getting a string hash like this:
```
string content = "a very long string";
int contentHash = content.GetHashCode();
```
I am then storing the hash into a dictionary as key mapping to another ID. This is useful so I don't have to compare [big strings](http://dotnetperls.com/Content/Dictionary-String-Key.aspx) during default dictionary hash computation but I can just fish the ID from the dictionary by key.
Can I be sure that the hash for a given string ("a very long string") will be always the same?
Can I be sure that two different strings won't have the same hash?
Also, if possible, how likely is it to get the same hash for different strings? | Just to add some detail as to where the idea of a changing hashcode may have come from.
As the other answers have rightly said, the hashcode for a specific string will always be the same for a specific runtime version. There is no guarantee, however, that a newer runtime won't use a different algorithm, perhaps for performance reasons.
The String class overrides the default GetHashCode implementation in object.
The default implementation for a reference type in .NET is to allocate a sequential ID (held internally by .NET) and assign it to the object (the object's heap storage has a slot for storing this hashcode; it is only assigned on the first call to GetHashCode for that object).
Hence creating an instance of a class, assigning it some values then retrieving the hashcode, followed by doing the exact same sequence with the same set of values, will yield different hashcodes. This may be the reason why some have been led to believe that hashcodes can change. In fact, though, it's the instance of a class which is allocated a hashcode; once allocated, that hashcode does not change for that instance.
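As a language-agnostic illustration of both points (sketched here in Python rather than C#; the hash function differs but the behavior is analogous, and the 8-bucket reduction is a made-up toy, not any real dictionary's layout):

```python
# Equal values always produce equal hashes within a single run, but once
# hashes are reduced to a finite range, distinct values must eventually
# collide (pigeonhole principle).

def bucket(s, size=8):
    """Reduce a string's hash to a small bucket index, as a hash table would."""
    return hash(s) % size

a = "a very long string"
b = "a very " + "long string"   # equal value, built differently

assert hash(a) == hash(b)       # equal values agree within one run

# 20 distinct keys cannot occupy more than 8 distinct buckets:
assert len({bucket("key%d" % i) for i in range(20)}) <= 8
```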
**Edit**: I've just noticed that none of the answers directly reference each of your questions (although I think the answer to them is clear), so just to tidy up:
> Can I be sure that the hash for a given string ("a very long string") will be always the same?
In your usage, yes.
> Can I be sure that two different strings won't have the same hash?
No. Two different strings may have the same hash.
> Also, if possible, how likely is it to get the same hash for different strings?
The probability is quite low; the resulting hash is fairly uniformly distributed over a 32-bit (roughly four billion value) domain. | Yes, it will be consistent since strings are immutable. However, I think you're misusing the dictionary. You should let the dictionary take the hash of the string for you by using the string as the key. Hashes are not guaranteed to be unique, so you may overwrite one key with another.
"",
"c#",
".net",
"string",
"hash",
""
] |
This might be a stupid question, but I can't for the life of me figure out how to select the row of a given index in a QListView.
QAbstractItemView, QListView's parent, has a setCurrentIndex(const QModelIndex &index). The problem is, I can't construct a QModelIndex with the row number I want, since the row and column fields of the QModelIndex have no mutators.
QTableView, which also inherits from QAbstractItemView has a selectRow(int row) function, why in the seven hells doesn't the QListView have this?
Good ol' windows forms has the SelectedIndex property on its listviews. | You construct the QModelIndex by using the createIndex(int row, int column) function of the model you gave to the view. QModelIndexes should only be used once, and must be created by the factory in the model. | [This](http://doc.trolltech.com/4.4/model-view-selection.html) should help you get started
```
QModelIndex index = model->index( row, column );
if ( index.isValid() )
    view->selectionModel()->select( index, QItemSelectionModel::Select ); // the view owns the selection model
``` | Selecting an index in a QListView | [
"",
"c++",
"user-interface",
"qt",
""
] |
This is a follow up to a question I just posted.
I'm wondering how you all handle member variables in JavaScript classes when using MyClass.prototype to define methods.
If you define all of the methods in the constructor function:
```
function MyClass(){
this.myMethod = function(){}
}
```
You can very nicely declare member variables and access them from inside your methods:
```
function MyClass(){
var myVar = "hello";
this.myMethod = function(){
alert(myVar);
}
}
```
When using the Object.prototype technique, you lose this nicety, and have to do it like this:
```
function MyClass(){}
MyClass.prototype.myVar = "hello";
MyClass.prototype.myMethod = function(){alert(this.myVar)};
```
I'm not crazy about having to write "this" every time I access a member variable. I want to use the Object.prototype approach for memory and flexibility reasons, but it seems a lot clumsier syntax-wise. Is this how you folks generally work?
thanks,
-Morgan | You should get over your aversion to using the `this` pointer to access member variables.
Assign member variables in the constructor, and you can access them with prototype methods:
```
function Cat(){
this.legs = 4;
this.temperament = 'Apathetic';
this.sound = 'Meow';
}
Cat.prototype.speak = function(){alert(this.sound)}
var cat = new Cat();
cat.speak();
```
Yes those object attributes are public but, as Guido would say, we're all adults here. Javascript is, after all, a plain-text, loosely-typed, interpreted language. The benefits of "private" variables in this environment are shaky at best.
I say just be explicit and obvious about how your object should be accessed, and violators will stray from that at their own risk. | The visibility of object attributes varies according to how you declare them
```
function Cat( name ) {
//private variable unique to each instance of Cat
var privateName = 'Cat_'+Math.floor( Math.random() * 100 );
//public variable unique to each instance of Cat
this.givenName = name;
//this method has access to private variables
this.sayPrivateName = function() {
alert( privateName );
}
}
//this variable is shared by all cats
Cat.prototype.generalName = 'tiddles';
//this method is shared by all cats and has no access to private vars
Cat.prototype.sayname = function( type ) {
alert( this[type+'Name'] || 'private!' );
}
var vic = new Cat('Victor');
var ellers = new Cat('Elmore');
vic.sayname('general'); //tiddles
vic.sayname('given'); //Victor
vic.sayname('private'); //private - no access
vic.sayPrivateName(); //cat will say its name
ellers.sayname('general'); //tiddles
ellers.sayname('given'); //Elmore
ellers.sayname('private'); //private - no access
ellers.sayPrivateName(); //cat will say its name
``` | best approach to member variables in object-oriented javascript? | [
"",
"javascript",
""
] |
I frequently use HTML output in applications, up to now I've used some simple routines to build an HTML string. I want to try something different now:
I want to serialize the data to XML, and provide some XSLT templates to generate the HTML for the webbrowser control. I get that to work statically (slowly digging through writing XSL), but I'd need to change some attributes of the XSL document on the fly: filters, paths etc. to follow the current UI selection.
The data is small enough that the XML serialization on change should not be a problem. I'd like to use "static" external XSL files (so that the user can customize the output to a certain extent), but adjust them on the fly.
My questions:
Is this idea technically sound?
Can I feed a web browser control dynamically with XML and an XSL document?
Can I modify selected path elements and attributes in the XSL on the fly (i.e. without generating a new file)?
How to do that in a web browser control? (some pointers to get me started would be nice..)
[edit]Thanks folks, it's working :D[/edit] | Usually the way to go is to provide parameters to the transform at runtime and to write the transform so that its behavior is controlled by the parameters.
Usually when I do this, I only pass in one parameter - an XML document - and make the templates query it to determine what they should do. So you'll see stuff like:
```
<xsl:template match="*">
  <xsl:variable name="cfg" select="$config/*[name() = name(current())]"/>
<xsl:choose>
     <xsl:when test="$cfg/elementType = 'option'">
        <xsl:apply-templates select="." mode="option"/>
     </xsl:when>
     <xsl:when test="$cfg/elementType = 'optgroup'">
        <xsl:apply-templates select="." mode="optgroup"/>
     </xsl:when>
     <xsl:when test="$cfg/elementType = 'a'">
        <xsl:apply-templates select="." mode="a"/>
     </xsl:when>
```
and so on.
It's very straightforward to feed a WebBrowser control dynamically with XML/XSLT:
```
StringBuilder output = new StringBuilder();
using (XmlWriter xw = XmlWriter.Create(new StringWriter(output)))
{
    XsltArgumentList args = new XsltArgumentList();
    args.AddParam("config", "", myConfigXml);
myXslt.Transform(myXml, args, xw);
xw.Flush();
myWebBrowser.DocumentText = output.ToString();
}
```
If the UI that the user is updating is in the WebBrowser itself (that is, the HTML page contains HTML UI controls), you should be using dynamic HTML techniques, the same way you would if the page was being displayed in a normal browser. That's a whole different bag of bananas. | [`XslCompiledTransform`](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xslcompiledtransform.aspx) supports [parameters](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xsltargumentlist.addparam.aspx), and also [extension objects](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xsltargumentlist.addextensionobject.aspx) (both via [`XsltArgumentList`.](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xsltargumentlist.aspx) For anything simple, try to use a parameter; extension objects allow much richer functionality (up to your imagination), but are not as portable to other xslt vendors. A third option is an external file for options, loaded into a variable with [`xsl:document`](http://www.w3schools.com/Xsl/func_document.asp).
Of course, if you are feeling brave, you can use xslt to write an xslt dynamically - not trivial, though.
In most non-trivial cases, it is simplest to use `WebBrowser` against a flat file (in the %tmp% area), or against a local web-server (such as `HttpListener`); changing the HTML directly tends to leave the control slightly confused about the effective security context. | "dynamic" XSLT to feed webbrowser control? | [
"",
"c#",
"xml",
"xslt",
"webbrowser-control",
""
] |
I'm writing this question with reference to [this one](https://stackoverflow.com/questions/475888/will-this-lead-to-a-memory-leak-in-c) which I wrote yesterday. After a little documentation, it seems clear to me that what I wanted to do (and what I believed to be possible) is nearly impossible if not impossible at all. There are several ways to implement it, and since I'm not an experienced programmer, I ask you which choice would you take. I explain again my problem, but now I have some solutions to explore.
**What I need**
I have a Matrix class, and I want to implement multiplication between matrices so that the class usage is very intuitive:
```
Matrix a(5,2);
a(4,1) = 6 ;
a(3,1) = 9.4 ;
... // And so on ...
Matrix b(2,9);
b(0,2) = 3;
... // And so on ...
// After a while
Matrix i = a * b;
```
**What I had yesterday**
At the moment I overloaded the two operators `operator*` and `operator=`, and until yesterday night they were defined in this way:
```
Matrix& operator*(Matrix& m);
Matrix& operator=(Matrix& m);
```
The operator\* instantiates a new Matrix object on the heap (`Matrix* result = new Matrix(...)`), sets the values and then just:
```
return *result;
```
**What I have today**
After the [discussion](https://stackoverflow.com/questions/475888/will-this-lead-to-a-memory-leak-in-c) I decided to implement it in a "different way" to avoid bothering the user with pointers of any type and to keep the usage unchanged. The "different way" is to return the result of operator\* by value:
```
Matrix operator*(Matrix& m);
Matrix& operator=(Matrix& m);
```
The operator\* instantiates the result on the stack, sets the values and then returns the object.
There is a problem with this approach: it doesn't work. The operator= expects a Matrix& and operator\* returns a Matrix. Moreover this approach doesn't look so good to me for another reason: I'm dealing with matrices, that can be very large and the aims of this library were to be 1) good enough for my project 2) fast, so probably passing by value should not be an option.
**Which solutions I have explored**
Well, following the suggestions in the previous [discussion](https://stackoverflow.com/questions/475888/will-this-lead-to-a-memory-leak-in-c) I read some stuff about smart pointers; they look great, but I still can't figure out how to solve my problem with them. They deal with memory freeing and pointer copying, but I'm basically using references, so they don't look like the right choice for me. But I may be wrong.
Maybe the only solution is to pass by value, maybe I can't get both efficiency and a good interface. But again, you're the expert, and I would like to know your opinion. | The problem you are having is that the expression `a * b` creates a *temporary* object, and in C++, a temporary is not allowed to bind to a non-constant reference, which is what your `Matrix& operator=(Matrix& m)` takes. If you change it to:
```
Matrix& operator=(Matrix const& m);
```
The code should now compile. As well as the obvious benefit of producing compilable code :), adding the `const` also communicates to your callers that you will not be modifying the argument `m`, which may be helpful information.
You should also do the same for your `operator*()`:
```
Matrix operator*(Matrix const& m) const;
```
**[EDIT:** The additional `const` at the end indicates that the method promises not to alter `*this`, the object on the *left-hand side* of the multiplication, either. This is necessary to cope with expressions such as `a * b * c` -- the subexpression `a * b` creates a temporary and won't bind without the `const` at the end. Thanks to Greg Rogers for pointing this out in the comments.**]**
P.S. The reason why C++ does not allow a temporary to bind to a non-constant reference is because temporaries exist (as the name suggests) for only a very short time, and in most cases, it would be a mistake to attempt to modify them. | You should really read [Effective C++](https://rads.stackoverflow.com/amzn/click/com/0321334876) by Scott Meyers, it has great topics on that.
As already said, the best signatures for `operator=` and `operator*` are
```
Matrix& operator=(Matrix const& m);
Matrix operator*(Matrix const& m) const;
```
but I have to say you should implement multiplication code in
```
Matrix& operator*=(Matrix const& m);
```
and just reuse it in `operator*`
```
Matrix operator*(Matrix const &m) const {
return Matrix(*this) *= m;
}
```
that way the user can multiply without creating new matrices when she wants to.
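For illustration, the same "implement `*=` once and reuse it in `*`" idiom, sketched in Python with a toy 2x2 matrix (names are illustrative only, not the asker's class):

```python
class Matrix:
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]   # plays the copy constructor

    def __imul__(self, other):                # the analogue of operator*=
        a, b = self.rows, other.rows
        self.rows = [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                      for j in range(len(b[0]))] for i in range(len(a))]
        return self

    def __mul__(self, other):                 # operator* reuses operator*=
        result = Matrix(self.rows)            # copy *this
        result *= other
        return result

i = Matrix([[1, 0], [0, 1]])
m = Matrix([[2, 3], [4, 5]])
assert (i * m).rows == [[2, 3], [4, 5]]
assert m.rows == [[2, 3], [4, 5]]             # operands are untouched
```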
Of course for this code to work you also should have copy constructor :) | Coding practice: return by value or by reference in Matrix multiplication? | [
"",
"c++",
"reference",
"scope",
"return",
""
] |
I am starting a project from scratch using Intersystems Cache. I would like to setup a Continuous Integration Server for the project. Cache has unit test libraries, so the idea is to import source into a test database, build the source, run unit tests in the cache terminal, based on changes in the version control system (ClearCase).
Apart from Cache Objectscript, there will definitely be some java code that needs to be built as well. Other technologies could be added later. So I need a Continuous Integration tool that is not bound to one specific technology and that is easily extendible. I have used CruiseControl for building java solutions in the past, but that has been quite some time ago and I am wondering if no better solution is available since.
What is the best (and hopefully free) Continuous Integration product, that is easiest to extend for different technologies? | I'd recommend looking at [Hudson](http://hudson-ci.org/). It's insanely easy to try out as it is delivered as an executable jar. It also supports [plugins](http://hudson.gotdns.com/wiki/display/HUDSON/Extend+Hudson) so it may be better suited to extension and customization. There are also a good deal of very handy plugins for Hudson [already out there](http://hudson.gotdns.com/wiki/display/HUDSON/Plugins). Its ClearCase support comes via a plugin. There's even a plugin to start and stop VMWare virtual machines from within your build process which may be of interest depending on how you're planning on handling your database server "needs." | I have built a makeshift Continuous Integration Server in the following screencast: <http://www.ensemblisms.com/episodes/2> | Continuous Integration for Intersystems Cache solutions | [
"",
"java",
"continuous-integration",
"intersystems-cache",
"intersystems",
""
] |
I just wonder if it is possible to send Meeting Requests to people without having Outlook installed on the Server and using COM Interop (which I want to avoid on a server at all costs).
We have Exchange 2003 in a Windows 2003 Domain and all users are domain Users. I guess I can send 'round iCal/vCal or something, but I wonder if there is a proper standard way to send Meeting Requests through Exchange without Outlook?
This is C#/.net if it matters. | The way to send a meeting request to Outlook (and have it recognized) goes like this:
* prepare an iCalendar file, be sure to set these additional properties, as Outlook needs them:
+ [`UID`](http://www.kanzaki.com/docs/ical/uid.html)
+ [`SEQUENCE`](http://www.kanzaki.com/docs/ical/sequence.html)
+ [`CREATED`](http://www.kanzaki.com/docs/ical/created.html)
+ [`LAST-MODIFIED`](http://www.kanzaki.com/docs/ical/lastModified.html)
+ [`DTSTAMP`](http://www.kanzaki.com/docs/ical/dtstamp.html)
* prepare a `multipart/alternative` mail:
+ Part 1: `text/html` (or whatever you like) - this is displayed to "ordinary" mail readers or as a fall-back and contains a summary of the event in human readable form
+ Part 2: `text/calendar; method=REQUEST`, holds the contents of the ics file (the header `method` parameter must match the method in the ics). Watch out for the correct text encoding, declaring a `charset` header parameter won't hurt.
+ Part 3: Optionally, attach the .ics file itself, so ordinary mail readers can offer the user something to click on. Outlook does not really require the attachment because it just reads the `text/calendar` part.
* Send the mail to an outlook user. If you got everything right the mail shows up as a meeting request, complete with attendance buttons and automatic entry in the user's calendar upon accept.
* Set up something that processes the responses (they go to the meeting organizer). I have not yet been able to get automatic attendee tracking to work with an Exchange mailbox because the event won't exist in the organizer's calendar. Outlook needs the UIDs and SEQUENCES to match its expectations, but with a UID you made up this will hardly work.
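For illustration only, the multipart layout described above can be sketched with Python's standard `email` package (every field value below is a made-up example, not taken from a real Exchange setup):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# The iCalendar body, including the properties Outlook needs.
ics = "\r\n".join([
    "BEGIN:VCALENDAR", "VERSION:2.0", "METHOD:REQUEST",
    "BEGIN:VEVENT",
    "UID:example-1234@example.com",
    "SEQUENCE:0",
    "DTSTAMP:20090101T120000Z",
    "DTSTART:20090102T090000Z",
    "DTEND:20090102T100000Z",
    "SUMMARY:Project kickoff",
    "END:VEVENT", "END:VCALENDAR",
])

msg = MIMEMultipart("alternative")
msg["Subject"] = "Meeting request"
msg.attach(MIMEText("You are invited to the kickoff.", "html"))  # Part 1

cal_part = MIMEText(ics, "calendar")                             # Part 2
cal_part.set_param("method", "REQUEST")  # must match the METHOD in the ics
msg.attach(cal_part)

assert msg.get_content_type() == "multipart/alternative"
```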
For help on the details and peculiarities of the ics file format, be sure to visit the [iCalendar Specification Excerpts by Masahide Kanzaki](http://www.kanzaki.com/docs/ical/). They are a light in the dark, much better than gnawing your way through [RFC 2445](https://www.rfc-editor.org/rfc/rfc2445). But then again, maybe a handy library exists for .NET. | See the DDay.iCal C# library on sourceforge:
<http://sourceforge.net/projects/dday-ical/>
Then read this codeproject article:
<http://www.codeproject.com/Articles/17980/Adding-iCalendar-Support-to-Your-Program-Part-1>
And read this:
[Export event with C# to iCalendar and vCalendar format](https://stackoverflow.com/questions/5461681/export-event-with-c-sharp-to-icalendar-and-vcalendar-format) | Sending Outlook meeting requests without Outlook? | [
"",
"c#",
".net",
"outlook",
""
] |
I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store in a static way the received XML and then write some tests on those static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like: not being able to access the database layer in order to build/rebuild the schema, when is the test client going to connect to the server: per each unit test or before running the test suite? | **"My question is: Is this a correct approach?"**
It's what you chose. You made a lot of excuses, so I'm assuming that you're pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). "correct" doesn't enter into it anymore, so there's no answer to this question.
**"what kind of tests are covered with this approach?"**
They call it "black-box" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of it's internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior.
If you have problems, it turns out to be useless for doing diagnostic work. You'll find that you need to also do white-box testing on the internal structures.
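A minimal sketch of such a black-box test in Python (the tiny server thread below merely stands in for the real application server, and the names and toy XML protocol are assumptions, not from the original system):

```python
import socket
import socketserver
import threading

class FakeAppServer(socketserver.BaseRequestHandler):
    """Stand-in for the real application server under test."""
    def handle(self):
        data = self.request.recv(4096)
        if b"ping" in data:
            self.request.sendall(b"<response status='ok'/>")
        else:
            self.request.sendall(b"<response status='error'/>")

def start_server():
    # Port 0 lets the OS pick a free port; serve in a daemon thread.
    server = socketserver.TCPServer(("127.0.0.1", 0), FakeAppServer)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def black_box_request(port, payload):
    """Talk to the server exactly as an external client would."""
    with socket.create_connection(("127.0.0.1", port), timeout=5) as conn:
        conn.sendall(payload)
        return conn.recv(4096)

server = start_server()
reply = black_box_request(server.server_address[1], b"<request type='ping'/>")
assert reply == b"<response status='ok'/>"
server.shutdown()
```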
**"not being able to access the database layer in order to build/rebuild the schema,"**
Why not? This is Python. Write a separate tool that imports that layer and does database builds.
**"when will the test client going to connect to the server: per each unit test or before running the test suite?"**
Depends on the intent of the test. Depends on your use cases. What happens in the "real world" with your actual intended clients?
You'll want to test client-like behavior, making connections the way clients make connections.
Also, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected. | You should use Trial. It really isn't very hard. Trial's documentation could stand to be improved, but if you know how to use the standard library unit test, the only difference is that instead of writing
```
import unittest
```
you should write
```
from twisted.trial import unittest
```
... and then you can return Deferreds from your `test_` methods. Pretty much everything else is the same.
The one other difference is that instead of building a giant test object at the bottom of your module and then running
```
python your/test_module.py
```
you can simply define your test cases and then run
```
trial your.test_module
```
If you don't care about reactor integration at all, in fact, you can just run `trial` on a set of existing Python unit tests. Trial supports the standard library '`unittest`' module. | unit testing for an application server | [
"",
"python",
"unit-testing",
"twisted",
""
] |
Let's say I have a database column 'grade' like this:
```
|grade|
| 1|
| 2|
| 1|
| 3|
| 4|
| 5|
```
Is there a non-trivial way in SQL to generate a histogram like this?
```
|2,1,1,1,1,0|
```
where 2 means the grade 1 occurs twice, the 1s mean grades {2..5} occur once and 0 means grade 6 does not occur at all.
I don't mind if the histogram is one row per count.
If that matters, the database is SQL Server accessed by a perl CGI through unixODBC/FreeTDS.
**EDIT:** Thanks for your quick replies! It is okay if non-existing values (like grade 6 in the example above) do not occur as long as I can make out which histogram value belongs to which grade. | ```
SELECT COUNT(grade) FROM table GROUP BY grade ORDER BY grade
```
Haven't verified it, but it should work. It will not, however, show a count for grade 6, since it's not present in the table at all... | If there are a lot of data points, you can also [group ranges together](https://web.archive.org/web/20180205094300/https://www.wagonhq.com/sql-tutorial/creating-a-histogram-sql "ARCHIVE of https://www.wagonhq.com/sql-tutorial/creating-a-histogram-sql") like this:
```
SELECT FLOOR(grade/5.00)*5 As Grade,
COUNT(*) AS [Grade Count]
FROM TableName
GROUP BY FLOOR(Grade/5.00)*5
ORDER BY 1
```
Additionally, if you wanted to label the full range, you can get the floor and ceiling ahead of time with a CTE.
```
With GradeRanges As (
SELECT FLOOR(Grade/5.00)*5 As GradeFloor,
       FLOOR(Grade/5.00)*5 + 4 As GradeCeiling
FROM TableName
)
SELECT GradeFloor,
CONCAT(GradeFloor, ' to ', GradeCeiling) AS GradeRange,
COUNT(*) AS [Grade Count]
FROM GradeRanges
GROUP BY GradeFloor, CONCAT(GradeFloor, ' to ', GradeCeiling)
ORDER BY GradeFloor
```
**Note**: In some SQL engines, you can `GROUP BY` an Ordinal Column Index, but with MS SQL, if you want it in the `SELECT` statement, you're going to need to group by it also, hence copying the Range into the Group Expression as well.
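As a cross-check outside SQL, the zero-filled histogram the question asks for can be sketched in Python (the fixed 1-6 grade range is an assumption taken from the example):

```python
from collections import Counter

grades = [1, 2, 1, 3, 4, 5]            # the question's sample column
counts = Counter(grades)

# Zero-fill grades that never occur (grade 6 here):
histogram = [counts.get(g, 0) for g in range(1, 7)]
assert histogram == [2, 1, 1, 1, 1, 0]
```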
**Option 2**: You could use [case statements to selectively count values into arbitrary bins and then unpivot them](https://stackoverflow.com/a/52105218/1366033) to get a row by row count of included values | Generating a histogram from column values in a database | [
"",
"sql",
"sql-server",
"histogram",
""
] |
This is a poll of sorts about common concurrency problems in Java. An example might be the classic deadlock or race condition or perhaps EDT threading bugs in Swing. I'm interested both in a breadth of possible issues but also in what issues are most common. So, please leave one specific answer of a Java concurrency bug per comment and vote up if you see one you've encountered. | The most common concurrency problem I've seen is not realizing that a field written by one thread is *not guaranteed* to be seen by a different thread. A common application of this:
```
class MyThread extends Thread {
private boolean stop = false;
public void run() {
while(!stop) {
doSomeWork();
}
}
public void setStop() {
this.stop = true;
}
}
```
As long as `stop` is not *volatile*, or `setStop` and `run` are not *synchronized*, this is not guaranteed to work. This mistake is especially devilish, as in 99.999% of cases it won't matter in practice, as the reader thread will eventually see the change - but we don't know how soon it will see it. | My **#1 most painful** concurrency problem ever occurred when **two different** open source libraries did something like this:
```
private static final String LOCK = "LOCK"; // use matching strings
// in two different libraries
public doSomestuff() {
synchronized(LOCK) {
this.work();
}
}
```
At first glance, this looks like a pretty trivial synchronization example. However; because Strings are **interned** in Java, the literal string `"LOCK"` turns out to be the same instance of `java.lang.String` (even though they are declared completely disparately from each other.) The result is obviously bad. | What is the most frequent concurrency issue you've encountered in Java? | [
"",
"java",
"multithreading",
"concurrency",
""
] |
I have a List<> of objects in C# and I need a way to return those objects that are considered duplicates within the list. I do not need the Distinct resultset, I need a list of those items that I will be deleting from my repository.
For the sake of this example, let's say I have a list of "Car" types and I need to know which of these cars are the same color as another in the list. Here are the cars in the list and their color property:
```
Car1.Color = Red;
Car2.Color = Blue;
Car3.Color = Green;
Car4.Color = Red;
Car5.Color = Red;
```
For this example I need the result (IEnumerable<>, List<>, or whatever) to contain Car4 and Car5 because I want to delete these from my repository or db so that I only have one car per color in my repository. Any help would be appreciated. | I inadvertently coded this yesterday, when I was trying to write a "distinct by a projection". I included a ! when I shouldn't have, but this time it's just right:
```
public static IEnumerable<TSource> DuplicatesBy<TSource, TKey>
(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
{
HashSet<TKey> seenKeys = new HashSet<TKey>();
foreach (TSource element in source)
{
// Yield it if the key hasn't actually been added - i.e. it
// was already in the set
if (!seenKeys.Add(keySelector(element)))
{
yield return element;
}
}
}
```
You'd then call it with:
```
var duplicates = cars.DuplicatesBy(car => car.Color);
``` | ```
var duplicates = from car in cars
group car by car.Color into grouped
from car in grouped.Skip(1)
select car;
```
This groups the cars by color and then skips the first result from each group, returning the remainder from each group flattened into a single sequence.
If you have particular requirements about which one you want to keep, e.g. if the car has an `Id` property and you want to keep the car with the lowest `Id`, then you could add some ordering in there, e.g.
```
var duplicates = from car in cars
group car by car.Color into grouped
from car in grouped.OrderBy(c => c.Id).Skip(1)
select car;
``` | C# 3.0: Need to return duplicates from a List<> | [
"",
"c#",
"linq",
"lambda",
"iequalitycomparer",
""
] |
I would like to store an object **FOO** in a database.
Let's say FOO contains three integers and a list of "*Fruits*".
The list can have any length; the only thing I know is that all the allowed fruits are stored in another table.
Can I store the fruit list in a column? | In a normalized relational database, such a situation is unacceptable. You should have a junction table that stores one row for each distinct ID of the FOO object and the ID of the Fruit. Existence of such a row means the fruit is in that list for the FOO.
```
CREATE TABLE FOO (
id int primary key not null,
int1 int,
int2 int,
int3 int
)
CREATE TABLE Fruits (
id int primary key not null,
name varchar(30)
)
CREATE TABLE FOOFruits (
FruitID int references Fruits (ID),
FooID int references FOO(id),
constraint pk_FooFruits primary key (FruitID, FooID)
)
```
To add Apple fruit to the list of a specific FOO object with ID=5, you would:
```
INSERT FOOFruits(FooID, FruitID)
SELECT 5, ID FROM Fruits WHERE name = 'Apple'
``` | If you're quite sure of what you're doing (ie. you won't need to look up the list's values, for example), you could also serialize your object, or just the list object, and store it in a binary column.
Just character-separating the values may be fine too, and cheaper in terms of saving and loading, but be careful your data doesn't contain the separator character, or escape it (and handle the escapes accordingly while loading, etc... Your language of choice may do a better job at this than you, though. ;) )
However, for a "proper" solution, do what Mehrdad described above. | How to store a list in a db column | [
"",
"sql",
"database",
""
] |
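To see the junction-table design from the accepted answer work end to end, here is a small sketch using Python's built-in sqlite3. The table and column names follow the answer; the helper function names are my own:

```python
import sqlite3

def build_schema(conn):
    # Same three tables as in the answer, in SQLite syntax.
    conn.executescript("""
        CREATE TABLE FOO (id INTEGER PRIMARY KEY, int1 INT, int2 INT, int3 INT);
        CREATE TABLE Fruits (id INTEGER PRIMARY KEY, name VARCHAR(30));
        CREATE TABLE FOOFruits (
            FruitID INTEGER REFERENCES Fruits(id),
            FooID   INTEGER REFERENCES FOO(id),
            PRIMARY KEY (FruitID, FooID)
        );
    """)

def add_fruit_to_foo(conn, foo_id, fruit_name):
    # Mirrors the INSERT ... SELECT from the answer.
    conn.execute(
        "INSERT INTO FOOFruits (FooID, FruitID) "
        "SELECT ?, id FROM Fruits WHERE name = ?",
        (foo_id, fruit_name))

conn = sqlite3.connect(":memory:")
build_schema(conn)
conn.execute("INSERT INTO FOO VALUES (5, 1, 2, 3)")
conn.execute("INSERT INTO Fruits VALUES (1, 'Apple')")
add_fruit_to_foo(conn, 5, "Apple")
print(conn.execute("SELECT COUNT(*) FROM FOOFruits").fetchone()[0])  # 1
```

Each row in `FOOFruits` records one fruit belonging to one FOO, which is exactly the "one row per list element" shape the answer describes.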
I have created a getDBConnection method in my Java application. This returns a connection object, and hence I haven't closed this connection in this method itself.
Now, I am invoking this method from various methods in my application at regular intervals, and closing them inside a try - finally block. I thought this should free up the connection after use. However, I am seeing a large number of connections opened (about 50) in the MySQL Administrator's Server Connections tab.
```
//Defining a method to retrieve a database connection
// PropDemo is a properties class that retrieves Database related values from a file
public Connection getDBConnection() {
//Instantiating the Properties object
PropDemo prop = new PropDemo();
Connection con = null;
// Retrieving values from the parameters.properties file
String JdbcDriver = prop.getMessage("JdbcDriver");
String JdbcUrlPrefix = prop.getMessage("JdbcUrlPrefix");
String DBIP = prop.getMessage("DBIP");
String DBName = prop.getMessage("DBName");
String DBUser = prop.getMessage("DBUser");
String DBPassword = prop.getMessage("DBPassword");
try {
// Loading and instantiating the JDBC MySQL connector driver class
Class.forName(JdbcDriver).newInstance();
con = DriverManager.getConnection(JdbcUrlPrefix + DBIP + "/" + DBName, DBUser, DBPassword);
if (con.isClosed())
Logger.log("Connection cannot be established", "vm");
} catch (Exception e) {
Logger.log("Exception: " + e, "vm");
Logger.log(Logger.stack2string(e), "vm");
}
return con;
}
```
I am also closing the associated ResultSet and Statement Objects. What could be missing here?
I am planning to replace all the Statements with PreparedStatements for efficiency and security reasons. Will that help significantly? What else can be done?
EDIT:
This is just a core Java application that is repeatedly querying for changes in some fields in a MySQL database through the MySQL-JDBC connector. I am not using any framework like Spring or Hibernate. | Your code looks sane.
That's how you're creating a new connection.
Probably the error is where you close it.
You should close it in a finally block.
Some additional questions.
1) Are you sure those 50 connections come from this program? Maybe there are some others coming from your same office. To confirm this you would need to stop the program, and look again in your connection monitor.
2) Does your application use many connections simultaneously? It's probably a peak when you're using 50 at the same time.
If you can, post the code where you close the connection. Chances are the problem is there.
Additionally, I would suggest you use a connection pool. You can build one yourself, or you can see the results from this page:
[How many JDBC connections in Java?](https://stackoverflow.com/questions/471745/how-many-jdbc-connections-in-java) | You could take a Singleton approach to the problem and only create a new Connection object if the current one is null:
```
if (connectionObject != null){
return connectionObject;
}else {
//create new connection object
}
```
This will make sure that you only have one non-null connection at any time. | JDBC Connection Issue | [
"",
"java",
"jdbc",
"connection",
""
] |
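The singleton idea in the second answer can be made reusable. Below is a hypothetical sketch in which a plain factory stands in for `DriverManager.getConnection`, so the code runs without a database; all names are illustrative, not from the answers:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Recreates the held object only when there is none, or the old one
// is no longer usable (for JDBC: the connection is closed).
public class LazyHolder<T> {
    private T current;
    private final Supplier<T> factory;
    private final Predicate<T> usable;

    public LazyHolder(Supplier<T> factory, Predicate<T> usable) {
        this.factory = factory;
        this.usable = usable;
    }

    public synchronized T get() {
        if (current == null || !usable.test(current)) {
            current = factory.get();
        }
        return current;
    }

    public static void main(String[] args) {
        LazyHolder<StringBuilder> h =
                new LazyHolder<>(StringBuilder::new, sb -> true);
        System.out.println(h.get() == h.get());  // true: the instance is reused
    }
}
```

For real JDBC work, a proper connection pool (as the accepted answer suggests) is still the better option, since a single shared connection serializes all queries.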
I'm hoping to clear some things up with anonymous delegates and lambda expressions being used to create a method for event handlers in C#, for myself at least.
Suppose we have an event to which we add either an anonymous delegate or a lambda expression (for you lucky crowds that can use newer versions of .NET).
```
SomeClass.SomeEvent += delegate(object o, EventArg e) { /* do something */ };
```
I have read that people in the past have forgotten about events that still have handlers which prevent the class from being garbage collected. How would one go about removing the added handler without just setting SomeEvent to null within the class. Wouldn't the following be an entirely new handler?
```
SomeClass.SomeEvent -= delegate(object o, EventArg e) { /* do something */ };
```
I could see storing the anonymous delegate or lambda expression in a variable. But that, to me at least, seems to defeat the entire purpose of being able to simply and succinctly add an event handler.
```
SomeEventDelegate handler = new SomeEventDelegate(delegate(object o, EventArg e) { /* do something */ });
SomeClass.SomeEvent += handler;
// ... stuff
SomeClass.SomeEvent -= handler;
```
Again, I understand that you could just do...
```
public override Dispose(bool disposing)
{
_someEvent = null;
this.Dispose();
}
```
But I'm more interesting with just removing the dynamically created method from the event. Hopefully someone can shed some light onto this for me. Thanks! | If object X has an event handler whose *target* is object Y, then object X being alive means that object Y can't be garbage collected. It doesn't stop object X from being garbage collected.
Normally when something is disposed, it will become garbage pretty soon anyway, which means you don't have a problem.
The problem with events and GC is if you forget to remove a subscribed handler from a *different* object - i.e. you have a listener which is disposed, but will never be garbage collected because there's still a reference to it from the event in a different object. | I think the problem is you seem to be proceeding from the assumption that having a delegate assigned to an object's event prevents it from being GCed.
This as a simple statement is not true.
With that said the perceived problem disappears.
Initially in garbage collection everything is garbage. The GC runs through every thing currently available globally and on each stack and from these those other objects that they are referencing and so on, marking each as not garbage.
How would such a graphing process manage to arrive at this object? | EventHandlers and Anonymous Delegates / Lambda Expressions | [
"",
"c#",
"events",
"delegates",
"lambda",
"anonymous-methods",
""
] |
I have a C# (FFx 3.5) application that loads DLLs as plug-ins. These plug-ins are loaded in separate AppDomains (for lots of good reasons, and this architecture cannot change). This is all well and good.
I now have a requirement to show a Dialog from one of those plug-ins. Bear in mind that I *cannot* return the dialog Form to the main application and have it displayed there (the current infrastructure doesn't support it).
Failure 1
In my DLL I created a Form and called Show. The dialog outline showed up but did not paint and it doesn't respond to mouse events. I assumed that this is because the DLL is in a separate AppDomain and the message pump for the app is somehow unable to dispatch messages to the new Form.
Failure 2
In my DLL I created a Form and called ShowDialog, which by all rights should create an internal message pump for the dialog. The dialog is displayed and responds to clicks (hooray), but it appears that the primary app is no longer processing or dispatching windows messages because it quits painting and no longer responds to mouse events. For some reason now it seems that the main app's message pump is not dispatching.
Failure 3
In my DLL I created a Form and called Application.Run. This will certainly create a complete second message pump. I get the same behavior as Failure 2 - the Dialog behaves, but the calling app does not.
Any thoughts on what exactly is going on here and how I might go about showing a dialog from the other AppDomain's DLL and have both the caller and the callee still respond and paint properly? | Try using appdomain1's main form's BeginInvoke with a delegate that displays the form from appdomain2. So in Pseudocode:
```
Appdomain1:
AppDomain2.DoSomething(myMainForm);
AppDomain2:
DoSomething(Form parent)
{
Form foolishForm = new Form();
parent.BeginInvoke(new Action( delegate { foolishForm.Show(); } ));
}
```
The code may not be perfect, but it demonstrates the concept.
By the way, if you are having problems passing forms around because of remoting you can:
```
public class Container<T> : MarshalByRefObject
{
private T _value;
public T Value { get { return _value; } set { _value = value; } }
public Container() { }
public Container(T value) { Value = value; }
public static implicit operator T(Container<T> container)
{
return container.Value;
}
}
```
That will contain any object you throw at it. | We have a very similarly architected application that loads DLL files and plugins. Each DLL file is loaded in a separate [application domain](http://en.wikipedia.org/wiki/Application_Domain), which is created on a separate thread. We have a third-party control in a form that would not appear unless we call `System.Windows.Forms.Application.DoEvents()` regularly.
Pseudo code:
```
<In new thread>
<Application domain created. Start called inside new application domain.>
<Start loads new DLL file, calls init function in DLL file>
<Start loops, calling DoEvents until the DLL file exits>
<Application domain unloaded>
<Thread exits>
```
This solved all of our GUI issues. | Message Pumps and AppDomains | [
"",
"c#",
"appdomain",
""
] |
When I use file\_get\_contents and pass it as a parameter to another function, without assigning it to a variable, does that memory get released before the script execution finishes?
For Example:
```
preg_match($pattern, file_get_contents('http://domain.tld/path/to/file.ext'), $matches);
```
Will the memory used by file\_get\_contents be released before the script finishes? | The temporary string created to hold the file contents will be destroyed. Without delving into the sources to confirm, here's a couple of ways you can test that a temporary value created as a function parameter gets destroyed:
## Method 1: a class which reports its destruction
This demonstrates lifetime by using a class which reports on its own demise:
```
class lifetime
{
public function __construct()
{
echo "construct\n";
}
public function __destruct()
{
echo "destruct\n";
}
}
function getTestObject()
{
return new lifetime();
}
function foo($obj)
{
echo "inside foo\n";
}
echo "Calling foo\n";
foo(getTestObject());
echo "foo complete\n";
```
This outputs
```
Calling foo
construct
inside foo
destruct
foo complete
```
Which indicates that the implied temporary variable *is* destroyed right after the foo function call.
## Method 2: measure memory usage
Here's another method which offers further confirmation, using [memory\_get\_usage](http://php.net/manual/en/function.memory-get-usage.php) to measure how much we've consumed.
```
function foo($str)
{
$length=strlen($str);
echo "in foo: data is $length, memory usage=".memory_get_usage()."\n";
}
echo "start: ".memory_get_usage()."\n";
foo(file_get_contents('/tmp/three_megabyte_file'));
echo "end: ".memory_get_usage()."\n";
```
This outputs
```
start: 50672
in foo: data is 2999384, memory usage=3050884
end: 51544
``` | In your example the memory will be released when `$matches` goes out of scope.
If you weren't storing the result of the match the memory would be released immediately | Does the memory used by file_get_contents() get released when it is not assigned to a variable? | [
"",
"php",
"memory-consumption",
""
] |
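The same experiment translated to Python: in CPython, reference counting destroys the temporary argument as soon as the call returns (a sketch; other implementations such as PyPy may defer the destructor to a later GC pass):

```python
events = []

class Temp:
    def __del__(self):
        events.append("destruct")

def foo(obj):
    events.append("inside foo")

foo(Temp())                # Temp() is a temporary, like the PHP example above
events.append("after foo")
print(events)              # CPython: ['inside foo', 'destruct', 'after foo']
```

The ordering shows the temporary dying right after the function call, just like the PHP `lifetime` class in the accepted answer.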
I'm wondering how I can submit a form via Ajax (using the Prototype framework) and display the server response in a "result" div.
The HTML looks like this:
```
<form id="myForm" action="/getResults">
[...]
<input type="submit" value="submit" />
</form>
<div id="result"></div>
```
I tried to attach a JavaScript function (which uses Ajax.Updater) to "onsubmit" (on the form) and "onclick" (on the input), but the form is still submitted the non-Ajax way after the function ends (so the whole page is replaced by the results). | Check out Prototype API's pages on [`Form.Request`](http://www.prototypejs.org/api/form/request) and [`Event`](http://www.prototypejs.org/api/event) handling.
Basically, if you have this:
```
<form id='myForm'>
.... fields ....
<input type='submit' value='Go'>
</form>
<div id='result'></div>
```
Your js would be, more or less:
```
Event.observe('myForm', 'submit', function(event) {
$('myForm').request({
onFailure: function() { .... },
onSuccess: function(t) {
$('result').update(t.responseText);
}
});
Event.stop(event); // stop the form from submitting
});
``` | You need to return the value false from the ajax function, which blocks the standard form submit.
```
<form id="myForm" onsubmit="return myfunc()" action="/getResults">

function myfunc(){
    ... do prototype ajax stuff...
    return false;
}
```
Using onsubmit on the form also captures users who submit with the enter key. | submit a form via Ajax using prototype and update a result div | [
"",
"javascript",
"ajax",
"prototypejs",
"form-submit",
""
] |
How do I change directory to the directory with my Python script in? So far, I figured out I should use `os.chdir` and `sys.argv[0]`. I'm sure there is a better way than to write my own function to parse argv[0]. | ```
os.chdir(os.path.dirname(__file__))
``` | `os.chdir(os.path.dirname(os.path.abspath(__file__)))` should do it.
`os.chdir(os.path.dirname(__file__))` would not work if the script is run from the directory in which it is present. | Change directory to the directory of a Python script | [
"",
"python",
"scripting",
"directory",
""
] |
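Wrapping the accepted one-liner together with the `abspath` caveat from the second answer into a tiny helper (the function name is my own):

```python
import os

def chdir_to_script_dir(script_path):
    """Change the working directory to the directory containing script_path.

    abspath() matters: a bare os.path.dirname(__file__) can be '' when the
    script is launched from its own directory, and os.chdir('') fails.
    """
    target = os.path.dirname(os.path.abspath(script_path))
    os.chdir(target)
    return target

# Typical use from inside a script:
# chdir_to_script_dir(__file__)
```

Calling it with `__file__` at the top of a script reproduces the accepted answer while staying safe in the run-from-own-directory case.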
On Yahoo, Google, and many other websites, when you fill out a form and it has any errors, you get redirected back to the same page.
Note that the data in the form remains as it is. I mean the data in the text fields remains the same.
I tried `<form action="(same page here)" method="post or get">`. It gets redirected to the page, but the contents of the form get cleared.
I want the data to be displayed.
You know how tiresome it will be for the user if he has to fill up the entire form once again if he just forgets to check the accept terms and conditions checkbox.
Need help! | Here is a modified version of what I use for *very* simple websites where I don't want/need an entire framework to get the job done.
```
function input($name, $options = array()) {
if(!isset($options['type'])) $options['type'] = 'text';
$options['name'] = $name;
if(isset($_POST[$name]) && $options['type'] != 'password') {
$options['value'] = htmlspecialchars($_POST[$name]);
}
$opts = array();
foreach($options as $key => $value) {
$opts[] = $key . '="' . $value . '"';
}
return '<input ' . implode(' ', $opts) . '/>';
}
```
(I have a few similar functions for `<select>` and `<textarea>` and so on)
When you're building fields you can do something like:
```
First Name: <?=input('first_name')?>
Last Name: <?=input('last_name')?>
Password: <?=input('password', array('type' => 'password'))?>
```
If you process your forms in the same page as the form itself, they will get auto-filled if there are any errors. Most frameworks, though, do all of this for you (and in a much better way than the code above); I personally suggest [`CakePHP`](http://cakephp.org/) or [`CodeIgniter`](http://codeigniter.com/). | You need to do this yourself. When the page gets posted you'll have access to all the form values the user entered via $_POST['...']. You can then re-populate the form fields with this data. | How do you post the contents of form to the page in which it is? | [
"",
"php",
"html",
""
] |
Suppose I have the following (rather common) model
Client invokes web service request -> request added to server queue -> server invokes 3rd party app via web service -> 3rd party app notifies server that event processing is finished -> server notifies client that request is completed
What I am wondering about is the 'server invokes 3rd party app via web service' stage. The 3rd party app exposes web service methods that are configured inside the application. For instance, I might create a method in this app called 'MultiplyByTwo'. Then I click 'GO' and it generates a web service with methods like BeginCalculateMultiplyByTwo and EndMultiplyByTwo (using the IAsync model). This is great.
Now I am creating a queue object so that multiple clients can request this service and have the server queue them up for sequential execution. So this queue object will have a method like runNext() that will invoke the web service on the 3rd party app. This is great so long as I know the name of the methods that are being called (BegingCaculateMultiplyByTwo in this case), but what if I want to dynamically change the name of the method?
So in the 3rd party app, I change my web service method and call it 'MultiplyByThree'. This will expose BeginMultiplyByThree and some other methods with a predictable naming scheme. How can I set my class up to dynamically call a method of which I dont yet know the name?
Ideally if I could set the name of the method to be called in an app.config file that would be great.
I suppose this is something I can achieve via reflection? | You can certainly do this via reflection:
```
MyClass o = new MyClass();
MethodInfo method = o.GetType().GetMethod("UnknownMethod",
BindingFlags.Instance | BindingFlags.Public);
MyRetValue retValue = (MyRetValue)
method.Invoke(o, new object[] { "Arg1", 2, "Arg3" });
``` | To expand on Robert's answer you can do it with generics and stuff:
```
public TReturn DynamicInvoker<T, TReturn>(T obj, string methodName, params object[] args){
    MethodInfo method = obj.GetType().GetMethod(methodName, BindingFlags.Instance | BindingFlags.Public);
    return (TReturn)method.Invoke(obj, args);
}
```
If you wanted to make it completely horrible you can do that as a static method of object. You also can pass the `BindingFlags` as a method parameter. | C# Calling a method I dont yet know the name of yet? Reflection? | [
"",
"c#",
"web-services",
""
] |
We have to build Strings all the time for log output and so on. Over the JDK versions we have learned when to use `StringBuffer` (many appends, thread safe) and `StringBuilder` (many appends, non-thread-safe).
What's the advice on using `String.format()`? Is it efficient, or are we forced to stick with concatenation for one-liners where performance is important?
e.g. ugly old style,
```
String s = "What do you get if you multiply " + varSix + " by " + varNine + "?";
```
vs. tidy new style (String.format, which is possibly slower),
```
String s = String.format("What do you get if you multiply %d by %d?", varSix, varNine);
```
Note: my specific use case is the hundreds of 'one-liner' log strings throughout my code. They don't involve a loop, so `StringBuilder` is too heavyweight. I'm interested in `String.format()` specifically. | I wrote a small class to test which of the two has the better performance, and + comes out ahead of format by a factor of 5 to 6.
Try it yourself:
```
import java.io.*;
import java.util.Date;
public class StringTest{
public static void main( String[] args ){
int i = 0;
long prev_time = System.currentTimeMillis();
long time;
for( i = 0; i< 100000; i++){
String s = "Blah" + i + "Blah";
}
time = System.currentTimeMillis() - prev_time;
System.out.println("Time after for loop " + time);
prev_time = System.currentTimeMillis();
for( i = 0; i<100000; i++){
String s = String.format("Blah %d Blah", i);
}
time = System.currentTimeMillis() - prev_time;
System.out.println("Time after for loop " + time);
}
}
```
Running the above for different N shows that both behave linearly, but `String.format` is 5-30 times slower.
The reason is that in the current implementation `String.format` first parses the input with regular expressions and then fills in the parameters. Concatenation with plus, on the other hand, gets optimized by javac (not by the JIT) and uses `StringBuilder.append` directly.
| I took [hhafez](https://stackoverflow.com/a/513705/1845976)'s code and added a **memory test**:
```
private static void test() {
Runtime runtime = Runtime.getRuntime();
long memory;
...
memory = runtime.freeMemory();
// for loop code
memory = memory-runtime.freeMemory();
```
I run this separately for each approach, the '+' operator, String.format and StringBuilder (calling toString()), so the memory used will not be affected by other approaches.
I added more concatenations, making the string `"Blah" + i + "Blah" + i + "Blah" + i + "Blah"`.
The result are as follows (average of 5 runs each):
| Approach | Time(ms) | Memory allocated (long) |
| --- | --- | --- |
| `+` operator | 747 | 320,504 |
| `String.format` | 16484 | 373,312 |
| `StringBuilder` | 769 | 57,344 |
We can see that String `+` and `StringBuilder` are practically identical time-wise, but `StringBuilder` is much more efficient in memory use.
This is very important when we have many log calls (or any other statements involving strings) in a time interval short enough so the Garbage Collector won't get to clean the many string instances resulting of the `+` operator.
And a note, BTW, don't forget to check the logging **level** before constructing the message.
Conclusions:
1. I'll keep on using `StringBuilder`.
2. I have too much time or too little life. | Should I use Java's String.format() if performance is important? | [
"",
"java",
"string",
"performance",
"string-formatting",
"micro-optimization",
""
] |
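A common way to keep the tidy `String.format` style without paying for it on disabled log statements is to guard on the log level first, as the note about checking the logging level above suggests. A sketch using `java.util.logging`; the class and method names are illustrative:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyFormat {
    private static final Logger LOG = Logger.getLogger(LazyFormat.class.getName());

    static String describe(int six, int nine) {
        return String.format("What do you get if you multiply %d by %d?", six, nine);
    }

    public static void main(String[] args) {
        // The relatively expensive String.format only runs when FINE
        // is actually enabled, so disabled log lines cost almost nothing.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(describe(6, 9));
        }
        System.out.println(describe(6, 9));
    }
}
```

This keeps the readability benefit of format strings for the hundreds of one-liner log statements while avoiding the parsing cost on the common disabled path.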
How can I replace `\r\n` in an `std::string`? | Use this:
```
while ( str.find ("\r\n") != string::npos )
{
str.erase ( str.find ("\r\n"), 2 );
}
```
A more efficient form is:
```
string::size_type pos = 0; // Must initialize
while ( ( pos = str.find ("\r\n",pos) ) != string::npos )
{
str.erase ( pos, 2 );
}
``` | Don't reinvent the wheel: Boost String Algorithms is a header-only library, and I'm reasonably certain that it works everywhere. If you think the accepted answer's code is better because it's already been provided and you don't need to look in the docs, here it is:
```
#include <boost/algorithm/string.hpp>
#include <string>
#include <iostream>
int main()
{
std::string str1 = "\r\nsomksdfkmsdf\r\nslkdmsldkslfdkm\r\n";
boost::replace_all(str1, "\r\n", "Jane");
std::cout<<str1;
}
``` | Replace line breaks in a STL string | [
"",
"c++",
"stl",
""
] |
I have two divs and two separate links that triggers slideDown and slideUp for the divs.
When one of the divs is slid down and I click the other link, I hide the first div (slideUp) and then open the other div (slideDown), but at the moment it's as if while one div is sliding down, the other is sliding up at the same time.
Is there a way that would tell jQuery to wait to finish sliding down of one div and only then start sliding up the other? | ```
$('#Div1').slideDown('fast', function(){
$('#Div2').slideUp('fast');
});
```
Edit: Have you checked out the accordion plugin (if that's what you're trying to do)? | You should chain it like this
```
function animationStep1()
{
$('#yourDiv1').slideUp('normal', animationStep2);
}
function animationStep2()
{
$('#yourDiv2').slideDown('normal', animationStep3);
}
// etc
```
Of course you can spice this up with recursive functions, arrays holding animation queues, etc., according to your needs. | Finish one animation then start the other one | [
"",
"javascript",
"jquery",
""
] |
I am wondering whether it is a good idea to define exceptions with a template. Defining different types of exceptions is a super-verbose task: you have to inherit from the base exception with nothing changed, just inherit. Like this...
```
class FooException : public BaseException {
public:
...
};
class BarException : public BaseException {
public:
...
};
...
```
That's a nightmare, isn't it? So I am considering defining different exceptions with a template:
```
/**
@brief Exception of radio
**/
class Exception : public runtime_error {
private:
/// Name of file that throw
const string m_FileName;
/// Line number of file that throw
size_t m_Line;
public:
Exception(const string &what, const string &File, size_t Line)
throw()
: runtime_error(what),
m_FileName(File),
m_Line(Line)
{}
virtual ~Exception() throw() {}
/**
@brief Get name of file that throw
@return Name of file that throw
**/
virtual const string getFileName() const throw() {
return m_FileName;
}
/**
@brief Get throw exception line
@return Throw exception line
**/
virtual size_t getLine() const throw() {
return m_Line;
}
/**
@brief Get description of this exception
@return Description of this exception
**/
virtual const string getMessage() const throw() {
return what();
}
virtual void print(ostream &stream = cerr) const throw() {
stream << "# RunTimeError #" << endl;
stream << "Error : " << what() << endl;
stream << "File : " << getFileName() << endl;
stream << "Line : " << getLine() << endl;
}
};
/**
@brief Template exception of radio
**/
template <typename T>
class TemplateException : public Exception {
public:
TemplateException (const string &what, const string &File, size_t Line) throw()
: Exception(what, File, Line)
{}
virtual ~TemplateException () throw() {}
};
}
#define THROW(type, error) (throw TemplateException<type>((error), __FILE__, __LINE__))
```
So if I have to define a new exception, I can just define an empty class like this.
```
class NuclearException {};
```
To throw exception
```
THROW(NuclearException, "Boom!!");
```
To catch
```
try {
} catch (TemplateException<NuclearException> &e) {
// ...
}
```
If we want to catch all exceptions, we can write this:
```
try {
} catch (Exception &e) {
// ...
}
```
It works fine, but I'm not sure whether there are any side effects. Is this a good way to define different types of exceptions? Or is there a better solution? I have no idea :S
Thanks.
Victor Lin. | It's definitely possible and works fine, but I would avoid it. It obscures diagnostics: GCC will display the name of the exception type, with the usual template stuff included. I would take the few minutes to define the new exception class, personally. It's not like you would do it all the time. | It's an interesting idea, but apart from the drawbacks already pointed out, it would also not allow you to define an exception hierarchy: suppose that you want to define
```
class InvalidArgumentException {};
class NullPointerException : public InvalidArgumentException {};
```
then a TemplatedException<NullPointerException> would not inherit from TemplatedException<InvalidArgumentException>, and your exception handling mechanism might end up being more clumsy than the "plain" one. | Is that a good idea to define exception with template? | [
"",
"c++",
"exception",
"templates",
""
] |
I have a C++ library app which talks to a C++ server, and I am creating a vector of my custom class objects. But my Cpp/CLI console app (which interacts with native C++) throws a memory violation error when I try to return my custom class obj vector.
Code Sample -
In my native C++ class -
```
std::vector<A> GetStuff(int x)
{
    // do stuff
    std::vector<A> vec;
    A a;
    vec.push_back(a);
    // push more A objects
return vec;
}
```
In my Cpp/CLI class
```
public void doStuff()
{
std::vector<A> vec;
vec = m_nativeCpp->GetStuff(4); // where nativeCpp is a dynamically allocated class in nativecpp DLL, the app throws up a memory violation error here!
}
```
Exact error message:
> An unhandled exception of type 'System.AccessViolationException' occurred in CLIConsole.exe -- which is my console cpp/CLI project
>
> Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. | I'll assume that the native code is in a separately compiled unit, like a .dll. The first thing to worry about is the native code using a different allocator (new/delete); you'll get that when it is compiled with /MT or linked to another version of the CRT.
Next thing to worry about is STL iterator debugging. You should make sure both modules were compiled with the same setting for \_HAS\_ITERATOR\_DEBUGGING. They won't be the same if the native code was built with an old version of the CRT or is the Release mode build. | Take a look at [this support article](http://support.microsoft.com/default.aspx?scid=kb;en-us;172396). I think what's happening is that your vector in CLI tries to access internal vector data from the DLL and fails to do so because of different static variables. I guess the only good solution is to pass a simple array through DLL boundaries; `&vector[0]` returns it.
But there might also be some magic happening in the A class copy constructors. If they are missing and the class has pointers as members, you could easily get the same error. | Transferring vector of objects between C++ DLL and Cpp/CLI console project | [
"",
"c++",
"c++-cli",
"interop",
""
] |
I recently stumbled upon a seemingly weird behavior that Google completely failed to explain.
```
using Microsoft.VisualStudio.TestTools.UnitTesting;
class TestClass
{
public override bool Equals(object obj)
{
return true;
}
}
[TestMethod]
public void TestMethod1()
{
TestClass t = new TestClass ();
Assert.AreEqual (t, null); // fails
Assert.IsTrue (t.Equals (null)); // passes
}
```
I would expect this test to succeed. However, in Visual Studio 2008 / .NET 3.5 it fails. Is it intended to be like that or is it a bug? | Your TestClass violates the contract of [`Object.Equals`](http://msdn.microsoft.com/en-us/library/bsc2ak47.aspx). `Assert.AreEqual` is relying on that contract, quite reasonably.
The docs state (in the list of requirements):
* x.Equals(a null reference (Nothing in Visual Basic)) returns false. | When testing for nulls, do not use `Assert.AreEqual`.
You have to use `Assert.IsNull()` for that. | C# UnitTest - Assert.AreEqual() does not call Equals if the argument is null | [
"",
"c#",
"visual-studio",
"unit-testing",
"equals",
""
] |
I'm new to python, so please excuse what is probably a pretty dumb question.
Basically, I have a single global variable, called \_debug, which is used to determine whether or not the script should output debugging information. My problem is, I can't set it in a different python script than the one that uses it.
I have two scripts:
```
one.py:
-------
def my_function():
if _debug:
print "debugging!"
two.py:
-------
from one import *
_debug = False
my_function()
```
Running two.py generates an error:
```
NameError: global name '_debug' is not defined
```
Can anyone tell me what I'm doing wrong? | There are more problems than just the leading underscore I'm afraid.
When you call `my_function()`, it still won't have your `debug` variable in its namespace, unless you import it from `two.py`.
Of course, doing that means you'll end up with cyclic dependencies (`one.py -> two.py -> one.py`), and you'll get `NameError`s unless you refactor where various things are imported and declared.
One solution would be to create a simple third module which defines 'constants' like this, which can be safely imported from anywhere, e.g.:
```
constants.py
------------
debug = True
one.py
------
from constants import debug
#...
two.py
------
from constants import debug
#...
```
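One caveat worth sketching: `from constants import debug` binds a *copy* of the name at import time, so a later rebinding of `constants.debug` in one module is invisible to any module that imported the name directly. Reading the flag through the module object avoids that. A single-file simulation of the pattern (the `constants` module name is the one from the example above):

```python
import types

# Simulate the constants module in one file for illustration.
constants = types.ModuleType("constants")
constants.debug = False

def my_function():
    # one.py reads the flag via the module object, so it always sees
    # the current value rather than a stale import-time copy.
    return "debugging!" if constants.debug else None

assert my_function() is None
constants.debug = True          # two.py flips it via the module object
assert my_function() == "debugging!"
```

In real code this is simply `import constants` plus `constants.debug = True`, instead of `from constants import debug`.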
However, I would recommend just using the built-in [logging](http://docs.python.org/library/logging.html#module-logging) module for this - why not? It's easy to configure, simpler to use, reliable, flexible and extensible. | Names beginning with an underscore aren't imported with
```
from one import *
``` | Confusion about global variables in python | [
"",
"python",
"global-variables",
"python-import",
""
] |
I've been learning C#, and I'm trying to understand lambdas. In this sample below, it prints out 10 ten times.
```
class Program
{
delegate void Action();
static void Main(string[] args)
{
List<Action> actions = new List<Action>();
for (int i = 0; i < 10; ++i )
actions.Add(()=>Console.WriteLine(i));
foreach (Action a in actions)
a();
}
}
```
Obviously, the generated class behind the lambda is storing a reference or pointer to the `int i` variable, and is assigning a new value to the same reference every time the loop iterates. Is there a way to force the lambda to grab a copy instead, like the C++0x syntax
```
[&](){ ... } // Capture by reference
```
vs.
```
[=](){ ... } // Capture copies
``` | What the compiler is doing is pulling your lambda and any variables captured by the lambda into a compiler generated nested class.
After compilation your example looks a lot like this:
```
class Program
{
delegate void Action();
static void Main(string[] args)
{
List<Action> actions = new List<Action>();
DisplayClass1 displayClass1 = new DisplayClass1();
for (displayClass1.i = 0; displayClass1.i < 10; ++displayClass1.i )
actions.Add(new Action(displayClass1.Lambda));
foreach (Action a in actions)
a();
}
class DisplayClass1
{
int i;
void Lambda()
{
Console.WriteLine(i);
}
}
}
```
By making a copy within the for loop, the compiler generates new objects in each iteration, like so:
```
for (int i = 0; i < 10; ++i)
{
DisplayClass1 displayClass1 = new DisplayClass1();
displayClass1.i = i;
actions.Add(new Action(displayClass1.Lambda));
}
``` | The only solution I've been able to find is to make a local copy first:
```
for (int i = 0; i < 10; ++i)
{
int copy = i;
actions.Add(() => Console.WriteLine(copy));
}
```
But I'm having trouble understanding why putting a copy inside the for-loop is any different than having the lambda capture `i`. | How to tell a lambda function to capture a copy instead of a reference in C#? | [
"",
"c#",
"loops",
"lambda",
"capture",
""
] |
I know it is possible to set the current folder of the OpenFolderDialog to a special folder, like "Program Files" or the Desktop.
But where do I find this? | Look at the System.Environment class, e.g:
```
string programFiles = System.Environment.GetFolderPath(
System.Environment.SpecialFolder.ProgramFiles);
```
Update:
I'm not sure if this is part of the question, but to open the folder selection dialog, you then use this code:
```
using System.Windows.Forms;
//...
FolderBrowserDialog dialog = new FolderBrowserDialog();
dialog.SelectedPath = programFiles;
dialog.ShowDialog();
string selectedPath = dialog.SelectedPath;
``` | Have you tried setting the folder to `System.Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles)`? This should do the trick. | Set OpenFolderDialog to a special folder | [
"",
"c#",
".net",
"file",
"directory",
""
] |
In JavaScript, there are two values which basically say 'I don't exist' - `undefined` and `null`.
A property to which a programmer has not assigned anything will be `undefined`, but in order for a property to become `null`, `null` must be explicitly assigned to it.
I once thought that there was a need for `null` because `undefined` is a primitive value and `null` an object. It's not, even if `typeof null` will yield `'object'`: Actually, both are primitive values - which means neither `undefined` nor `null` can be returned from a constructor function, as both will be converted to an empty object (one has to throw an error to proclaim failure in constructors).
They also both evaluate to `false` in boolean contexts. The only real difference I can think of is that one evaluates to `NaN`, the other to `0` in numeric contexts.
So why is there both `undefined` and `null` if this just confuses programmers who are incorrectly checking for `null` when trying to find out whether a property has been set or not?
What I'd like to know is if anyone has a reasonable example where it's necessary to use `null` which couldn't be expressed using `undefined` instead.
So the general consensus seems to be that `undefined` means 'there is no such property' while `null` means 'the property does exist, but holds no value'.
I could live with that if JavaScript implementations would actually enforce this behavior - but `undefined` is a perfectly valid primitive value, so it can easily be assigned to existing properties to break this contract. Therefore, if you want to make sure if a property exists, you have to use the `in` operator or `hasOwnProperty()` anyway. So once again: what's the practical use for separate values for `undefined` and `null`?
I actually use `undefined` when I want to unset the values of properties no longer in use but which I don't want to `delete`. Should I use `null` instead? | The question isn't really "why is there a null value in JS" - there is a null value of some sort in most languages and it is generally considered very useful.
The question is, "why is there an *undefined* value in JS". Major places where it is used:
1. when you declare `var x;` but don't assign to it, `x` holds undefined;
2. when your function gets fewer arguments than it declares;
3. when you access a non-existent object property.
`null` would certainly have worked just as well for (1) and (2)\*. (3) should really throw an exception straight away, and the fact that it doesn't, instead of returning this weird `undefined` that will fail later, is a big source of debugging difficulty.
\*: you could also argue that (2) should throw an exception, but then you'd have to provide a better, more explicit mechanism for default/variable arguments.
However JavaScript didn't originally have exceptions, or any way to ask an object if it had a member under a certain name - the only way was (and sometimes still is) to access the member and see what you get. Given that `null` already had a purpose and you might well want to set a member to it, a different out-of-band value was required. So we have `undefined`, it's problematic as you point out, and it's another great JavaScript 'feature' we'll never be able to get rid of.
> I actually use undefined when I want to unset the values of properties no longer in use but which I don't want to delete. Should I use null instead?
Yes. Keep `undefined` as a special value for signaling when other languages might throw an exception instead.
`null` is generally better, except on some IE DOM interfaces where setting something to `null` can give you an error. Often in this case setting to the empty string tends to work. | Best described [here](http://saladwithsteve.com/2008/02/javascript-undefined-vs-null.html), but in summary:
undefined is the lack of a type and value, and null is the lack of a value.
Furthermore, if you're doing simple '==' comparisons, you're right, they come out the same. But try ===, which compares both type and value, and you'll notice the difference. | Why is there a `null` value in JavaScript? | [
"",
"javascript",
"null",
"language-features",
"undefined",
""
] |
I have an input element, and I want to bind both the change and keypress events to it, but the event-handling code is the same for both events. Is there any short way of doing this instead of writing the same code twice? Well, I could write a method, but I wanted to see if there is an easier way of doing this.
```
$("#inpt").change(function(){
some code
});
$("#inpt").keypress(function(){
same code
});
```
Is there any way I can bind both events?
```
$("#inpt").on( "change keypress", function () {
code
});
``` | You can save the function and bind it to both:
```
var fn = function(){ /*some code*/ };
$("#inpt").change(fn).keypress(fn);
``` | attaching same event handling code to multiple events in jquery | [
"",
"javascript",
"jquery",
""
] |
Specifically (taking a deep breath): How would you go about finding all the XML namespaces within a C#/.NET `XmlDocument` for which there are no applicable schemas in the instance's `XmlSchemaSet` (`Schemas` property)?
My XPath magic is lacking the sophistication to do something like this, but I will keep looking in the meantime ... | You need to get a list of all the distinct namespaces in the document, and then compare that with the distinct namespaces in the schema set.
Namespace declaration names, however, are typically not exposed in the XPath document model. But given a node you can get its namespace:
```
// Match every element and attribute in the document
var allNodes = xmlDoc.SelectNodes("//*|//@*");
var found = new Dictionary<String, bool>(); // Want a Set<string> really
foreach (XmlNode n in allNodes) {
found[n.NamespaceURI] = true;
}
var allNamespaces = found.Keys.OrderBy(s => s);
``` | The easiest way I've ever found to retrieve all of the namespaces from a given XmlDocument is to XPath through all the nodes finding unique Prefix and NamespaceURI values.
I've got a helper routine that I use to return these unique values in an XmlNamespaceManager to make life simpler when I'm dealing with complex Xml documents.
The code is as follows:
```
private static XmlNamespaceManager PrepopulateNamespaces( XmlDocument document )
{
XmlNamespaceManager result = new XmlNamespaceManager( document.NameTable );
    var namespaces = ( from XmlNode n in document.SelectNodes( "//*|//@*" )
where n.NamespaceURI != string.Empty
select new
{
Prefix = n.Prefix,
Namespace = n.NamespaceURI
} ).Distinct();
foreach ( var item in namespaces )
result.AddNamespace( item.Prefix, item.Namespace );
return result;
}
``` | How can one find unknown XML namespaces in a document? | [
"",
"c#",
".net",
"xsd",
"schema",
"xml-namespaces",
""
] |
Is there a general way to implement part of an application with JavaScript and supplying a persistent connection to a server? I need the server to be able to push data to the client, regardless of the client being behind a firewall. Thanks in advance | See [Comet](http://en.wikipedia.org/wiki/Comet_%28programming%29) - it's like ajax, but it holds a connection open so the server can push information to the client.
Note that compliant browsers will only hold 2 connections (note: [most modern browsers no longer comply](https://stackoverflow.com/questions/5751515/official-references-for-default-values-of-concurrent-http-1-1-connections-per-se)) to a particular domain (by default), so you might want to split your domains (e.g. www.yourdomain.com and comet.yourdomain.com) so that you don't drastically slow down the loading of your pages. Or you could just make sure you don't open the comet connection until everything else is loaded. It's just something to be careful of. | You should look into Comet:
<http://ajaxian.com/archives/comet-a-new-approach-to-ajax-applications> | Persistent connection with client | [
"",
"javascript",
"connection",
"persistence",
""
] |
In attempting to use scipy's quad method to integrate a gaussian (let's say there's a gaussian method named gauss), I was having problems passing needed parameters to gauss and leaving quad to do the integration over the correct variable. Does anyone have a good example of how to use quad w/ a multidimensional function?
But this led me to a grander question about the best way to integrate a gaussian in general. I didn't find a Gaussian integral routine in scipy (to my surprise). My plan was to write a simple gaussian function and pass it to quad (or maybe now a fixed-width integrator). What would you do?
Edit: Fixed-width meaning something like trapz that uses a fixed dx to calculate areas under a curve.
What I've come to so far is a method `make_gauss` that returns a lambda function that can then go into quad. This way I can make a normal function with the average and variance I need before integrating.
```
import numpy
from numpy import inf
from scipy.integrate import quad

def make_gauss(N, sigma, mu):
return (lambda x: N/(sigma * (2*numpy.pi)**.5) *
numpy.e ** (-(x-mu)**2/(2 * sigma**2)))
quad(make_gauss(N=10, sigma=2, mu=0), -inf, inf)
```
When I tried passing a general gaussian function (that needs to be called with x, N, mu, and sigma) and filling in some of the values using quad like
```
quad(gen_gauss, -inf, inf, (10,2,0))
```
the parameters 10, 2, and 0 did NOT necessarily match N=10, sigma=2, mu=0, which prompted the more extended definition.
The erf(z) in scipy.special would require me to define exactly what t is initially, but it's nice to know it is there.
The one-variable Gaussian distribution has two parameters, `sigma` and `mu`, and is a function of a single variable we'll denote `x`. You also appear to be carrying around a normalization parameter `n` (which is useful in a couple of applications). Normalization parameters are usually *not* included in calculations, since you can just tack them back on at the end (remember, integration is a linear operator: `int(n*f(x), x) = n*int(f(x), x)` ). But we can carry it around if you like; the notation I like for a normal distribution is then
`N(x | mu, sigma, n) := (n/(sigma*sqrt(2*pi))) * exp((-(x-mu)^2)/(2*sigma^2))`
(read that as "the normal distribution of `x` given `mu`, `sigma`, and `n` is given by...") So far, so good; this matches the function you've got. Notice that the only *true variable* here is `x`: the other three parameters are *fixed* for any particular Gaussian.
Now for a mathematical fact: it is provably true that all Gaussian curves have the same shape, they're just shifted around a little bit. So we can work with `N(x|0,1,1)`, called the "standard normal distribution", and just translate our results back to the general Gaussian curve. So if you have the integral of `N(x|0,1,1)`, you can trivially calculate the integral of any Gaussian. This integral appears so frequently that it has a special name: the *error function* `erf`. Because of some old conventions, it's not *exactly* `erf`; there are a couple additive and multiplicative factors also being carried around.
If `Phi(z) = integral(N(x|0,1,1), -inf, z)`; that is, `Phi(z)` is the integral of the standard normal distribution from minus infinity up to `z`, then it's true by the definition of the error function that
`Phi(z) = 0.5 + 0.5 * erf(z / sqrt(2))`.
Likewise, if `Phi(z | mu, sigma, n) = integral( N(x|mu, sigma, n), -inf, z)`; that is, `Phi(z | mu, sigma, n)` is the integral of the normal distribution given parameters `mu`, `sigma`, and `n` from minus infinity up to `z`, then it's true by the definition of the error function that
`Phi(z | mu, sigma, n) = (n/2) * (1 + erf((z - mu) / (sigma * sqrt(2))))`.
Take a look at [the Wikipedia article on the normal CDF](http://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function) if you want more detail or a proof of this fact.
Okay, that should be enough background explanation. Back to your (edited) post. You say "The erf(z) in scipy.special would require me to define exactly what t is initially". I have no idea what you mean by this; where does `t` (time?) enter into this at all? Hopefully the explanation above has demystified the error function a bit and it's clearer now as to why the error function is the right function for the job.
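To make that concrete, here is a sketch that evaluates Gaussian integrals through the error function, using the standard library's `math.erf` (available since Python 3.2; `scipy.special.erf` behaves the same way):

```python
import math

def gauss_integral(a, b, mu=0.0, sigma=1.0, n=1.0):
    """Integral of N(x | mu, sigma, n) from a to b, via the error function."""
    def phi(z):
        # CDF of the normal distribution, exactly as derived above.
        return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))
    return n * (phi(b) - phi(a))

# Integrating over (effectively) the whole real line recovers the
# normalization factor n:
print(gauss_integral(-1e9, 1e9, mu=0.0, sigma=2.0, n=10.0))  # prints 10.0
```

No numerical quadrature is involved, so this is both faster and more accurate than calling `quad()` on the curve itself.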
Your Python code is OK, but I would prefer a closure over a lambda:
```
import math

def make_gauss(N, sigma, mu):
k = N / (sigma * math.sqrt(2*math.pi))
s = -1.0 / (2 * sigma * sigma)
def f(x):
return k * math.exp(s * (x - mu)*(x - mu))
return f
```
Using a closure enables precomputation of constants `k` and `s`, so the returned function will need to do less work each time it's called (which can be important if you're integrating it, which means it'll be called many times). Also, I have avoided any use of the exponentiation operator `**`, which is slower than just writing the squaring out, and hoisted the divide out of the inner loop and replaced it with a multiply. I haven't looked at all at their implementation in Python, but from my last time tuning an inner loop for pure speed using raw x87 assembly, I seem to remember that adds, subtracts, or multiplies take about 4 CPU cycles each, divides about 36, and exponentiation about 200. That was a couple years ago, so take those numbers with a grain of salt; still, it illustrates their relative complexity. As well, calculating `exp(x)` the brute-force way is a very bad idea; there are tricks you can take when writing a good implementation of `exp(x)` that make it significantly faster and more accurate than a general `a**b` style exponentiation.
I've never used the numpy version of the constants pi and e; I've always stuck with the plain old math module's versions. I don't know why you might prefer either one.
I'm not sure what you're going for with the `quad()` call. `quad(gen_gauss, -inf, inf, (10,2,0))` ought to integrate a renormalized Gaussian from minus infinity to plus infinity, and should always spit out 10 (your normalization factor), since the Gaussian integrates to 1 over the real line. Any answer far from 10 (I wouldn't expect *exactly* 10 since `quad()` is only an approximation, after all) means something is screwed up somewhere... hard to say what's screwed up without knowing the actual return value and possibly the inner workings of `quad()`.
Hopefully that has demystified some of the confusion, and explained why the error function is the right answer to your problem, as well as how to do it all yourself if you're curious. If any of my explanation wasn't clear, I suggest taking a quick look at Wikipedia first; if you still have questions, don't hesitate to ask. | scipy ships with the "error function", aka Gaussian integral:
```
import scipy.special
help(scipy.special.erf)
``` | Best way to write a Python function that integrates a gaussian? | [
"",
"python",
"scipy",
"gaussian",
"integral",
""
] |
I have two tables with the same columns
```
tbl_source (ID, Title)
tbl_dest (ID, Title)
```
I want to update tbl\_dest titles from the tbl\_source where the ids in dest and source match. However, I don't want to update the dest title if the source title is null (or blank).
I've got this:
```
UPDATE tbl_dest
SET tbl_dest.Title =
(SELECT title
FROM tbl_source
WHERE tbl_dest.id = tbl_source.ID and tbl_source.title is not null)
```
But it keeps inserting the nulls.
How would I construct such a query?
I am using SQL server 2005.
Thanks. | Use an inner join...
```
Update tbl_dest
Set tbl_dest.Title = tbl_source.Title
From tbl_dest inner join tbl_source on tbl_dest.ID = tbl_source.ID
Where tbl_source.Title is not null and tbl_source.Title <> ''
``` | It's setting the value to null because the subquery is returning null, and you're not filtering records in your update clause.
Try something like this instead:
```
UPDATE tbl_dest
SET tbl_dest.Title =
(SELECT title
FROM tbl_source
WHERE tbl_source.id = tbl_dest.id)
WHERE EXISTS
(SELECT 1
FROM tbl_source
WHERE tbl_source.id = tbl_dest.id
AND tbl_source.title IS NOT NULL)
``` | Updating a DB table excluding NULLs | [
"",
"sql",
"sql-server-2005",
"sql-update",
""
] |
I'm currently developing a web application in ASP.Net with SQL Server and I would like to have some sort of public API so that my users can get their data and manipulate it at their own will.
I've never done something like this; can you recommend some guidelines for me to follow? | You are going to want to look into web services. [Here is a good article that shows you how to create RESTful WCF services.](http://www.developer.com/net/article.php/3695436) Using RESTful services will allow your users to invoke API methods in your service with nice clean URLs. | WCF is the way to go - you can use SOAP / REST services. Since you are planning a public API, using REST is the right way to go - the following links from MSDN (starter kit and lab) will get you started:
<http://msdn.microsoft.com/en-us/netframework/cc950529.aspx>
<http://code.msdn.microsoft.com/wcfrestlabs> | How to design a public API in ASP.Net? | [
"",
"c#",
"asp.net",
"api",
""
] |
Using standard MySQL functions, is there a way to write a query that will return a list of days between two dates?
e.g. given 2009-01-01 and 2009-01-13 it would return a one-column table with the values:
```
2009-01-01
2009-01-02
2009-01-03
2009-01-04
2009-01-05
2009-01-06
2009-01-07
2009-01-08
2009-01-09
2009-01-10
2009-01-11
2009-01-12
2009-01-13
```
Edit: It appears I have not been clear. I want to GENERATE this list. I have values stored in the database (by datetime) but want them to be aggregated on a left outer join to a list of dates as above (I am expecting null from the right side of some of this join for some days and will handle this). | I would use this stored procedure to generate the intervals you need into the temp table named **time\_intervals**, then JOIN and aggregate your data table with the temp **time\_intervals** table.
The procedure can generate intervals of all the different types you see specified in it:
```
call make_intervals('2009-01-01 00:00:00','2009-01-10 00:00:00',1,'DAY')
.
select * from time_intervals
.
interval_start interval_end
------------------- -------------------
2009-01-01 00:00:00 2009-01-01 23:59:59
2009-01-02 00:00:00 2009-01-02 23:59:59
2009-01-03 00:00:00 2009-01-03 23:59:59
2009-01-04 00:00:00 2009-01-04 23:59:59
2009-01-05 00:00:00 2009-01-05 23:59:59
2009-01-06 00:00:00 2009-01-06 23:59:59
2009-01-07 00:00:00 2009-01-07 23:59:59
2009-01-08 00:00:00 2009-01-08 23:59:59
2009-01-09 00:00:00 2009-01-09 23:59:59
.
call make_intervals('2009-01-01 00:00:00','2009-01-01 02:00:00',10,'MINUTE')
.
select * from time_intervals
.
interval_start interval_end
------------------- -------------------
2009-01-01 00:00:00 2009-01-01 00:09:59
2009-01-01 00:10:00 2009-01-01 00:19:59
2009-01-01 00:20:00 2009-01-01 00:29:59
2009-01-01 00:30:00 2009-01-01 00:39:59
2009-01-01 00:40:00 2009-01-01 00:49:59
2009-01-01 00:50:00 2009-01-01 00:59:59
2009-01-01 01:00:00 2009-01-01 01:09:59
2009-01-01 01:10:00 2009-01-01 01:19:59
2009-01-01 01:20:00 2009-01-01 01:29:59
2009-01-01 01:30:00 2009-01-01 01:39:59
2009-01-01 01:40:00 2009-01-01 01:49:59
2009-01-01 01:50:00 2009-01-01 01:59:59
.
I specified an interval_start and interval_end so you can aggregate the
data timestamps with a "between interval_start and interval_end" type of JOIN.
.
Code for the proc:
.
-- drop procedure make_intervals
.
CREATE PROCEDURE make_intervals(startdate timestamp, enddate timestamp, intval integer, unitval varchar(10))
BEGIN
-- *************************************************************************
-- Procedure: make_intervals()
-- Author: Ron Savage
-- Date: 02/03/2009
--
-- Description:
-- This procedure creates a temporary table named time_intervals with the
-- interval_start and interval_end fields specifed from the startdate and
-- enddate arguments, at intervals of intval (unitval) size.
-- *************************************************************************
declare thisDate timestamp;
declare nextDate timestamp;
set thisDate = startdate;
-- *************************************************************************
-- Drop / create the temp table
-- *************************************************************************
drop temporary table if exists time_intervals;
create temporary table if not exists time_intervals
(
interval_start timestamp,
interval_end timestamp
);
-- *************************************************************************
-- Loop through the startdate adding each intval interval until enddate
-- *************************************************************************
repeat
select
case unitval
when 'MICROSECOND' then timestampadd(MICROSECOND, intval, thisDate)
when 'SECOND' then timestampadd(SECOND, intval, thisDate)
when 'MINUTE' then timestampadd(MINUTE, intval, thisDate)
when 'HOUR' then timestampadd(HOUR, intval, thisDate)
when 'DAY' then timestampadd(DAY, intval, thisDate)
when 'WEEK' then timestampadd(WEEK, intval, thisDate)
when 'MONTH' then timestampadd(MONTH, intval, thisDate)
when 'QUARTER' then timestampadd(QUARTER, intval, thisDate)
when 'YEAR' then timestampadd(YEAR, intval, thisDate)
end into nextDate;
insert into time_intervals select thisDate, timestampadd(MICROSECOND, -1, nextDate);
set thisDate = nextDate;
until thisDate >= enddate
end repeat;
END;
```
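Once the intervals exist, the "between interval_start and interval_end" join described above looks something like this sketch (`my_data` and its `created_at` column are hypothetical stand-ins for your own table):

```sql
CALL make_intervals('2009-01-01 00:00:00', '2009-01-14 00:00:00', 1, 'DAY');

SELECT ti.interval_start AS day,
       COUNT(d.id)       AS rows_that_day  -- days with no match count as 0
FROM time_intervals ti
LEFT OUTER JOIN my_data d
       ON d.created_at BETWEEN ti.interval_start AND ti.interval_end
GROUP BY ti.interval_start;
```

`COUNT(d.id)` ignores the NULLs produced by the outer join, so empty days come back as zero rather than disappearing.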
Similar example data scenario at the bottom of [this post](https://stackoverflow.com/questions/373490/insert-dates-in-the-return-from-a-query-where-there-is-none/373734#373734), where I built a similar function for SQL Server. | For MSSQL you can use this. It is VERY quick.
You can wrap this up in a table-valued function or stored proc and pass in the start and end dates as variables.
```
DECLARE @startDate DATETIME
DECLARE @endDate DATETIME
SET @startDate = '2011-01-01'
SET @endDate = '2011-01-31';
WITH dates(Date) AS
(
SELECT @startdate as Date
UNION ALL
SELECT DATEADD(d,1,[Date])
FROM dates
WHERE DATE < @enddate
)
SELECT Date
FROM dates
OPTION (MAXRECURSION 0)
GO
```
**Edit 2021/01 (Dr. V):**
I liked this solution and made it work for mySQL V8. Here is the code, wrapping it into a procedure:
```
DELIMITER //
CREATE PROCEDURE dates_between (IN from_date DATETIME,
IN to_date DATETIME) BEGIN
WITH RECURSIVE dates(Date) AS
(
SELECT from_date as Date
UNION ALL
SELECT DATE_ADD(Date, INTERVAL 1 day) FROM dates WHERE Date < to_date
)
SELECT DATE(Date) FROM dates;
END//
DELIMITER ;
``` | Get a list of dates between two dates | [
"",
"mysql",
"sql",
"date",
"gaps-and-islands",
""
] |
I have a SQL query that is supposed to pull out a record and concat each to a string, then output that string. The important part of the query is below.
```
DECLARE @counter int;
SET @counter = 1;
DECLARE @tempID varchar(50);
SET @tempID = '';
DECLARE @tempCat varchar(255);
SET @tempCat = '';
DECLARE @tempCatString varchar(5000);
SET @tempCatString = '';
WHILE @counter <= @tempCount
BEGIN
SET @tempID = (
SELECT [Val]
FROM #vals
WHERE [ID] = @counter);
SET @tempCat = (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID);
print @tempCat;
SET @tempCatString = @tempCatString + '<br/>' + @tempCat;
SET @counter = @counter + 1;
END
```
When the script runs, `@tempCatString` outputs as null while `@tempCat` always outputs correctly. Is there some reason that concatenation won't work inside a While loop? That seems wrong, since incrementing `@counter` works perfectly. So is there something else I'm missing? | Looks like it should work, but for some reason it seems to think @tempCatString is null, which is why you are always getting a null value: null concatenated to anything else is still null. Suggest you try with `COALESCE()` on each of the variables to set them to " " if they are null. | this would be more efficient....
```
select @tempCatString = @tempCatString + Coalesce(Description,'') + '<br/>' from Categories...
select @tempCatString
```
also look at concat\_null\_yields\_null as an option to fix your concatenation issue, although I would avoid that route | T-SQL While Loop and concatenation | [
"",
"sql",
"sql-server",
"while-loop",
"string-concatenation",
""
] |
Is there a Windows standard way to do things such as "start fan", "decrease speed" or the like, from C/C++?
I have a suspicion it might be ACPI, but I am a frail mortal and cannot read that kind of documentation.
Edit: e.g. Windows 7 lets you select in your power plan options such as "passive cooling" (only when things get hot?) vs. "active cooling" (keep the CPU proactively cool?). It seems the OS does have a way to control the fan generically. | I am at the moment working on a project that, among other things, controls the computer fans. Basically, the fans are controlled by the superIO chip of your computer. We access the chip directly using port-mapped IO, and from there we can get to the logical fan device. Using port-mapped IO requires the code to run in kernel mode, but windows does not supply any drivers for generic port IO (with good reason, since it is a very powerful tool), so we wrote our own driver, and used that.
If you want to go down this route, you basically need knowledge in two areas: driver development and how to access and interpret superIO chip information. When we started the project, we didn't know anything in either of these areas, so it has been learning by browsing, reading and finally doing. To gain the knowledge, we have been especially helped by looking at these links:
1. The [WDK](https://learn.microsoft.com/en-us/windows-hardware/drivers/), which is the Windows Driver Kit. You need this to compile any driver you write for Windows. With it comes a whole lot of source code for example drivers, including a driver for general port-mapped IO, called portio.
2. [WinIO](http://www.internals.com/) has source code for a driver in C, a dll in C that programmatically installs and loads that driver, and some C# code for a GUI, that loads the dll and reads/writes to the ports. The driver is very similar to the one in portio.
3. [lm-sensors](http://www.lm-sensors.org/) is a linux project that, among other things, detects your superIO chip. /prog/detect/sensors-detect is the perl program that does the detecting, and we have spent some time going through the code to see how to interface with a superIO chip.
4. When we were going through the lm-sensors code, it was very nice to have tools like [RapidDriver](http://www.entechtaiwan.com/dev/rapid/index.shtm) and [RW-everything](http://jacky5488.myweb.hinet.net/), since they allowed us to simulate a run of sensors-detect. The latter is the more powerful, and is very helpful in visualising the IO space, while the former provides easier access to some operations which map better to the ones in sensors-detect (read/write byte to port)
5. Finally, you need to find the datasheet of your superIO chip. From the examples that I have seen, the environment controllers of each chip provide similar functionality (r/w fan speed, read temperature, read chip voltage), but vary in what registers you have to write to in order to get to this functionality. [This](http://www.alldatasheet.com/) place has had all the datasheets we have needed so far. | If you want something real quick to just lower fans to a level where you know things won't overheat, there's the SpeedFan program to do so. Figuring out how to configure it in the early versions to automatically lower fans to 50% on computer startup was so painful that my first approach was to simply byte-patch it to start the only superIO-managed fan I had at a lower speed. The newer versions are still a bit tough but it's doable - there's a graphical slider system that looks like an audio equalizer, except that the x axis is temperature and y is fan speed. You drag them down one by one. After you figure out how to get manual control for the fan you want, this is the next step.
There's a project to monitor hardware (like fans) with C#:
<http://code.google.com/p/open-hardware-monitor/>
I haven't extensively looked at it, but the source code and use of WinRing0.sys at least gives the impression that if you know what fan controller you have and have the datasheet, it should be modifiable to also set values instead of just getting them. I don't know what tool is suited (besides a kernel debugger) to look at what SpeedFan does, if you preferred to snoop around and imitate SpeedFan instead of looking at the datasheets and trying things out. | Programmatically access CPU fan on a laptop? (Windows) | [
"",
"c++",
"windows",
"acpi",
""
] |
I'm trying to implement some basic linear algebra operations and one of these operations is the inversion of a triangular (upper and/or lower) matrix. Is there an easy and stable algorithm to do that?
Thank you. | Yes, use [back substitution](http://en.wikipedia.org/wiki/Triangular_matrix#Forward_and_back_substitution). A standard algorithm to invert a matrix is to find its LU decomposition (decomposition into a lower-triangular and an upper-triangular matrix), use back substitution on the triangular pieces, and then combine the results to obtain the inverse of the original matrix. | Don't invert it if you can. It's one of the basic commandments of numerical linear algebra.
It is much faster and numerically stabler to keep the matrix L itself in memory and compute
```
inv(L)b
```
with back-substitution whenever you need to do something else with inv(L).
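A minimal pure-Python sketch of that forward-substitution solve, assuming a lower-triangular matrix stored as a list of rows (the function name is illustrative, not from any library):

```python
def solve_lower_triangular(L, b):
    """Solve L x = b by forward substitution, without ever forming inv(L)."""
    n = len(L)
    x = [0.0] * n
    for i in range(n):
        # everything above the diagonal is zero, so only x[0..i-1] contribute
        s = sum(L[i][k] * x[k] for k in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x
```

For an upper-triangular matrix the same loop just runs backwards (back substitution), and inverting L column by column amounts to calling this once per basis vector, exactly as the systems below suggest.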
Note that the customary algorithm for inverting it requires solving the systems
```
inv(L)[1 0 0 ...],
inv(L)[0 1 0 ....],
inv(L)[0 0 1 ....]
```
and so on, so you see it is much easier not to invert it at all. | Is there around a straightforward way to invert a triangular (upper or lower) matrix? | [
"",
"c++",
"math",
"matrix",
"linear-algebra",
"triangular",
""
] |
I'm working on a WCF service that will be communicating over net.tcp to n instances of a client app (that is being developed by another programmer in my office).
At the moment I am using net.tcp without any security as I felt setting this up at this stage wasn't necessary, at least not until we are closer to rolling out.
During the development of a WCF application, is there any harm in using a standard binding (net.tcp in my case) without security, then once the business logic has been completed, implement all the security requirements? Are there any things I need to be aware of that may not function after the implementation of security? | While your overall design should consider security from the very beginning, I don't think it's a good idea to couple your components to any particular security strategy. You may very well want to use some of your components in a non-secure manner or across a different protocol which offers different security options.
So my answer is yes and no. Yes you need to think about it from the beginning, but no you should not couple your components to your security needs.
That said, since you know that you'll be using net.tcp you should be aware that transport security is turned on by default for this binding.
For tons more information, see Juval Lowy's fantastic [Programming WCF Services](https://rads.stackoverflow.com/amzn/click/com/0596521308), chapter 10. Lowy, in his ServiceModelEx library (discussed at length in the book) provides a really nice framework that you can plug in after you've created your components. Even if it's not exactly what you're looking for you can customize it to suit your needs. | Security should be thought about from the start, and not added on at the end.
Develop a plan for your security, and implement it as you go, rather than at the end.
Reference: Microsoft .NET: Architecting Applications for the Enterprise
<http://www.amazon.com/Microsoft%C2%AE-NET-Architecting-Applications-PRO-Developer/dp/073562609X> | Logic first, WCF security later? | [
"",
"c#",
"xml",
"wcf",
"security",
"wcf-binding",
""
] |
When I'm inside a destructor is it possible that some other thread will start executing object's member function? How to deal with this situation? | C++ has no intrinsic protection against using an object after it's been deleting - forget about race conditions - another thread could use your object after it's been completely deleted.
Either:
1. Make sure only one place in the code owns the object, and it's responsible for deleting it when no-one is using the object.
2. Make the object reference counted - by adding explicit reference counting code, or finding an appropriate base class that implements reference counting. | You shouldn't be destroying an object unless you are sure that nothing else will be trying to use it - ideally nothing else has a reference to it. You will need to look more closely at when you call delete. | Destructor vs member function race | [
"",
"c++",
"multithreading",
"destructor",
""
] |
For instance in C# or Java, you always have a main() method used to get your program running. What do you name the class that it is in? Some ideas I would use would just be "Program" or the name of the program itself. What would be considered conventional in this case? | Visual Studio creates "Program.cs" these days, which seems pretty reasonable. Another self-documenting name I rather like is "EntryPoint". | I use either Main or Main | In an OO language, what do you name your class that contains the Main method? | [
"",
"c#",
"oop",
"naming-conventions",
""
] |
Does anyone have any experience that indicates what kind of performance hit a developer could expect by choosing to use an ORM (in Django, RoR, SQLAlchemy, etc) over SQL and hand-designed databases? I imagine there are complicating issues, including whether specifying a database within the constraints of an ORM increases or decreases the chances of creating an efficient database structure (based on the developer's level of experience), and the question of how well the developer constructs either the SQL or ORM-based queries (again based on his/her experience). Any information regarding these or intrinsic performance issues would be really interesting to me. | My advice is not to worry about this until you need to - don't optimise prematurely. An ORM can provide many benefits to development speed, code readability and can remove a lot of code repetition. I would recommend using one if it will make your application easier to develop.
As you progress through the development, use benchmarks and profiling to determine the bottlenecks in the code and if needed you can bypass the ORM and use manual queries where they are required. Normally you will be able to improve the speed of the ORM using caching and database indexes (amongst other things) and then you can decide where manual queries are required. For the most part, the ORM performance will probably be acceptable and the benefits of using it will far outweigh the performance cost. | Performance has always been an afterthought in most DAL Layer development / architecture. I think it's about time we start questioning the performance of these ORM tools, for the so-called ease of development they promise:
The 2 biggest areas of performance issues in ORMs are:
1. Inability to write Optimum SQL. You have to use an Object Query Language which is interpreted into SQL by the framework. Mostly it is good SQL, but often enough it is not the most efficient SQL.
2. Reflection. Most ORM frameworks use Reflection to populate objects with data from the database. Reflection operations are costly, and as the load and amount of data increase, the performance degradation becomes obvious.
Other performance issues that arise are because of inefficient Database Design or Entity Model design due to the tight coupling of Entity objects to Tables. | ORM performance cost | [
"",
"sql",
"database",
"performance",
"orm",
"frameworks",
""
] |
Is there any material about how to use `#include` correctly?
I didn't find any C/C++ text book that explains this usage in detail.
In formal projects, I always get confused when dealing with it. | * Check Large-Scale C++ Software Design by John Lakos if you have the money.
* Google C++ coding guidelines also have some OK stuff.
* Check Herb Sutter's materials online (blog) as well.
Basically you need to understand where include headers are NOT required, eg. forward declaration. Also try to make sure that include files compile one by one, and only put #includes in h files when it's a must (eg. templates). | The big one that always tripped me up was this:
This searches in the header path:
```
#include <stdio.h>
```
This searches in your local directory:
```
#include "myfile.h"
```
Second thing you should do with EVERY header is this:
myfilename.h:
```
#ifndef MYFILENAME_H
#define MYFILENAME_H
//put code here
#endif
```
This pattern means that you cannot fall over on redefining the headers in your compilation (Cheers to orsogufo for pointing out to me this is called an "include guard"). Do some reading on how the C compiler actually compiles the files (before linking) because that will make the world of #define and #include make a whole lot of sense to you; the C preprocessor, when it comes to parsing text, isn't very intelligent. (The C compiler itself, however, is another matter.) | How to use #include directive correctly? | [
"",
"c++",
"c",
""
] |
I need to make multiple divs move from right to left across the screen and stop when they get to the edge. I have been playing with jQuery lately, and it seems like what I want can be done using that. Does anyone have or know where I can find an example of this? | You will want to check out the jQuery animate() feature. The standard way of doing this is positioning an element absolutely and then animating the "left" or "right" CSS property. An equally popular way is to increase/decrease the left or right margin.
Now, having said this, you need to be aware of severe performance loss for any type of animation that lasts longer than a second or two. Javascript was simply not meant to handle long, sustained, slow animations. This has to do with the way the DOM element is redrawn and recalculated for each "frame" of the animation. If you're doing a page-width animation that lasts more than a couple seconds, expect to see your processor spike by 50% or more. If you're on IE6, prepare to see your computer spontaneously combust into a flaming ball of browser incompetence.
To read up on this, check out [this thread](https://stackoverflow.com/questions/459302/cross-browser-jquery-transition-animation/459547#459547) (from my very first Stackoverflow post no less)!
Here's a link to the jQuery docs for the animate() feature: <http://docs.jquery.com/Effects/animate> | In jQuery 1.2 and newer you no longer have to position the element absolutely; you can use normal relative positioning and use += or -= to add to or subtract from properties, e.g.
```
$("#startAnimation").click(function(){
$(".toBeAnimated").animate({
    marginLeft: "+=250px"
}, 1000 );
});
```
And to echo the advice of the guy who answered first: Javascript is not performant. Don't overuse animations, or expect things that run nice and fast on your high performance PC on Chrome to look good on a bog-standard PC running IE. Test it, and make sure it degrades well! | How can I use jQuery to move a div across the screen | [
"",
"javascript",
"jquery",
""
] |
Does anyone know why std::queue, std::stack, and std::priority\_queue don't provide a `clear()` member function? I have to fake one like this:
```
std::queue<int> q;
// time passes...
q = std::queue<int>(); // equivalent to clear()
```
IIRC, `clear()` is provided by everything that could serve as the underlying container. Is there a good reason to not have the container adaptors provide it? | Well, I think this is because `clear` was not considered a valid operation on a queue, a priority\_queue or a stack (by the way, deque is not an adaptor but a container).
> The only reason to use the container
> adaptor queue instead of the container
> deque is to make it clear that you are
> performing only queue operations, and
> no other operations. [(from the sgi page on queue)](http://www.sgi.com/tech/stl/queue.html)
So when using a queue, all you can do is push/pop elements; clearing the queue can be seen as a violation of the FIFO concept. Consequently, if you need to clear your queue, maybe it's not really a queue and you should better use a deque.
However, this conception of things is a little narrow-minded, and I think clearing the queue as you do is fair enough. | Deque has clear(). See, e.g., <http://www.cplusplus.com/reference/stl/deque/clear.html>.
However, queue does not. But why would you choose queue over deque, anyway?
> The only reason to use the container
> adaptor queue instead of the container
> deque is to make it clear that you are
> performing only queue operations, and
> no other operations.
(<http://www.sgi.com/tech/stl/queue.html>)
So I guess clear() is not a queue operation, then. | Why don't the standard C++ container adaptors provide a clear function? | [
"",
"c++",
"stl",
"standards",
""
] |
I'm trying to compile a simple program, with
```
#include <gtkmm.h>
```
The path to `gtkmm.h` is `/usr/include/gtkmm-2.4/gtkmm.h`. g++ doesn't see this file unless I specifically tell it `-I /usr/include/gtkmm-2.4`.
My question is, how can I have g++ automatically look recursively through all the directories in `/usr/include` for all the header files contained therein, and why is this not the default action? | In this case, the correct thing to do is to use `pkg-config` in your `Makefile` or buildscripts:
```
# Makefile
ifeq ($(shell pkg-config --modversion gtkmm-2.4),)
$(error Package gtkmm-2.4 needed to compile)
endif
CXXFLAGS += `pkg-config --cflags gtkmm-2.4`
LDLIBS += `pkg-config --libs gtkmm-2.4`
BINS = program
program_OBJS = a.o b.o c.o
all: $(BINS)
program: $(program_OBJS)
$(CXX) $(LDFLAGS) $^ $(LOADLIBES) $(LDLIBS) -o $@
# this part is actually optional, since it's covered by gmake's implicit rules
%.o: %.cc
$(CXX) -c $(CPPFLAGS) $(CXXFLAGS) $< -o $@
```
If you're missing `gtkmm-2.4`, this will produce
```
$ make
Package gtkmm-2.4 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gtkmm-2.4.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gtkmm-2.4' found
Makefile:3: *** Package gtkmm-2.4 needed to compile. Stop.
```
Otherwise, you'll get all the appropriate paths and libraries sucked in for you, without specifying them all by hand. (Check the output of `pkg-config --cflags --libs gtkmm-2.4`: that's far more than you want to type by hand, ever.) | I guess you are not using a makefile? The only thing that could be annoying is having to type the long -I option *each time* you compile your program. A makefile makes it a lot easier.
For example, you could modify the *hello world* makefile from [wikipedia](http://en.wikipedia.org/wiki/Make_(software)) to something like the following:
```
INC=-I/usr/include/gtkmm-2.4/
helloworld: helloworld.o
g++ -o $@ $<
helloworld.o: helloworld.c
g++ $(INC) -c -o $@ $<
.PHONY: clean
clean:
rm -f helloworld helloworld.o
``` | g++ include all /usr/include recursively | [
"",
"c++",
"g++",
""
] |
I need to check a double value for infinity in a C++ app on Linux. On most platforms this works by comparing with `std::numeric_limits<double>::infinity()`. However, on some old platforms (RedHat 9 for example, with gcc 3.2.2) this is not available, and `std::numeric_limits<double>::has_infinity` is false there.
What workaround would you recommend for those platforms? | Ok, I have now resorted to using the `INFINITY` and `NAN` macros on that particular machine - seems to work fine. They come from `math.h`. | If you're using IEEE 754 arithmetic, as you almost certainly are, infinities are well defined values and have defined outcomes for all arithmetic operations. In particular,
```
infinity - infinity = NaN
```
Positive and negative infinity and `NaN` values are the only values for which this is true. NaNs are special "not-a-number" values used to indicate domain errors of functions, e.g. `sqrt(-1)`. Also:
```
NaN != NaN
```
`NaN`s are the only values for which this is true.
Therefore:
```
bool is_infinite(double x) {
double y = x - x;
return x == x && y != y;
}
```
will return true if and only if `x` is either positive or negative infinity. Add a test for `x > 0` if you only want to check for positive infinity. | What's the recommended workaround if numeric_limits<double>::has_infinity is false? | [
"",
"c++",
"linux",
"double",
"limit",
"infinity",
""
] |
I am using sqlite with python. When I insert into table A I need to feed it an ID from table B. So what I wanted to do is insert default data into B, grab the id (which is auto increment) and use it in table A. What's the best way to receive the key from the table I just inserted into? | As Christian said, `sqlite3_last_insert_rowid()` is what you want... but that's the C level API, and you're using the Python DB-API bindings for SQLite.
It looks like the cursor method `lastrowid` will do what you want [(search for 'lastrowid' in the documentation for more information)](http://docs.python.org/library/sqlite3.html). Insert your row with `cursor.execute( ... )`, then do something like `lastid = cursor.lastrowid` to check the last ID inserted.
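A minimal sketch of that `lastrowid` pattern for the question's table-A/table-B setup (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, note TEXT)")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, b_id INTEGER, note TEXT)")

cur = conn.cursor()
# insert the default row into B, then grab its auto-increment key
cur.execute("INSERT INTO b (note) VALUES (?)", ("default",))
b_id = cur.lastrowid
# feed that key to the insert into A
cur.execute("INSERT INTO a (b_id, note) VALUES (?, ?)", (b_id, "child"))
conn.commit()
```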
That you say you need "an" ID worries me, though... it doesn't matter *which* ID you have? Unless you are using the data just inserted into B for something, in which case you need *that* row ID, your database structure is seriously screwed up if you just need any old row ID for table B. | Check out [sqlite3\_last\_insert\_rowid()](http://www.sqlite.org/c3ref/last_insert_rowid.html) -- it's probably what you're looking for:
> Each entry in an SQLite table has a
> unique 64-bit signed integer key
> called the "rowid". The rowid is
> always available as an undeclared
> column named ROWID, OID, or \_ROWID\_ as
> long as those names are not also used
> by explicitly declared columns. If the
> table has a column of type INTEGER
> PRIMARY KEY then that column is
> another alias for the rowid.
>
> This routine returns the rowid of the
> most recent successful INSERT into the
> database from the database connection
> in the first argument. If no
> successful INSERTs have ever occurred
> on that database connection, zero is
> returned.
Hope it helps! (More info on ROWID is available [here](http://www.sqlite.org/lang_createtable.html#rowid) and [here](http://www.sqlite.org/lang_corefunc.html).) | creating blank field and receiving the INTEGER PRIMARY KEY with sqlite, python | [
"",
"python",
"sqlite",
""
] |
I'm working on a program that will tell what level a programmer is at (beginner, intermediate, or expert) based on 32 subjects from a test in Code Complete 2nd Edition. I'm using 32 check boxes and one method to tell which ones are clicked. The problem is that when I check to see if a check box's Checked property is equal to true, it gets the result before the check box actually becomes checked. Here is all of my source code (so far):
```
public partial class Main : Form
{
private int baseScore = 0;
public Main()
{
InitializeComponent();
}
private void buttonCalculateScore_Click(object sender, EventArgs e)
{
DetermineLevelOfProgrammer();
}
private void DetermineLevelOfProgrammer()
{
if ((baseScore >= 0) || (baseScore <= 14))
{
labelYourScore.Text += " " + baseScore.ToString();
labelDescription.Text = "You are a beginning programmer, probably in your first year of computer \n"+
"science in school or teaching yourself your first programming language. ";
}
// Do the other checks here!
}
// If checkbox is checked then increment base score,
// otherwise decrement base score.
private void checkBoxVariant_CheckedChanged(object sender, EventArgs e)
{
if (checkBoxVariant.Checked)
baseScore++;
else
baseScore--;
}
}
``` | I'm not sure what checkBoxVariant is exactly, but...
I think the problem is that checkBoxVariant is just 1 of the 32 CheckBoxes. I'm assuming you wired all 32 CheckChanged events to the checkBoxVariant\_CheckedChanged method.
What it should look like is:
```
// If checkbox is checked then increment base score,
// otherwise decrement base score.
private void checkBoxVariant_CheckedChanged(object sender, EventArgs e)
{
if (((CheckBox)sender).Checked)
baseScore++;
else
baseScore--;
}
```
sender is an Object that points to the actual Object that caused the event to be raised. Since anything could raise the event, it's just an Object that must be cast to a CheckBox. | if ((baseScore >= 0) || (baseScore <= 14))
Be careful - this will always evaluate to true. You may have intended to use &&. | Programming Skill Tester (Problem) | [
"",
"c#",
"winforms",
""
] |
I am trying to develop an Online editor (like FCKEditor/etc) but I have no idea how they work. I know that the WYSIWYG ones have Javascript and IFrames, but how do they actually work?
I'm especially curious about having a real-time preview of what's being typed into the editor. | RTE are usually (always?) implemented using an iframe. The document object which is available inside that iframe must have the property [designMode set to on](https://developer.mozilla.org/en/Rich-Text_Editing_in_Mozilla). After this point all you have to do in order to implement basic functionality like bold, italic, color, background, etc. are done using the execCommand method of the document object.
The main reason for using an iframe is that you won't lose focus of the selection when clicking styling buttons (Firefox allows setting this property only on iframes). Further more, the contentEditable attribute is not available in Firefox versions previous to 3.
Things get a little more complicated when you want to do fancy things with that RTE. At that point you must use [Range objects](https://developer.mozilla.org/En/DOM/Range) (which are implemented differently in the various browsers). | [FCKEditor](http://www.fckeditor.net/download) is open source and the source code is freely available.
The code for [the editor](https://blog.stackoverflow.com/2009/01/updated-wmd-editor/) used on Stackoverflow [is also available](http://github.com/derobins/wmd/tree/master)
It might be worth spending some time reading through the source code. People here would be happy to help explain any bits of code that were unclear. | How do online text editors work? | [
"",
"javascript",
"editor",
"wysiwyg",
"fckeditor",
""
] |
Oh, 2 things:
1) It is a console application.
2) I know it is in Danish, but it doesn't really matter, it's just an example of asking for some input. The text and variables do not matter.
Alright, consider this simple input:
It could be any sort of input question really.
```
Console.WriteLine("Hvad er dit kundenummer: (Kun hele tal tilladt)");
string inputKnr = Console.ReadLine();
kundenummer = Convert.ToInt16(inputKnr);
```
Now, what if the customer types something wrong? Such as a letter.
A try & catch would make sure the application does not break, but that is not the solution I want.
I want it to say that you did it wrong, try again.
Pretty classic right?
But what is the best way to solve this problem? I have thought of this:
```
bool fangetKundenummer = true;
while (fangetKundenummer)
{
Console.WriteLine("Hvad er dit kundenummer: (Kun hele tal tilladt)");
string inputKnr = Console.ReadLine();
try
{
kundenummer = Convert.ToInt16(inputKnr);
fangetKundenummer = false;
}
catch
{
Console.WriteLine("Fejl. Prøv igen");
}
}
```
But it just doesn't seem like the right way to do it.
Also, just to mention it, this little application I am playing with has **4 input questions in a row**. This would mean 4 times this nasty while() loop.
You could also write a function. Something like this (no reason to do it the right way, it's just to illustrate a concept):
```
static void verifyInput()
{
try
{
Console.WriteLine("question");
input = Console.ReadLine();
kundenummer = Convert.ToInt16(input)
}
catch
{
Console.WriteLine("Wrong. Do it over");
verifyInput(); //start the function all over
}
}
```
But you'd have to write a function for each and every input question, even though they might ask for exactly the same thing! (meaning perhaps all asking for an integer, but with a different question and variable).
This doesn't seem much better than the while() solution.
Does anyone have a clever idea? | Use [Int16.TryParse](http://msdn.microsoft.com/en-us/library/system.int16.tryparse.aspx) and the equivalents for other numeric types. All of these return a Boolean result to indicate success or failure for parsing, and take an `out` parameter which is set to the result of the parsing (or 0 in case of failure). In your case you may want to wrap the call in a method to keep prompting:
```
static Int16 PromptForInt16(string prompt)
{
while (true)
{
Console.Write(prompt);
Int16 result;
if (Int16.TryParse(Console.ReadLine(), out result))
{
return result;
}
Console.WriteLine("Sorry, invalid number entered. Try again.");
}
}
``` | You can use the TryParse pattern:
```
string s; // for "is not valid" message
short val; // final value
while(!short.TryParse(s=Console.ReadLine(), out val)) {
Console.WriteLine(s + " is not valid...");
}
``` | Best way to verify readline input in C#? | [
"",
"c#",
"readline",
"verify",
""
] |
I am working on a DAL that is getting a DataReader Asynchronously.
I would like to write a single method for transforming the DataReader into a DataSet. It needs to handle different schemas so that this one method will handle all of my fetch needs.
P.S. I am populating the SQLDataReader Asynchronously, please don't give answers that are getting rid of the DataReader. | Try [DataSet.Load()](http://msdn.microsoft.com/en-us/library/5fd1ahe2.aspx). It has several overloads taking an IDataReader. | [DataTable.Load()](http://msdn.microsoft.com/en-us/library/system.data.datatable.load.aspx) can be used for a generic approach.
```
do {
var table = new DataTable();
table.Load(reader);
dataset.Tables.Add(table);
} while(!reader.IsClosed);
``` | Best method for Populating DataSet from a SQLDataReader | [
"",
"sql",
"dataset",
"sqldatareader",
""
] |
So, the question is: I get some notifications I don't want to get. But I don't know for what file/dir I got them. Is there a way to know why a given notification was fired?
If you think about ReadDirectoryChangesW, please include a meaningful code sample. | If you would like Windows to tell you what specific file or subdirectory changed, you will need to use [ReadDirectoryChangesW](http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx). The asynchronous mode is fairly simple if you use a completion routine.
On the other hand, you will probably get better performance by using the slightly more complicated [I/O completion ports](http://msdn.microsoft.com/en-us/library/aa365198(VS.85).aspx) approach. I would recommend downloading Wes Jones' excellent [CDirectoryChangeWatcher](http://www.codeproject.com/KB/files/directorychangewatcher.aspx) source code as a starting point. There are several gotchas that his code will help you avoid, particularly in parsing the `FILE_NOTIFY_INFORMATION` records. | ~pseudocode
```
HANDLE handles[MAX_HANDLES];
std::string dir_array[MAX_HANDLES];
for i from 0 to MAX_HANDLES-1:
    handles[i] = FindFirstChangeNotification(dir_array[i]...);
nCount = MAX_HANDLES;
ret = WaitForMultipleObjects(handles, nCount ...);
// check if ret returns something between WAIT_OBJECT_0 and WAIT_OBJECT_0+nCount-1
if "so":
ret -= WAIT_OBJECT_0;
cout << "Directory " << dir_array[ret] << " changed" << endl;
```
See: <http://msdn.microsoft.com/en-us/library/ms687025(VS.85).aspx> | How to debug file change notifications obtained by FindFirstChangeNotification? | [
"",
"c++",
"debugging",
"winapi",
"notifications",
""
] |
As explained before, I'm currently working on a small linear algebra library to use in a personal project. Matrices are implemented as C++ vectors and element assignment ( a(i,j) = v; ) is delegated to the assignment to the vector's elements. For my project I'll need to solve tons of square equation systems and, in order to do that, I implemented the LU factorization (Gaussian Elimination) for square matrices. In the current implementation I'm avoiding recalculating the LU factorization each time by caching the L and U matrices; the problem is that since I'm delegating the element assignment to the vector, I can't find a way to tell if the matrix is being changed and whether to recalculate the factorization. Any ideas on how to solve this?
Thank you | ```
template<class T>
class matrix {
public:
class accessor {
public:
accessor(T& dest, matrix& parent) : dest(dest), parent(parent) { }
operator T const& () const { return dest; }
accessor& operator=(T const& t) { dest = t; parent.invalidate_cache(); return *this; }
private:
T& dest;
matrix& parent;
};
// replace those with actual implementation..
accessor operator()(int x, int y) {
static T t; return accessor(t, *this);
}
T const& operator()(int x, int y) const {
static T t; return t;
}
private:
void invalidate_cache() { cout << "Cache invalidated !!\n"; }
vector<T> impl;
};
```
thanks go to ##iso-c++ @ irc.freenode.net for some helpful corrections | If I understand correctly you need to check during the execution of your code whether a matrix has changed or not.
Well, vectors don't support such functionality. However, what you can do is write a Matrix class of your own, add such functionality to it and use it instead of vectors.
An example implementation could be:
```
class Matrix {
public:
Matrix() : hasChanged(false) {}
void setElement(int i, int j, double value) {
innerStorage[i][j] = value;
hasChanged = true;
}
double getElement(int i, int j) {
return innerStorage[i][j];
}
void clearHasChangedFlag() {
hasChanged = false;
}
private:
vector<vector<double> > innerStorage;
bool hasChanged;
};
``` | Caching policies and techniques for matrices | [
"",
"c++",
"caching",
""
] |
In Visual C++, when I build a dll, the output files are .dll and .lib.
Is the name of the dll built into the .lib file?
The reason I ask this question is: when I build my exe by importing this dll and run the exe, the exe tries to locate the dll to load it into the process address space.
As we just specify the library name (.lib file) in the project properties, how does the exe get to know the name of the dll?
Note: I ran dumpbin on the library file (.lib) and saw that it does not contain the name of the dll. | The LIB file is turned into an import table in the EXE. This *does* contain the name of the DLL.
You can see this if you run `dumpbin /all MyDLL.lib`. Note that `dumpbin MyDll.lib` by itself doesn't show anything useful: you should use `/all`.
This shows all of the sections defined in the .LIB file. You can ignore any `.debug` sections, because they wouldn't be present in a Release build. In the .LIB file, there are a collection of .idata sections. In the DLL project that I just built, the LIB file contains a `.idata$4` section which defines the symbols to be put in the EXE's import table, including the DLL name:
```
Archive member name at 83E: MyDll.dll/
497C3B9F time/date Sun Jan 25 10:14:55 2009
uid
gid
0 mode
2E size
correct header end
Version : 0
Machine : 14C (x86)
TimeDateStamp: 497C3B9F Sun Jan 25 10:14:55 2009
SizeOfData : 0000001A
DLL name : MyDll.dll
Symbol name : ?fnMyDll@@YAHXZ (int __cdecl fnMyDll(void))
Type : code
Name type : name
Hint : 2
Name : ?fnMyDll@@YAHXZ
``` | Yes, the lib contains the name of the DLL.
Functionally, the import library implements the `LoadLibrary` and `GetProcAddress` calls, and makes the exported functions available as if they were linked statically.
The search path is the same as documented for [LoadLibrary](http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx), the name is fixed, though. | When building a DLL file, does the generated LIB file contain the DLL name? | [
"",
"c++",
"winapi",
"linker",
""
] |
I'd like to be able to use PHP to search an array (or, better yet, a column of a MySQL table) for a particular string. However, my goal is for it to return the string it finds and the number of matching characters (in the right order), or some other measure of how good the match is, so that I can decide whether to display the top result by default or give the user options of the top few.
I know I can do something like
```
$citysearch = mysql_query(" SELECT city FROM $table WHERE city LIKE '$city' ");
```
but I can't figure out a way to determine how accurate it is.
The goal would be:
a) find "Milwaukee" if the search term were "milwakee" or something similar.
b) if the search term were "west", return things like "West Bend" and "Westmont".
Anyone know a good way to do this? | More searching led me to the Levenshtein distance and then to similar\_text, which proved to be the best way to do this.
```
similar_text("input string", "match against this", $pct_accuracy);
```
compares the strings and then saves the accuracy as a variable. The Levenshtein distance determines how many delete, insert, or replace functions on a single character it would need to do to get from one string to the other, with an allowance for weighting each function differently (eg. you can make it cost more to replace a character than to delete a character). It's apparently faster but less accurate than similar\_text. Other posts I've read elsewhere have mentioned that for strings of fewer than 10000 characters, there's no functional difference in speed.
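As an illustrative aside, the same idea can be sketched outside PHP with Python's standard-library `difflib` (this snippet is an added illustration, not part of the PHP solution below; `SequenceMatcher.ratio()` plays roughly the role of `similar_text`'s percentage):

```python
import difflib

# Ratio of matching characters (0.0-1.0), roughly similar_text's percentage / 100.
score = difflib.SequenceMatcher(None, "milwakee", "milwaukee").ratio()
print(round(score, 2))  # 0.94 - a high score despite the typo

# get_close_matches ranks candidates by the same measure (default cutoff 0.6).
cities = ["milwaukee", "west bend", "westmont"]
print(difflib.get_close_matches("milwakee", cities, n=1))  # ['milwaukee']
```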
I ended up using a modified version of something I found to make it work. This ends up saving the top 3 results (except in the case of an exact match).
```
$input = $_POST["searchcity"];
$accuracy = 0;
$runner1acc = 0;
$runner2acc = 0;
while ($cityarr = mysql_fetch_row($allcities)) {
$cityname = $cityarr[1];
$cityid = $cityarr[0];
$city = strtolower($cityname);
$diff = similar_text($input, $city, $tempacc);
// check for an exact match
if ($tempacc == '100') {
// closest word is this one (exact match)
$closest = $cityname;
$closestid = $cityid;
$accuracy = 100;
break;
}
if ($tempacc >= $accuracy) { // more accurate than current leader
$runner2 = $runner1;
$runner2id = $runner1id;
$runner2acc = $runner1acc;
$runner1 = $closest;
$runner1id = $closestid;
$runner1acc = $accuracy;
$closest = $cityname;
$closestid = $cityid;
$accuracy = $tempacc;
}
if (($tempacc < $accuracy)&&($tempacc >= $runner1acc)) { // new 2nd place
$runner2 = $runner1;
$runner2id = $runner1id;
$runner2acc = $runner1acc;
$runner1 = $cityname;
$runner1id = $cityid;
$runner1acc = $tempacc;
}
if (($tempacc < $runner1acc)&&($tempacc >= $runner2acc)) { // new 3rd place
$runner2 = $cityname;
$runner2id = $cityid;
$runner2acc = $tempacc;
}
}
echo "Input word: $input\n<BR>";
if ($accuracy == 100) {
echo "Exact match found: $closestid $closest\n";
} elseif ($accuracy > 70) { // for high accuracies, assumes that it's correct
echo "We think you meant $closestid $closest ($accuracy)\n";
} else {
echo "Did you mean:<BR>";
echo "$closestid $closest? ($accuracy)<BR>\n";
echo "$runner1id $runner1 ($runner1acc)<BR>\n";
echo "$runner2id $runner2 ($runner2acc)<BR>\n";
}
``` | You should check out [full text searching](http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html) in MySQL. Also check out Zend's port of the Apache Lucene project, [Zend\_Search\_Lucene](http://framework.zend.com/manual/en/zend.search.lucene.html). | PHP/mysql array search algorithm | [
"",
"php",
"mysql",
"match",
""
] |
I am looking at building a client server application in C# using winforms or WPF. The client application must be a local application because it has to interact with specialised hardware.
The architecture I'm looking for is that the client connects to a server port using TCP/IP. The client will then make requests to the server and the server will send responses to the client. The client will stay connected to the server while the user is logged in.
I have looked at web services and as far as I can figure out, WCF extends web services which means there is no way for the server to send a message to the client.
Am I incorrect about WCF? If not, what is the best way to accomplish this? | WCF supports [duplex messaging](https://web.archive.org/web/20120417170915/http://geekswithblogs.net:80/claeyskurt/archive/2007/09/05/115169.aspx) which should accomplish what you need.
See also: [http://msdn.microsoft.com/en-us/library/cc645027(VS.95).aspx](https://learn.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/cc645027%28v=vs.95%29) | Try reading [this article by Juval Lowy](http://msdn.microsoft.com/en-gb/magazine/cc163537.aspx) which is an excellent discussion of the issues in WCF messaging. He offers an alternative to duplexing which is his pub-sub framework. I'm going to have to recommend you buy his WCF book as well. He is one of the best technical authors I have come across on any subject. | Two way client server network communication | [
"",
"c#",
"wcf",
"networking",
""
] |
What is the linq equivalent of the following statement ?
```
IF NOT EXISTS(SELECT UserName FROM Users WHERE UserName='michael')
BEGIN
INSERT INTO Users (UserName) values ('michael');
END
```
Also, can you suggest any SQL-to-LINQ converters? I am currently using LINQPad, which does a great job in terms of writing LINQ code and lets you see the generated SQL; however, when I click the little LINQ sign, nothing is displayed. | It can't be done in LINQ2SQL with a single statement as the LINQ syntax and extension methods don't support inserts. The following (assuming a datacontext named `db`) should do the trick.
```
if (!db.Users.Any( u => u.UserName == "michael" ))
{
db.Users.InsertOnSubmit( new User { UserName = "michael" } );
db.SubmitChanges();
}
``` | Extension method that implements tvanfosson's solution:
```
/// <summary>
/// Method that provides the T-SQL EXISTS call for any IQueryable (thus extending Linq).
/// </summary>
/// <remarks>Returns whether or not the predicate conditions exists at least one time.</remarks>
public static bool Exists<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, bool>> predicate)
{
return source.Where(predicate).Any();
}
/// <summary>
/// Method that provides the T-SQL EXISTS call for any IQueryable (thus extending Linq).
/// </summary>
/// <remarks>Returns whether or not the predicate conditions exists at least one time.</remarks>
public static bool Exists<TSource>(this IQueryable<TSource> source, Expression<Func<TSource, int, bool>> predicate)
{
return source.Where(predicate).Any();
}
```
The extension method would then be used:
```
bool exists = dataContext.Widgets.Exists(a => a.Name == "Premier Widget");
```
Although the `.Where().Any()` combination works sufficiently on its own, the named `Exists` method certainly helps the logic of the code read more clearly. | if exists statement in sql to linq | [
"",
"sql",
"sql-server",
"linq",
""
] |
I am working with a set of what is essentially Attribute/Value pairs (there's actually quite a bit more to this, but I'm simplifying for the sake of this question). Effectively you can think of the tables as such:
Entities (EntityID,AttributeName,AttributeValue) PK=EntityID,AttributeName
Targets (TargetID,AttributeName,AttributeValue) PK=TargetID,AttributeName
How would you query with SQL the set of EntityID,TargetID for which an Entity has all the attributes for a target as well as the corresponding value?
EDIT (DDL as requested):
```
CREATE TABLE Entities(
EntityID INTEGER NOT NULL,
AttributeName CHAR(50) NOT NULL,
AttributeValue CHAR(50) NOT NULL,
CONSTRAINT EntitiesPK PRIMARY KEY (EntityID,AttributeName)
);
CREATE TABLE Targets(
TargetID INTEGER NOT NULL,
AttributeName CHAR(50) NOT NULL,
AttributeValue CHAR(50) NOT NULL,
CONSTRAINT TargetsPK PRIMARY KEY (TargetID,AttributeName)
);
``` | Okay, I think after several tries and edits, this solution finally works:
```
SELECT e1.EntityID, t1.TargetID
FROM Entities e1
JOIN Entities e2 ON (e1.EntityID = e2.EntityID)
CROSS JOIN Targets t1
LEFT OUTER JOIN Targets t2 ON (t1.TargetID = t2.TargetID
AND e2.AttributeName = t2.AttributeName
AND e2.AttributeValue = t2.AttributeValue)
GROUP BY e1.EntityID, t1.TargetID
HAVING COUNT(e2.AttributeValue) = COUNT(t2.AttributeValue);
```
Test data:
```
INSERT INTO Entities VALUES
-- exact same attributes, should match
(1, 'Foo1', '123'),
(1, 'Bar1', '123'),
-- same attributes but different values, should not match
(2, 'Foo2', '456'),
(2, 'Bar2', '456'),
-- more columns in Entities, should not match
(3, 'Foo3', '789'),
(3, 'Bar3', '789'),
(3, 'Baz3', '789'),
-- fewer columns in Entities, should match
(4, 'Foo4', '012'),
(4, 'Bar4', '012'),
-- same as case 1, should match Target 1
(5, 'Foo1', '123'),
(5, 'Bar1', '123'),
-- one attribute with different value, should not match
(6, 'A', 'one'),
(6, 'B', 'two');
INSERT INTO Targets VALUES
(1, 'Foo1', '123'),
(1, 'Bar1', '123'),
(2, 'Foo2', 'abc'),
(2, 'Bar2', 'abc'),
(3, 'Foo3', '789'),
(3, 'Bar3', '789'),
(4, 'Foo4', '012'),
(4, 'Bar4', '012'),
(4, 'Baz4', '012'),
(6, 'A', 'one'),
(6, 'B', 'twox');
```
Test results:
```
+----------+----------+
| EntityID | TargetID |
+----------+----------+
| 1 | 1 |
| 4 | 4 |
| 5 | 1 |
+----------+----------+
```
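As an illustrative cross-check (a Python sketch added here, built from the test data above rather than part of the SQL solution itself), the condition "every entity attribute/value pair also appears in the target" reproduces the result table:

```python
# Entity/target attribute maps mirroring the test data above.
entities = {
    1: {"Foo1": "123", "Bar1": "123"},
    2: {"Foo2": "456", "Bar2": "456"},
    3: {"Foo3": "789", "Bar3": "789", "Baz3": "789"},
    4: {"Foo4": "012", "Bar4": "012"},
    5: {"Foo1": "123", "Bar1": "123"},
    6: {"A": "one", "B": "two"},
}
targets = {
    1: {"Foo1": "123", "Bar1": "123"},
    2: {"Foo2": "abc", "Bar2": "abc"},
    3: {"Foo3": "789", "Bar3": "789"},
    4: {"Foo4": "012", "Bar4": "012", "Baz4": "012"},
    6: {"A": "one", "B": "twox"},
}

# dict views support subset tests, so this is exactly the
# "all attributes present with equal values" condition.
matches = sorted(
    (e, t)
    for e, ea in entities.items()
    for t, ta in targets.items()
    if ea.items() <= ta.items()
)
print(matches)  # [(1, 1), (4, 4), (5, 1)]
```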
---
To respond to your comment, here is a query with the tables reversed:
```
SELECT e1.EntityID, t1.TargetID
FROM Targets t1
JOIN Targets t2 ON (t1.TargetID = t2.TargetID)
CROSS JOIN Entities e1
LEFT OUTER JOIN Entities e2 ON (e1.EntityID = e2.EntityID
AND t2.AttributeName = e2.AttributeName
AND t2.AttributeValue = e2.AttributeValue)
GROUP BY e1.EntityID, t1.TargetID
HAVING COUNT(e2.AttributeValue) = COUNT(t2.AttributeValue);
```
And here's the output, given the same input data above.
```
+----------+----------+
| EntityID | TargetID |
+----------+----------+
| 1 | 1 |
| 3 | 3 |
| 5 | 1 |
+----------+----------+
``` | I like this kind of question, but I think it is not unreasonable to hope that the OP provides at least create scripts for the table(s) and maybe even some sample data.
I like to hear who agrees and who disagrees. | Querying based on a set of Named Attributes/Values | [
"",
"sql",
"database",
"oracle",
"sql-match-all",
""
] |
How do I iterate between 0 and 1 by a step of 0.1?
This says that the step argument cannot be zero:
```
for i in range(0, 1, 0.1):
print(i)
``` | Rather than using a decimal step directly, it's much safer to express this in terms of how many points you want. Otherwise, floating-point rounding error is likely to give you a wrong result.
Use the [`linspace`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) function from the [NumPy](http://en.wikipedia.org/wiki/NumPy) library (which isn't part of the standard library but is relatively easy to obtain). `linspace` takes a number of points to return, and also lets you specify whether or not to include the right endpoint:
```
>>> np.linspace(0,1,11)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
>>> np.linspace(0,1,10,endpoint=False)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
```
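If NumPy isn't available, the same generate-integers-then-scale idea works in plain Python (a sketch added for illustration; dividing each integer once avoids the accumulated rounding error of repeated addition):

```python
# 11 evenly spaced points from 0 to 1 inclusive, without NumPy.
points = [i / 10 for i in range(11)]
print(points)  # [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
```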
If you really want to use a floating-point step value, use `numpy.arange`:
```
>>> import numpy as np
>>> np.arange(0.0, 1.0, 0.1)
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
```
Floating-point rounding error *will* cause problems, though. Here's a simple case where rounding error causes `arange` to produce a length-4 array when it should only produce 3 numbers:
```
>>> numpy.arange(1, 1.3, 0.1)
array([1. , 1.1, 1.2, 1.3])
``` | [`range()`](https://docs.python.org/3/library/functions.html#func-range) can only do integers, not floating point.
Use a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) instead to obtain a list of steps:
```
[x * 0.1 for x in range(0, 10)]
```
More generally, a [generator](https://docs.python.org/3/howto/functional.html#generators) comprehension minimizes memory allocations:
```
xs = (x * 0.1 for x in range(0, 10))
for x in xs:
print(x)
``` | How do I use a decimal step value for range()? | [
"",
"python",
"floating-point",
"range",
""
] |
I tried to execute the `DESCRIBE` command via a database link, but this was the return message:
```
DESCRIBE <table>@<database>;
ERROR:
------------------------------------
ERROR: object <table> does not exist
1 rows selected
```
A `SELECT` on this table works well.
**Does Oracle permit `DESCRIBE` via a database link?**
I'm using the Oracle SQL Developer 1.5.1.
Edit:
Is there another option to describe a table?
Thanks in advance! | You could do something with the [all\_tab\_columns](http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/statviews_2091.htm) table to get some table information.
```
select column_name, data_type from all_tab_columns@<database> where table_name = 'TABLE_NAME';
``` | I think DESCRIBE is a SQL\*Plus feature. See [here](http://www.ss64.com/ora/desc.html). | DESCRIBE via database link? | [
"",
"sql",
"oracle",
""
] |
I'm creating a class named `TetraQueue` that inherits from the `System.Collections.Generic.Queue` class, overriding the `Dequeue` method. Here is the code:
```
public class TetraQueue : Queue<Tetrablock>
{
public override Tetrablock Dequeue()
{
return base.Dequeue();
}
}
```
But when I try to compile this I get:
> Error 'TetraQueue.Dequeue()': no suitable method found to override (TetraQueue.cs)
Thanks in advance.
How do I know if a method is virtual (to avoid this kind of situation)? | Unfortunately the `Dequeue` method is not virtual, so you can't override it. | Two ways to know whether or not a method is virtual:
* Type in "override" in Visual Studio - it will offer you all the methods you're allowed to override.
* [View the documentation](http://msdn.microsoft.com/en-us/library/1c8bzx97.aspx) - Dequeue doesn't say it's virtual, so you can't override it.
EDIT: Others have suggested hiding the method. I strongly recommend that you *don't* do that unless you really, really have to. It's a recipe for difficult debugging sessions and frustration when inheritance *appears* to sometimes work and sometimes not, depending on the compile-time type of the reference. Blurgh. If you're not going to specialize the behaviour, use composition instead of inheritance. | How do I override a generic method? (C#) | [
"",
"c#",
".net",
"collections",
""
] |
I am writing a WordPress plugin.
I want to perform a redirect (after creating DB records from POST data, etc.) to another admin page.
Neither `header("Location: ...")` nor `wp_redirect()` works - I get
Warning: Cannot modify header information - headers already sent by
which happens for an obvious reason.
How do I properly perform a redirect in WordPress? | On your form action, add 'noheader=true' to the action URL. This will prevent the headers for the admin area from being outputted before your redirect. For example:
```
<form name="post" action="<?php echo admin_url('admin.php?page=your-admin-page&noheader=true'); ?>" method="post" id="post">
``` | If you still want to redirect from your plugin admin page to another admin page while using WP add\_page\* functions then, after processing your request, you can just echo something like this:
```
<script type="text/javascript">
window.location = '/whatever_page.php';
</script>
```
This just renders a javascript based redirect to "/whatever\_page.php" thus ensuring no trouble with headers already sent by WP as Chris Ballance already said.
Change "/whatever\_page.php" to something like "/wp-admin/admin.php?page=whatever\_page" | How to redirect to different admin page in Wordpress? | [
"",
"php",
"redirect",
"wordpress",
""
] |
Well, I guess this day had to come.
My client's website has been compromised and blacklisted by Google. When you load the main page this javascript gets automatically added to the bottom of the document:
```
<script type="text/javascript">var str='google-analytics.com';var str2='6b756c6b61726e696f6f37312e636f6d';str4='php';var str3='if';str='';for(var i=0;i<str2.length;i=i+2){str=str+'%'+str2.substr(i,2);}str=unescape(str);document.write('<'+str3+'rame width=1 height=1 src="http://'+str+'/index.'+str4+'?id=382" style="visibility: hidden;"></'+str3+'rame>');</script></head><body><iframe src="http://kulkarnioo71.com/index.php?id=382" style="visibility: hidden;" width="1" height="1"></iframe>
```
I haven't dissected it just yet but it's, quite obviously, an attacker trying to pose as google analytics. What I can't wrap my head around is that if I remove EVERY SINGLE LAST BIT of HTML from the main page, to the point that index.html is an empty document, the javascript STILL gets embedded. What gives? How is that possible?
**updates**
* The website is a very simple calendar application, runs on a $10/month godaddy unix account, MySQL, PHP.
* It is not a local thing specific to my computer as my client was the one that called me with the problem. Also happening on all the computers I have at home (4)
I'll go run a scan on the webserver...
**source identified**
Well, I found out where the javascript is coming from. I had foolishly only emptied the `template.html` file but still ran the script through my php templating system. Apparently, SOMEHOW the code above got appended to the bottom of my `index.php` and `main.php` files. How is this possible?
A little more background:
* It is a calendar application, as mentioned above, and it is used only by my client's small company. Login is required to do anything, and only 5 or so people have accounts. I can guarantee none of them would try any shenanigans. I obviously can't guarantee someone got a hold of their information and did try shenanigans, though.
* Sadly enough, I did make this website almost 4 years ago, so I am not exactly 100% confident I protected against everything kids are trying nowadays, but I still cannot understand how an attacker could have possibly gained access to the webserver to append this javascript to my php files. | A rogue HTTP Module (in IIS), or whatever the equivalent is for apache could prepend, append, or perhaps even modify content for any HTTP request, even for static files. This would suggest that the server itself has been compromised.
EDIT: If you let us know what type of web server you're using, we'll be able to make more specific suggestions for troubleshooting. | Are you serving any content from a SQL database? It's possible that the compromise was a SQL injection attack and your site content in the database has been replaced/amended with this script/tag. | client's website was attacked, eeek! | [
"",
"javascript",
"security",
""
] |
How do I initiate a Windows XP user Log Off from a C# app? The action of my app should produce the same result as clicking "Log Off" in XP start menu -- it's fine if the system asks for a confirmation.
Is there a .Net or an unmanaged API for this? | You could P/Invoke ExitWindowsEx:
> <http://www.pinvoke.net/default.aspx/user32/ExitWindowsEx.html>
Pulling it all together:
```
using System;
using System.Runtime.InteropServices;
class Class1
{
    [DllImport("user32.dll")]
    static extern bool ExitWindowsEx(ExitWindows uFlags, ShutdownReason dwReason);
[STAThread]
static void Main(string[] args)
{
ExitWindowsEx(ExitWindows.LogOff, ShutdownReason.MajorOther | ShutdownReason.MinorOther);
}
}
[Flags]
public enum ExitWindows : uint
{
// ONE of the following five:
LogOff = 0x00,
ShutDown = 0x01,
Reboot = 0x02,
PowerOff = 0x08,
RestartApps = 0x40,
// plus AT MOST ONE of the following two:
Force = 0x04,
ForceIfHung = 0x10,
}
[Flags]
enum ShutdownReason : uint
{
MajorApplication = 0x00040000,
MajorHardware = 0x00010000,
MajorLegacyApi = 0x00070000,
MajorOperatingSystem = 0x00020000,
MajorOther = 0x00000000,
MajorPower = 0x00060000,
MajorSoftware = 0x00030000,
MajorSystem = 0x00050000,
MinorBlueScreen = 0x0000000F,
MinorCordUnplugged = 0x0000000b,
MinorDisk = 0x00000007,
MinorEnvironment = 0x0000000c,
MinorHardwareDriver = 0x0000000d,
MinorHotfix = 0x00000011,
MinorHung = 0x00000005,
MinorInstallation = 0x00000002,
MinorMaintenance = 0x00000001,
MinorMMC = 0x00000019,
MinorNetworkConnectivity = 0x00000014,
MinorNetworkCard = 0x00000009,
MinorOther = 0x00000000,
MinorOtherDriver = 0x0000000e,
MinorPowerSupply = 0x0000000a,
MinorProcessor = 0x00000008,
MinorReconfig = 0x00000004,
MinorSecurity = 0x00000013,
MinorSecurityFix = 0x00000012,
MinorSecurityFixUninstall = 0x00000018,
MinorServicePack = 0x00000010,
MinorServicePackUninstall = 0x00000016,
MinorTermSrv = 0x00000020,
MinorUnstable = 0x00000006,
MinorUpgrade = 0x00000003,
MinorWMI = 0x00000015,
FlagUserDefined = 0x40000000,
FlagPlanned = 0x80000000
}
``` | Try calling the "ExitWindowsEx" API function with the constant "EWX\_LOGOFF" or 0. API Guide tells me that the second parameter is ignored. For instance,
```
//The import
using System.Runtime.InteropServices;
// The declaration
[DllImport("user32.dll")]
public static extern int ExitWindowsEx(int uFlags, int dwReserved);
// The call
ExitWindowsEx(0, 0);
``` | Log off user from Win XP programmatically in C# | [
"",
"c#",
".net",
"windows",
"logoff",
""
] |
How do I print the output of the following code to a .txt file?
```
y = '10.1.1.' # /24 network,
for x in range(255):
x += 1
print y + str(x) # not happy that it's in string, but how to print it into a.txt
```
There's copy paste, but would rather try something more interesting. | ```
f = open('myfile.txt', 'w')
for x in range(1, 256):  # hosts 1-255, matching the loop in the question
    ip = "10.1.1.%d\n" % x
f.write(ip)
f.close()
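# A variant (added for illustration) using a with-statement so the file is
# closed automatically, iterating 1-255 to match the question's loop:
with open('myfile.txt', 'w') as out:
    for host in range(1, 256):
        out.write("10.1.1.%d\n" % host)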
``` | scriptname.py >> output.txt | Outputting to a text file | [
"",
"python",
"text",
""
] |
This has always confused me. It seems like this would be nicer:
```
["Hello", "world"].join("-")
```
Than this:
```
"-".join(["Hello", "world"])
```
Is there a specific reason it is like this? | It's because any iterable can be joined (e.g, list, tuple, dict, set), but its contents and the "joiner" *must be* strings.
For example:
```
'_'.join(['welcome', 'to', 'stack', 'overflow'])
'_'.join(('welcome', 'to', 'stack', 'overflow'))
```
```
'welcome_to_stack_overflow'
```
Using something other than strings will raise the following error:
> TypeError: sequence item 0: expected str instance, int found | This was discussed in the [String methods... finally](http://mail.python.org/pipermail/python-dev/1999-June/095366.html "String methods... finally") thread in the Python-Dev archive, and was accepted by Guido. This thread began in Jun 1999, and `str.join` was included in Python 1.6 which was released in Sep 2000 (and supported Unicode). Python 2.0 (supported `str` methods including `join`) was released in Oct 2000.
* There were four options proposed in this thread:
+ `separator.join(items)`
+ `items.join(separator)`
+ `items.reduce(separator)`
+ `join` as a built-in function
* Guido wanted to support not only `list`s and `tuple`s, but all sequences/iterables.
* `items.reduce(separator)` is difficult for newcomers.
* `items.join(separator)` introduces unexpected dependency from sequences to str/unicode.
* `join()` as a free-standing built-in function would support only specific data types. So using a built-in namespace is not good. If `join()` were to support many data types, creating an optimized implementation would be difficult: if implemented using the `__add__` method then it would be O(n²).
* The separator string (`separator`) should not be omitted. Explicit is better than implicit.
Here are some additional thoughts (my own, and my friend's):
* Unicode support was coming, but it was not final. At that time UTF-8 looked most likely to replace UCS-2/-4. To calculate the total buffer length for UTF-8 strings, the method needs to know the character encoding.
* At that time, Python had already decided on a common sequence interface rule where a user could create a sequence-like (iterable) class. But Python didn't support extending built-in types until 2.2. At that time it was difficult to provide a basic `iterable` class (which is mentioned in another comment).
Guido's decision is recorded in a [historical mail](http://mail.python.org/pipermail/python-dev/1999-June/095436.html "historical mail"), deciding on `separator.join(items)`:
> Funny, but it does seem right! Barry, go for it...
> --Guido van Rossum | Why is it string.join(list) instead of list.join(string)? | [
"",
"python",
"string",
"list",
""
] |
Often I add an `Empty` method to my C++ objects to clear the internal state using code similar to the following.
```
class Foo
{
private:
int n_;
std::string str_;
public:
Foo() : n_(1234), str_("Hello, world!")
{
}
void Empty()
{
*this = Foo();
}
};
```
This seems to be better than duplicating code in the constructor, but I wondered if `*this = Foo()` is a common approach when wanting to clear an object? Are there any problems with this waiting to bite me on the backside? Are there any other better ways to achieve this sort of thing? | I'd let the constructor call my function instead:
```
class Foo
{
private:
int n_;
std::string str_;
public:
Foo()
{
Reset();
}
void Reset()
{
n_ = 1234;
str_ = "Hello, world!";
}
};
```
Yes, you're unnecessarily initializing the string as an empty string first, then doing an assignment, but this is much clearer. | Potential problems? How do you know that \*this really is a Foo? | Emptying a C++ object | [
"",
"c++",
"construction",
""
] |
I can't figure out how I can implement an Icon View in the WPF ListView (a view similar to the Windows Explorer). Searching on google I only found informations about implementing the GridView but no clues about the Icon View. I'm not talking about `System.Windows.Form.ListView` but `System.Windows.Controls.ListView`.
Perhaps there is another control to do that? I didn't find anything relevant about this.
I've only found some people that build the icon view by hand using the listbox and changing the paneltemplate and the icontemplate. I can't believe this is the only way to do it.
Any clues?
Thanks in advance | Same as Tanveer Badar's answer, but with a WrapPanel instead of a UniformGrid. Set the following in your listbox:
```
ScrollViewer.HorizontalScrollBarVisibility="Disabled"
ScrollViewer.VerticalScrollBarVisibility="Auto"
```
to force the WrapPanel to wrap. | **EDIT** Appears i misunderstood what you meant with Explorer view...i have mine set to Details... ;) I'll leave my answer up here in case anyone makes the same mistake as i...
---
There is no such thing as an Icon View in WPF, you'll have to implement it yourself, but you dont have to do everything from scratch.
You can use the ListView in combination with a GridView and at least one CellTemplate for the column that contains the icon.
The general outline would look something like this for an Windows Explorer like view:
```
<ListView>
<ListView.Resources>
<DataTemplate x:Key="IconTemplate">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Image Grid.Column="0"/>
<TextBlock Grid.Column="1" Text="{Binding Name}"/>
</Grid>
</DataTemplate>
</ListView.Resources>
<ListView.View>
<GridView>
<GridViewColumn CellTemplate="{StaticResource IconTemplate}" Header="Name"/>
<GridViewColumn DisplayMemberBinding="{Binding Size}" Header="Size"/>
<GridViewColumn DisplayMemberBinding="{Binding Type}" Header="Type"/>
</GridView>
</ListView.View>
</ListView>
``` | WPF: ListView with icons view? | [
"",
"c#",
"wpf",
"listview",
""
] |
How can I have a WinForms program do some **specific thing** whenever a certain time-based condition is met?
I was thinking I could do something with two threads, where one thread runs the normal program, and the other thread merely loops through checking if the time-based condition is true yet or not, and when the condition is true it signals an event.
However I am unsure of the best way to do it. Where in the program would I call the two threads? Maybe I am thinking about it all wrong?
How would you do this?
**MORE INFO:**
What it has to do is check the data.dat file and see when the last time it was updated was. If that was a month or more ago, then do the **specific thing**. Could this still be done with a Timer?
**NOTE:**
I think it might be useful to note the difference between the System.Timers and the System.Windows.Forms.Timer... | I think you should use a [Timer](http://msdn.microsoft.com/en-us/library/system.timers.timer(VS.80).aspx) set to an intelligent interval to check if your time-based condition is met.
It depends what your time-based condition is. Is it a special time or an interval after which you want to do something special? If it's the second, you can just use the Timer and do what you have to do when the Timer.Elapsed event is fired.
---
Edit after your edit:
If you want an event to be fired every time the file changes, use a [FileSystemWatcher](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher(VS.80).aspx)
---
Edit2:
[Here's](http://msdn.microsoft.com/en-us/library/system.windows.forms.timer(VS.80).aspx) the difference between System.Windows.Forms.Timer and System.Timers:
> The Windows Forms Timer component is
> single-threaded, and is limited to an
> accuracy of 55 milliseconds. If you
> require a multithreaded timer with
> greater accuracy, use the Timer class
> in the System.Timers namespace. | You could add a **System.Windows.Forms.Timer** control to your Form (see the Components category in the toolbox).
Then set the timer's interval to some value (e.g. 1000) and add a handler for its Tick event. This handler will then be called once every 1000 milliseconds.
In the handler you can then check if the conditions are met and if yes, start your specific operation.
---
Update (after you updated the question):
To check if the last modification of a file was more than one month ago, you can use this code:
```
if (File.GetLastWriteTime("data.dat").AddMonths(1) < DateTime.Now)
{
// do whatever has to be done
// if it is a time-consuming task, start a new thread!
}
```
You can still put this into the Tick event handler of the timer component. But in that case it does probably not make sense to fire the timer every second.
Depending on your application (e.g. if it will be started quite often), another possibility would be to execute the above check during the startup of your application. | How can I have a WinForms program do some **specific thing** whenever a certain time-based condition is met? | [
"",
"c#",
".net",
""
] |
Does anyone have any good resources for refining my skills in developing class diagrams? Would like any strong tutorials in UML 2.0 ideally, but searches seem to be returning poor results.
Also, I'm currently revising for a final year exam and really want to get my teeth into a practice paper with a model answer. I've searched high and low without any luck - does anyone happen to have any suggestions on where I might find some?
Basically any resources to help push my revision along. Would relish the chance to look at more advance stuff and push the boundaries.
Any help appreciated.
Thanks,
Ricky | * Online UML Guide from StackOverFlow (No more available)
* [Practical UML: A Hands-On Introduction for Developers](http://dn.codegear.com/article/31863)
* Unified Modeling Language (UML) Tutorial (No more available)
* [UML Tutorial and Introduction](http://www.cragsystems.co.uk/ITMUML/) | [Martin Fowler Wrote a book](https://rads.stackoverflow.com/amzn/click/com/0321193687). The latest edition was published for UML2... | UML Class Diagram Resources | [
"",
"java",
"oop",
"resources",
"uml",
"class-design",
""
] |
What kinds of activities will trigger reflow of web page with DOM?
It seems there are different points of view. According to <http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/>, it happens
* When you add or remove a DOM node.
* When you apply a style dynamically (such as element.style.width="10px").
* When you retrieve a measurement that must be calculated, such as accessing offsetWidth, clientHeight, or any computed CSS value (via getComputedStyle() in DOM-compliant browsers or currentStyle in IE).
However, according to <http://dev.opera.com/articles/view/efficient-javascript/?page=3>, taking a measurement triggers reflow only when there is already a reflow action queued.
Does anybody have any more ideas? | Both articles are correct.
One can safely assume that whenever you're doing something that could reasonably require the dimensions of elements in the DOM be calculated that you will trigger reflow.
In addition, as far as I can tell, both articles say the same thing.
The first article says reflow happens when:
> When you **retrieve a measurement that must be calculated**, such as accessing **offsetWidth**, **clientHeight**, or any computed CSS value (via **getComputedStyle()** in DOM-compliant browsers or currentStyle in IE), while DOM changes are queued up to be made.
The second article states:
> As stated earlier, the browser may cache several changes for you, and reflow only once when those changes have all been made. However, note that **taking measurements of the element will force it to reflow**, so that the measurements will be correct. The changes may or may not not be visibly repainted, but the reflow itself still has to happen behind the scenes.
>
> This effect is created when measurements are taken using properties like **offsetWidth**, or using methods like **getComputedStyle**. Even if the numbers are not used, simply using either of these while the browser is still caching changes, will be enough to trigger the hidden reflow. If these measurements are taken repeatedly, you should consider taking them just once, and storing the result, which can then be used later.
I take this to mean the same thing they said earlier. Opera will try its hardest to cache values and avoid reflow for you, but you shouldn't rely on its ability to do so.
For all intents and purposes just believe what they both say when they say that all three types of interactions can cause reflow.
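To make the batching point concrete, here is a toy model (not browser code; just a mock element that counts forced layout flushes, under the assumption that a write marks layout dirty and a read flushes it) showing why interleaving writes with reads costs more than batching:

```javascript
// Mock element: setWidth() stands in for element.style.width = ...,
// and reading offsetWidth forces a "flush" only if layout is dirty.
function makeElement() {
  let dirty = false, flushes = 0;
  return {
    setWidth() { dirty = true; },
    get offsetWidth() {
      if (dirty) { flushes++; dirty = false; }
      return 100;
    },
    get flushCount() { return flushes; }
  };
}

// Interleaved read/write: every read finds dirty layout -> one flush per pass.
const interleaved = makeElement();
for (let i = 0; i < 5; i++) { interleaved.setWidth(); void interleaved.offsetWidth; }

// Batched: do all writes first, read once at the end -> a single flush.
const batched = makeElement();
for (let i = 0; i < 5; i++) batched.setWidth();
void batched.offsetWidth;

console.log(interleaved.flushCount, batched.flushCount); // 5 1
```

Real browsers cache more aggressively than this sketch, but the shape of the cost is the same: measure once and store the result rather than measuring inside a write loop.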
Cheers. | Look at the "Rendering triggered by Property Read Access" section of [Understanding Internet Explorer Rendering Behaviour](https://web.archive.org/web/20121120093321/http://blog.dynatrace.com:80/2009/12/12/understanding-internet-explorer-rendering-behaviour/), where the following code in IE will cause rendering activity.
```
function askforHeight () {
$("#lower").height();
}
``` | When does reflow happen in a DOM environment? | [
"",
"javascript",
"performance",
"dom",
"reflow",
""
] |
I'm looking for an algorithm, or at least theory of operation on how you would find similar text in two or more different strings...
Much like the question posed here: [Algorithm to find articles with similar text](https://stackoverflow.com/questions/246961/algorithm-to-find-similar-text), the difference being that my text strings will only ever be a handful of words.
Like say I have a string:
"Into the clear blue sky"
and I'm doing a compare with the following two strings:
"The color is sky blue" and
"In the blue clear sky"
I'm looking for an algorithm that can be used to match the text in the two, and decide on how close they match. In my case, spelling and punctuation are going to be important. I don't want them to affect the ability to discover the real text. In the above example, if the color reference is stored as "sky-blue", I want it to still be able to match. However, the 3rd string listed should be a BETTER match than the second, etc.
I'm sure places like Google probably use something similar with the "Did you mean:" feature...
**\* EDIT \***
In talking with a friend, he worked with a guy who wrote a paper on this topic. I thought I might share it with everyone reading this, as there are some really good methods and processes described in it...
Here's the [link to his paper](http://jeffhuang.com/sigir09_submit.pdf), I hope it is helpful to those reading this question, and on the topic of similar string algorithms. | I can't mark two answers here, so I'm going to answer and mark my own. The Levenshtein distance appears to be the correct method in most cases for this. But, it is worth mentioning [`j_random_hacker`'s](https://stackoverflow.com/users/47984/jrandomhacker) answer as well. I have used an implementation of LZMA to test his theory, and it proves to be a sound solution. In my original question I was looking for a method for short strings (2 to 200 chars), where the Levenshtein Distance algorithm will work. But, not mentioned in the question was the need to compare two (larger) strings (in this case, text files of moderate size) and to perform a quick check to see how similar the two are. I believe that this compression technique will work well but I have yet to study it to find at which point one becomes better than the other, in terms of the size of the sample data and the speed/cost of the operation in question. I think a lot of the answers given to this question are valuable, and worth mentioning, for anyone looking to solve a similar string ordeal like I'm doing here. Thank you all for your great answers, and I hope they can be used to serve others well too. | Levenshtein distance will not completely work, because you want to allow rearrangements. I think your best bet is going to be to find the best rearrangement with Levenshtein distance as the cost for each word.
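For reference, the per-word Levenshtein cost mentioned above is only a few lines (a Python sketch for brevity; the thread itself is language-agnostic):

```python
def levenshtein(a, b):
    # prev[j] holds the edit distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]
```

The classic example is `levenshtein("kitten", "sitting") == 3` (two substitutions and one insertion).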
To find the cost of rearrangement, kinda like the [pancake sorting problem](http://mathworld.wolfram.com/PancakeSorting.html). So, you can permute every combination of words (filtering out exact matches), with every combination of other string, trying to minimize a combination of permute distance and Levenshtein distance on each word pair.
*edit:*
Now that I have a second I can post a quick example (all 'best' guesses are on inspection and not actually running the algorithms):
```
original strings | best rearrangement w/ lev distance per word
Into the clear blue sky | Into the c_lear blue sky
The color is sky blue | is__ the colo_r blue sky
R_dist = dist( 3 1 2 5 4 ) --> 3 1 2 *4 5* --> *2 1 3* 4 5 --> *1 2* 3 4 5 = 3
L_dist = (2D+S) + (I+D+S) (Total Substitutions: 2, deletions: 3, insertion: 1)
```
(notice all the flips include all elements in the range, and I use ranges where Xi - Xj = +/- 1)
Other example
```
original strings | best rearrangement w/ lev distance per word
Into the clear blue sky | Into the clear blue sky
In the blue clear sky | In__ the clear blue sky
R_dist = dist( 1 2 4 3 5 ) --> 1 2 *3 4* 5 = 1
L_dist = (2D) (Total Subsitutions: 0, deletions: 2, insertion: 0)
```
And to show all possible combinations of the three...
```
The color is sky blue | The colo_r is sky blue
In the blue clear sky | the c_lear in sky blue
R_dist = dist( 2 4 1 3 5 ) --> *2 3 1 4* 5 --> *1 3 2* 4 5 --> 1 *2 3* 4 5 = 3
L_dist = (D+I+S) + (S) (Total Substitutions: 2, deletions: 1, insertion: 1)
```
Anyway you make the cost function the second choice will be lowest cost, which is what you expected! | Similar String algorithm | [
"",
"c++",
"c",
"algorithm",
"string",
""
] |
I was looking over [this code](http://www.ibm.com/developerworks/library/j-math1/index.html?ca=dgr-btw03JavaPart1&S_Tact=105AGX59&S_cmp=GRsitebtw03#hypotenuse) to calculate `math.sqrt` in Java. Why did they use hex values in some of the loops and normal values for variables? What benefits are there to use hex? | Because hex corresponds much more closely to bits that decimal numbers. Each hex digit corresponds to 4 bits (a nibble). So, once you've learned the bitmask associated with each hex digit (0-F), you can do something like "I want a mask for the low order byte":
```
0xff
```
or, "I want a mask for the bottom 31 bits":
```
0x7fffffff
```
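A quick sanity check of those two masks (shown in Python here just for the arithmetic; the same hex literals work unchanged in Java and C):

```python
value = 0x12345678
low_byte = value & 0xff          # mask keeps only the low-order byte
assert low_byte == 0x78
assert 0x7fffffff == 2**31 - 1   # bottom 31 bits set, i.e. Integer.MAX_VALUE
assert 0x7fffffff == 2147483647
```

Writing `0xff` makes "low byte" obvious at a glance, while `255` forces the reader to do the conversion mentally.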
Just for reference:
```
HEX BIN
0 -> 0000
1 -> 0001
2 -> 0010
3 -> 0011
4 -> 0100
5 -> 0101
6 -> 0110
7 -> 0111
8 -> 1000
9 -> 1001
A -> 1010
B -> 1011
C -> 1100
D -> 1101
E -> 1110
F -> 1111
``` | They probably used hex values because the numbers are easier to remember in hex. For example, 0x7fffffff is the same as 2147483647, but is a lot easier to remember. | Why use hex values instead of normal base 10 numbers? | [
"",
"java",
"c",
"hex",
"math.sqrt",
""
] |
Let's say I've got a control and I want to prevent it from being edited.
Setting the Enabled property of the control to False will work but the control appearance will change accordingly, usually to a difficult to read black over gray font. When readability is still important, this is a real problem.
For a TextBox, there are a few obvious fixes :
```
Textbox1.BackColor = Color.White;
```
or
```
Textbox1.ReadOnly= true; // instead of setting Enabled to false
```
but unfortunately this won't work for every controls (eg radio buttons)
Another solution is to let the Enabled property untouched, and to subscribe to the focus event like this (but this isn't a really elegant solution)
```
this.Textbox1.Enter += new System.EventHandler(this.Textbox1_Enter);
private void Textbox1_Enter(object sender, EventArgs e)
{
Textbox1.FindForm().ActiveControl = null;
}
```
Have you seen other ways of dealing with this problem? (And I mean real-world solutions; of course you can capture a screenshot of the control and display the copy over the control... :p) | There is an argument that interfering with standard Windows behaviour is confusing for the user, but that aside I have seen this done before, although more commonly in C++. You can subclass the control and handle paint messages yourself. When the control's enabled, just delegate the drawing to the base class. When the control's disabled, you can either let the base class draw itself and then do some custom drawing over the top, or you can just draw the entire thing yourself. I'd strongly recommend the first of these options. | Some controls can be set to ReadOnly which leaves them enabled, but unable to be changed. This may be what you're looking for.
That said you're probably going to be a world of hurt when your users start coming in confused because it looks like they should be able to edit the controls, but they can't. There's a reason they change their visual appearance -- it's to communicate the system's state to the user. Mess with that and they may end up very confused. | How would you disable .net Winforms Controls without changing their appearance? | [
"",
"c#",
".net",
"winforms",
""
] |
Do you think it is really necessary to provide an `IFormatProvider` in the method `String.Format(string, object)`?
Is it better to write full variant
```
String.Format(CultureInfo.CurrentCulture, "String is {0}", str);
```
or just
```
String.Format("String is {0}", str);
```
? | In general, you will want to use InvariantCulture if the string you are generating is to be persisted in a way that is independent of the current user's culture (e.g. in the registry, or in a file).
You will want to use CurrentCulture for strings that are to be presented in the UI to the current user (forms, reports).
Subtle bugs can arise if you use CurrentCulture where you should be using InvariantCulture: bugs that only come to light when you have multiple users with different cultures accessing the same registry entry or file, or if a user changes his default culture.
Explicitly specifying CurrentCulture (the default if the IFormatProvider argument is omitted) is essentially documentation that demonstrates that you have considered the above and that the string being generated should use the current user's culture. That's why FxCop recommends that you should specify the IFormatProvider argument. | If you do not specify the `IFormatProvider` (or equivalently pass `null`) most argument types will eventually fall through to being formatted according to `CultureInfo.CurrentCulture`. Where it gets interesting is that you can specify a custom `IFormatProvider` that can get first crack at formatting the arguments, or override the formatting culture depending on other context.
Note that `CultureInfo.CurrentCulture` affects argument formatting, not resource selection; resource selection is controlled by `CultureInfo.CurrentUICulture`. | Is CultureInfo.CurrentCulture really necessary in String.Format()? | [
"",
"c#",
".net",
"culture",
"string.format",
""
] |
I have a bunch of scripts - some in perl and some in bash - which are used for:
* Creating a database (tables, indexes,
constraints, views)
* Parsing spreadsheets and loading the data into the database
* Getting info about a bunch of files and loading that into the
database.
These scripts are used in conjunction with a much larger application that is written in java, and my manager has requested that I rewrite the scripts in java. His reasoning is that it is easier to work with, port, manage, understand, and support if it's all in one language, and that too many separate pieces is a design issue.
My initial reaction is that this is a bad idea. The scripts are beautifully concise and fast, and tasks that are trivial in the scripts - such as using regexs to find and replace invalid values - will be so much more verbose and very likely slower when done in java.
The one drawback of the scripts is that when they run on windows they require cygwin in order to run. Therefore I would like to give a counter proposition that I port all the bash scripts to perl so that they can run on windows without cygwin, and that I spend time organizing and documenting the scripts.
The problem is that a "gut reaction" type of response is not going to be enough to convince my manager. I come from a linux background, he from Windows, and we have some of the classic linux vs. windows differences in approaches.
So I have two questions:
1. Is my "gut reaction" correct? Is java slower, more verbose, and harder to maintain for database management, spreadsheet parsing, & file processing tasks?
2. If the answer to the first question is yes, what is the best way to present my case?
---
EDIT: Thanks everyone for the insights. I'd like to make one clarification: the scripts are not full-blown apps hidden away in obfuscated scripts. They are, for the most part, tasks that had been done manually that I automated via scripts and later embellished as the requirements developed. And the reason I used a scripting language instead of java to start with is because these tasks were *so* much easier to do in scripts. For example, one script runs a bunch of queries, formats the results, and outputs them to a file. How many LOC do you think it would take to do that in java? | The trouble is, your Gut reaction might be right, but that doesn't mean your manager is necessarily wrong - he probably has very good reasons for wanting it all done in java. Not least, if you fall under a bus, finding a replacement who knows java, perl and bash is going to be a lot harder than finding someone who knows java. And that's leaving aside the "they can only be run on a PC with cygwin installed" issue. And in all likelihood, performance isn't as big an issue as you think it is.
Having said that, your best bet is to spend a bit of time estimating the time it will take to port them all to java, so he can make an informed decision. And while you're at it, estimate how long it would take to port the bash scripts to perl **and** document them. Then let him decide. Remember - he doesn't get to spend the majority of his time coding, like you do, so it's only fair that he gets to make some decisions instead.
If he decides to proceed with the java option, port one of the scripts as well as you can, then report back with the two versions and, if you're right about the concision of the perl/bash scripts, you should be able to get some mileage from examining the two versions side by side.
**EDIT:** MCS, to be honest, it sounds to me as if those scripts are better implemented in perl and/or bash, rather than java, but that's not really the point - the point is how do you demonstrate that to your manager. If you address that, you address both the "gut reaction" question (btw, here's a tip - start referring to your gut reactions as "judgement, based on experience") and the "best way to present my case" question.
Now, the first thing you have to realise is that your manager is (probably) not going down this path just to piss you off. He almost certainly has genuine concerns about these scripts. Given that they're probably genuine concerns (and there's no point in going any further if they're not - if he's made his mind up to do this thing for some political reason then you're not going to change his mind, no matter what, so just get on with it and add it to your CV) it follows that you need to provide him with information that addresses his concerns if you're going to get anywhere. If you can do that then you're more than halfway to getting your own way.
So, what are his concerns? Based on your post, and on my judgement and experience :-) I'd say they are:
* maintainability
* that's it, just maintainability
I would also guess that his concerns are **not**:
* performance
I might be wrong about this last one, of course; in the last place I worked we had a SQL Server performance problem to do with replication that impacted the business's ability to provide customer support, so performance was an issue, so we addressed it. But generally speaking performance isn't as much of an issue as programmers think. If he's actually told you that performance is an issue, then factor it in. But if he hasn't mentioned it, forget it - it's probably only you that thinks the fact that these scripts run faster in perl/bash than they probably will in java matters at all.
So, maintainability. This comes down to answering the question "who will maintain these scripts if MCS falls under a bus?" and the supplementary question "will that cause me (i.e. your manager) problems?" (Aside: don't get hung up on the whole bus thing. "Falling under a bus" is a useful and diplomatic shorthand for all sorts of risks, e.g. "what happens if someone lures him away with a salary my company can't match?", "what happens if he decides to emigrate to Bermuda?", "what happens if I want to fire him?", "what happens if I want to promote him?", and, of course, "what happens if just he stops turning up for work one day for some unknown, possibly bus-related, reason?")
Remember, it's your manager's job to consider and mitigate these risks.
So, how to do that?
First, demonstrate how maintainable these scripts actually are. Or at least how maintainable they can be. Document them (in proper documents, not in the code). Train a colleague to maintain them (pick someone who would like to acquire/improve their perl and bash skills, and who your manager trusts). Refactor them to make them more readable (sacrificing performance and clever scripting tricks if necessary). If you want to continue using bash, create a document that provides step-by-step instructions for installing cygwin and bash. Regardless, document the process of installing perl, and running the scripts.
Second, pick one of the scripts and port it to java. Feel free to pick the script that best demonstrates the advantages of perl/bash over java, but **do the best job you can of porting it.** Use java.util.regex to do the same clever things you do in your perl. Document it to the standard that other in-house java utilities are documented. If performance is actually a factor, measure its performance relative to the perl/bash script.
Third, having been through that exercise, be honest with yourself about their relative maintainability. Ask the guy you trained what he thinks. If you still think the perl/bash scripts are more or less as maintainable as java versions would be, estimate the work involved in porting the remaining scripts to java as accurately as you can (you'll be able to do this pretty accurately now, because you'll have actually ported one). Then take the comparative scripts and the documentation and the estimates (and the performance figures, if appropriate) to your manager and go through them with him. Present your counter-proposals (a. leave them in perl and bash but document them and train a colleague, and b. port the bash scripts to perl, document them and train a colleague).
Finally, let your manager weigh up all the information and decide, and abide by his decision. In fact, don't just abide by his decision, accept the fact that he might be right. Just because you know more about perl/bash/java than him doesn't mean you necessarily know more about managing the team/department than he does. And if his decision is to stick with perl/bash, or port to perl, rejoice! Because you have not only got your own way, you have gone up in your manager's estimation and learned an invaluable lesson along the way. | It depends. I've found that text processing in Java can take up to 8 or 9 times the amount of code as in Perl. If these scripts need to be tightly integrated into the application then I would agree with your manager but if there just background tasks I'd look into using ActiveState on windows and rewriting the bash scripts in Perl. | Does it make sense to rewrite Perl and shell scripts in java? | [
"",
"java",
"perl",
"shell",
""
] |
One of my current requirements is to take in an Excel spreadsheet that the user updates about once a week and be able to query that document for certain fields.
As of right now, I run through and push all the Excel (2007) data into an XML file (just once, when they upload the file; after that I just use the XML) that then holds all of the needed data (not all of the columns in the spreadsheet) for querying via Linq-to-XML; note that the XML file is smaller than the Excel file.
Now my question is, is there any performance difference between querying an XML file with Linq and an Excel file with OledbConnection? Am I just adding another unneccesary step?
I suppose the followup question would be, is it worth it for ease of use to keep pushing it to xml.
The file has about 1000 rows. | For something that is done only once per week I don't see the need to perform any optimizations. Instead you should focus on what is maintainable and understandable both for you and whoever will maintain the solution in the future.
Use whatever solution you find most natural :-) | As I understand it the performance side of things stands like this for accessing Excel data.
Fastest to Slowest
1. Custom 3rd party vendor software using C++ directly on the Excel file type.
2. OleDbConnection method using a schema file if necessary for data types, treats Excel as a flatfile db.
3. Linq 2 XML method superior method for read/write data with Excel 2007 file formats only.
4. Straight XML data manipulation using the OOXML SDK and optionally 3rd party xml libraries. Again limited to Excel 2007 file formats only.
5. Using an Object[,] array to read a region of cells (using .Value2 prop), and passing an Object[,] array back again to a region of cells (again .Value2 prop) to write data.
6. Updating and reading from cells individually using the .Cells(x,y) and .Offset(x,y) prop accessors. | Speed difference between Linq to XML and Excel with a OledbConnection? | [
"",
"c#",
"xml",
"linq",
"excel",
"linq-to-xml",
""
] |
What's a jQuery-like and/or best-practices way of getting the original target of an event in jQuery (or in browser JavaScript in general)?
I've been using something like this
```
$('body').bind('click', function(e){
//depending on the browser, either srcElement or
//originalTarget will be populated with the first
//element that intercepted the click before it bubbled up
var originalElement = e.srcElement;
if(!originalElement){originalElement=e.originalTarget;}
});
```
which works, but I'm not pleased with the two line feature sniffing. Is there a better way? | You can do it in one line with `var originalElement = e.srcElement || e.originalTarget;` but it ain't pretty JQuery-like ;-)
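A quick check of that fallback with mock event objects (runnable in Node; a real handler receives the browser's event object, of course):

```javascript
// Mock events only: each object stands in for what one browser family populates.
function originalElement(e) {
  return e.srcElement || e.originalTarget;
}

console.log(originalElement({ srcElement: "ie-element" }));      // "ie-element"
console.log(originalElement({ originalTarget: "moz-element" })); // "moz-element"
```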
[Edit: But according to <http://docs.jquery.com/Events/jQuery.Event#event.target> `event.target` might do...] | I believe e.target is what you require
```
$('body').bind('click', function(e){
e.target // the original target
e.target.id // the id of the original target
});
```
If you go to the [jQuery in Action website](http://www.manning.com/bibeault/) and download the source code, take a look at
* **Chapter 4 -** dom.2.propagation.html
which deals with event propagation with bubble and capture handlers | Best way to get the Original Target | [
"",
"javascript",
"jquery",
"events",
"dom",
"delegates",
""
] |
I want to create a product catalog that allows for intricate details on each of the product types in the catalog. The product types have vastly different data associated with them; some with only generic data, some with a few extra fields of data, some with many fields that are specific to that product type. I need to easily add new product types to the system and respect their configuration, and I'd love tips on how to design the data model for these products as well as how to handle persistence and retrieval.
Some products will be very generic and I plan to use a common UI for editing those products. The products that have extensible configuration associated with them will get new views (and controllers) created for their editing. I expect all custom products to have their own model defined but to share a common base class. The base class would represent the generic product that has no custom fields.
Example products that need to be handled:
1. Generic product
* Description
2. Light Bulb
* Description
* Type (with an enum of florescent, incandescent, halogen, led)
* Wattage
* Style (enum of flood, spot, etc.)
3. Refrigerator
* Description
* Make
* Model
* Style (with an enum in the domain model)
* Water Filter information
+ Part number
+ Description
I expect to use MEF for discovering what product types are available in the system. I plan to create assemblies that contain product type models, views, and controllers, drop those assemblies into the bin, and have the application discover the new product types, and show them in the navigation.
1. Using SQL Server 2008, what would be the best way to store products of these various types, allowing for new types to be added without having to grow the database schema?
2. When retrieving data from the database, what's the best way to translate these polymorphic entities into their correct domain models?
---
## Updates and Clarifications
1. To avoid the Inner Platform Effect, if there is a database table for every product type (to store the products of that type), then I still need a way to retrieve all products that spans product types. How would that be achieved?
2. I talked with Nikhilk in more detail about his SharePoint reference. Specifically, he was talking about this: <http://msdn.microsoft.com/en-us/library/ms998711.aspx>. It actually seems pretty attractive. No need to parse XML; and there is some indexing that could be done allowing for simple and fast queries over the data. For instance, I could say "find all 75-watt light bulbs" by knowing that the first int column in the row is the wattage when the row represents a light bulb. Something (NHibernate?) in the app tier would define the mapping from the product type to the userdata schema.
3. Voted down the schema that has the Property Table because this could lead to lots of rows per product. This could lead to index difficulties, plus all queries would have to essentially pivot the data. | Use a Sharepoint-style UserData table, that has a set of string columns, a set of int columns, etc. and a Type column.
Then you have a list of types table that specifies the schema for each type - its properties, and the specific columns they map to in the UserData table.
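A tiny end-to-end sketch of that shape (SQLite via Python purely for illustration; all table and column names here are invented), including the "find all 75-watt light bulbs" query from the question, where the app tier knows that `Int1` means wattage for the light-bulb type:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ProductType (TypeId INTEGER PRIMARY KEY, Name TEXT);
-- Per-type mapping of generic columns to meaningful property names
CREATE TABLE TypeSchema (TypeId INTEGER, ColumnName TEXT, PropertyName TEXT);
-- One wide row per product; Int1/String1/... are reused across types
CREATE TABLE UserData (
    ProductId INTEGER PRIMARY KEY, TypeId INTEGER,
    Description TEXT, Int1 INTEGER, String1 TEXT);
""")
db.execute("INSERT INTO ProductType VALUES (1, 'LightBulb')")
db.execute("INSERT INTO TypeSchema VALUES (1, 'Int1', 'Wattage')")
db.executemany("INSERT INTO UserData VALUES (?, ?, ?, ?, ?)", [
    (10, 1, '75W flood', 75, 'flood'),
    (11, 1, '60W spot', 60, 'spot'),
])

# "Find all 75-watt light bulbs": Int1 is indexed the same way for every type,
# but its meaning comes from the TypeSchema row.
rows = db.execute(
    "SELECT ProductId, Description FROM UserData WHERE TypeId = 1 AND Int1 = 75"
).fetchall()
print(rows)
```

Because every type shares the same physical columns, a query that spans product types is just a plain `SELECT` over `UserData` with no joins or schema changes.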
With things like Azure and other utility computing storage you don't even need to define a table. Every store object is basically a dictionary. | I think you need to go with a data model like --
**Product Table**
* ProductId (PK)
* ProductName
* Details
**Property Table**
* PropertyId (PK)
* ProductId (FK)
* ParentPropertyId (FK - Self referenced to categorize properties)
* PropertyName
* PropertyValue
* PropertyValueTypeId
**Property Value Lookup Table**
* PropertyValueLookupId (PK)
* PropertyId (FK)
* LookupValue
And then have a dynamic view based on this. You could use the PropertyValueTypeId column to identify the type, using a convention like (0 = string, 1 = integer, 2 = float, 3 = image, etc.); but ultimately everything is stored untyped. You could also use this column to select the control template to render the corresponding property to the user.
You can use the Value lookup table to keep lookups for a specific property (so that user can choose it from a list) | Define Generic Data Model for Custom Product Types | [
"",
"sql",
"sql-server",
"database",
"database-design",
"data-modeling",
""
] |
Given an absolute URI/URL, I want to get a URI/URL which doesn't contain the leaf portion. For example: given <http://foo.com/bar/baz.html>, I should get <http://foo.com/bar/>.
The code which I could come up with seems a bit lengthy, so I'm wondering if there is a better way.
```
static string GetParentUriString(Uri uri)
{
StringBuilder parentName = new StringBuilder();
// Append the scheme: http, ftp etc.
parentName.Append(uri.Scheme);
// Appned the '://' after the http, ftp etc.
parentName.Append("://");
// Append the host name www.foo.com
parentName.Append(uri.Host);
// Append each segment except the last one. The last one is the
// leaf and we will ignore it.
for (int i = 0; i < uri.Segments.Length - 1; i++)
{
parentName.Append(uri.Segments[i]);
}
return parentName.ToString();
}
```
One would use the function something like this:
```
static void Main(string[] args)
{
Uri uri = new Uri("http://foo.com/bar/baz.html");
// Should return http://foo.com/bar/
string parentName = GetParentUriString(uri);
}
```
Thanks,
Rohit | This is the shortest I can come up with:
```
static string GetParentUriString(Uri uri)
{
return uri.AbsoluteUri.Remove(uri.AbsoluteUri.Length - uri.Segments.Last().Length);
}
```
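As a cross-check of the approach: dropping the last segment is the same as resolving the relative reference `"."` against the URL, which you can see in Python's urllib (a cross-language illustration only):

```python
from urllib.parse import urljoin

# Resolving "." keeps the directory part and drops the leaf segment.
parent = urljoin("http://foo.com/bar/baz.html", ".")
print(parent)  # http://foo.com/bar/
```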
If you want to use the Last() method, you will have to include System.Linq. | Did you try this? Seems simple enough.
```
Uri parent = new Uri(uri, "..");
``` | Getting the parent name of a URI/URL from absolute name C# | [
"",
"c#",
"uri",
""
] |
Is it possible to get a list of the user defined functions in JavaScript?
I'm currently using this, but it returns functions which aren't user defined:
```
var functionNames = [];
for (var f in window) {
if (window.hasOwnProperty(f) && typeof window[f] === 'function') {
functionNames.push(f);
}
}
``` | I'm assuming you want to filter out native functions. In Firefox, `Function.toString()` returns the function body, which for native functions, will be in the form:
```
function addEventListener() {
[native code]
}
```
You could match the pattern `/\[native code\]/` in your loop and omit the functions that match. | As Chetan Sastry suggested in his answer, you can check for the existence of `[native code]` inside the stringified function:
```
Object.keys(window).filter(function(x)
{
if (!(window[x] instanceof Function)) return false;
return !/\[native code\]/.test(window[x].toString()) ? true : false;
});
```
Or simply:
```
Object.keys(window).filter(function(x)
{
return window[x] instanceof Function && !/\[native code\]/.test(window[x].toString());
});
```
In Chrome you can get all non-native variables and functions with:
```
Object.keys(window);
``` | List of global user defined functions in JavaScript? | [
"",
"javascript",
"function",
""
] |
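A self-contained sketch of the filtering approach from the answers above — the helper name and the stand-in object are invented here so the snippet can run outside a browser, where there is no `window`:

```javascript
// Keep only function-valued properties whose source does not
// contain '[native code]' when stringified.
function userDefinedFunctionNames(scope) {
  return Object.keys(scope).filter(function (name) {
    var value = scope[name];
    return typeof value === 'function' &&
           !/\[native code\]/.test(Function.prototype.toString.call(value));
  });
}

// Stand-in for `window`: one user-defined function, one native
// function, and one non-function property.
var fakeWindow = {
  myHelper: function () { return 1; },
  parseInt: parseInt,
  answer: 42
};

console.log(userDefinedFunctionNames(fakeWindow)); // [ 'myHelper' ]
```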
I am trying to read data off of a Windows serial port through Java. I have the javax.comm libraries and am able to get some data, but not correct data. When I read the port into a byte array and convert it to text, I get a series of characters but no real text string. I have tried decoding the byte array as both "UTF-8" and "US-ASCII". Does anyone know how to get real text out of this?
Here is my code:
```
while (inputStream.available() > 0) {
int numBytes = inputStream.read(readBuffer);
System.out.println("Reading from " + portId.getName() + ": ");
System.out.println("Read " + numBytes + " bytes");
}
System.out.println(new String(readBuffer));
System.out.println(new String(readBuffer, "UTF-8"));
System.out.println(new String(readBuffer, "US-ASCII"));
```
The output of the first three lines will not let me copy and paste (I assume because they are not printable characters). Here is the output as hex:
78786000e67e9e60061e8606781e66e0869e98e086f89898861878809e1e9880
I am reading from a Holux GPS device, which does output in string format. I know this for sure because I did it through C#.
The settings that I am using for communication, which I know are right from the C# app, are:
Baud rate: 9600
Data bits: 8
Stop bits: 1
Parity: none | I can't tell from your code, but it looks like you're using `java.io.InputStream` to read the data. You should use either `java.io.BufferedReader` or `java.io.InputStreamReader`.
There are many ways to do this, but this is what we use:
```
javax.comm.SerialPort port = ...{initialize your serial port}
BufferedReader portReader =
new BufferedReader(new InputStreamReader(port.getInputStream()));
try {
String line = portReader.readLine();
} catch(IOException e) { ... }
``` | It's likely that the connected device doesn't use a text protocol. Many devices use a binary message format that may have text embedded in it, but represents other information with more compact codes.
Please edit your question and provide more information, such as the device that you're communicating with through the serial port, and the output that you get from the code you have running. Another helpful output to add would be something that prints a hexadecimal representation of the data you've read. A quick-and-dirty method for that is the following:
```
import java.math.BigInteger;
...
System.out.println(new BigInteger(1, readBuffer).toString(16));
```
---
I haven't worked with Holux GPS, but I have written a Garmin interface with javax.comm. In that case, the unit uses a proprietary (binary) Garmin protocol by default, but the (text-based) NMEA 0183 protocol can be enabled on the device. Is it possible that you are receiving a proprietary Holux message format, and need to use some command on the GPS unit to switch protocols? | How do I read text from a serial port? | [
"",
"java",
"string",
"serial-port",
"arrays",
""
] |
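To see why the reader-based approach in the first answer recovers clean lines, it can be exercised without hardware by substituting an in-memory stream for `port.getInputStream()` — a sketch, with the class name and the NMEA-style sample sentences invented for illustration:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ReaderSketch {
    // Drains the stream line by line, as one would from port.getInputStream().
    static List<String> readLines(InputStream in) {
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, StandardCharsets.US_ASCII))) {
            List<String> lines = new ArrayList<>();
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
            return lines;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Stand-in for the serial port; real NMEA sentences end in CRLF.
        InputStream fakePort = new ByteArrayInputStream(
                "$GPGGA,123519,4807.038,N\r\n$GPRMC,123519,A\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
        System.out.println(readLines(fakePort));
    }
}
```

Note that `readLine()` also strips the CRLF terminators, which is exactly what raw `InputStream.read()` into a byte buffer does not do.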